Rob Kuijt's Testing Blog
Visual Studio Team System 2010 – Episode 2: No Risk No Test  
Tuesday, February 10, 2009, 08:08 PM - ALM, Quality, Testing, TMap®, Rosario
Posted by Rob Kuijt
In episode 1, Clemens introduced the focus of Visual Studio Team System 2010 on the collaborative effort between the tester and the other roles in the application lifecycle. In this second episode I will start with a short introduction to testing.

Testing is not an aim in itself! Testing is, in fact, a balancing act: which risks must be covered, which results are to be delivered, and how much time and money can be spent, based on rational and economic grounds?

The right test strategy balances risks against costs.


Testing provides insight into the difference between the actual and the required status of an object. Where quality can roughly be described as 'meeting the requirements and expectations', testing delivers information about that quality.

In this respect, there is no difference between developers who test, specialist testers during system testing, or generalist testers during the final acceptance test. Choosing the right test strategy is a joint effort between every tester and the other roles in the application lifecycle. This collaboration is needed because, besides insight into the business risks, a great deal of (technical and test) knowledge is needed to find the most important defects as early as possible at the lowest cost.

The collaboration doesn't stop after choosing the best test strategy. After the test cases are designed, test execution must also be organized as a joint effort. Collaboration between the different roles in the application lifecycle is not self-evident. By nature, it seems developers and testers don't (want to) understand each other. With that attitude, it is difficult to see that you need each other to deliver software of good quality.

...don’t (want to) understand each other.


The consequence of this virtual (and sometimes real) wall is calamitous for quality. Many bugs arise from wrong assumptions about, and interpretations of, the generally unclear specifications and requirements. Reporting, analyzing and resolving these bugs takes a lot of time, especially when they prove to be irreproducible or invalid.

An example: after a development project of 5 man-years, testing was done by a team of generalist testers. With much enthusiasm the test team executed the test cases they had designed while the system was being built. After 3 months the verdict was given: a negative release advice; because of 8 blocking and 22 major defects, the test team advised NOT to go into production. As you may understand, this was a big disappointment for both the project team and the business department. A taskforce was established to keep the delay (damage) under control. After the defect analysis the mood improved: once the test faults (20%), the functional wishes (20%) and the changes that had never been passed on to the testers (30%) were filtered out, the repair costs for the remaining 30% were approximately 200 hours. In fact, the development team had done a great job. Better communication from both sides, before and during test execution, would probably have turned the verdict into a positive release advice worth celebrating. Much, much better than this cold shower!

The above paragraphs show that collaboration between tester and developer is important for success. If the collaboration between the different roles in the application lifecycle is not actively stimulated and facilitated, the proverbial wall will arise.

In the next episode Clemens shows us how Visual Studio Team System 2010 will support and stimulate this collaboration...

Testing or Documenting? 
Monday, January 12, 2009, 10:28 PM - Quality, Testing
Posted by Rob Kuijt
How sure is a company about the quality of its Calculation Components? Are they bug-free? Even after, for instance, nine changes? Testing changed components is tricky: you never get the time to exercise a Calculation Component with lots of values, because most of the testing is still done manually.

There are many test tools that could do this job much faster, but they are complex to use and mostly pretty expensive.
So I did some thinking... I want to do lean and mean Calculation Component analyzing!
Preferably user-friendly, quick, low-cost, and in a way that makes the outcome easy to check!

Personally I prefer pictures to numbers. So I started some PHP programming, and voilà, these are my results:
The first version is a service that can analyze a “one parameter” Calculation Component. Let me give an example.
The test object is a web component with one input parameter. In my home-made analyzer service I enter the URL of the test object, the start input value, the end input value and the step size (see fig.1).


Fig.1 Input screen of my home made Analyzer

After sending the input, the analyzer performs, in this case, 21 calls to the test object and responds with the following graph:

Fig.2 Output screen of my home made Analyzer

This output gives a first impression of the calculation. It looks like output=input*input, although this graph alone cannot tell whether the boundary condition is 'input > zero' or 'input >= zero'.
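
Under the hood, such an analyzer needs little more than a loop that calls the test object for every input value and feeds the collected (input, output) pairs to a plotting library. Below is a minimal PHP sketch of that idea, not the actual service: the test object URL, the 'input' query parameter and the plain-text response format are assumptions of mine, and PHPlot is used simply because the original was built with its help.

<?php
// Minimal sketch of a "one parameter" Calculation Component analyzer.
// Assumptions (mine, not from the post): the test object is reachable over HTTP,
// takes its single input as a query parameter named "input", and returns the
// calculated value as plain text. PHPlot draws the graph.
require_once 'phplot.php';

// Values the input screen (fig.1) would collect.
$testObjectUrl = 'http://example.local/calc.php';   // hypothetical test object
$start = -1000.0;
$end   =  1000.0;
$step  =   100.0;                                   // 21 calls for this range

// Call the test object once per input value and collect (input, output) pairs.
$data = array();
for ($input = $start; $input <= $end; $input += $step) {
    $response = file_get_contents($testObjectUrl . '?' . http_build_query(array('input' => $input)));
    $data[] = array('', $input, (float) trim($response));   // PHPlot row: label, x, y
}

// Plot the pairs so the behaviour of the component can be judged at a glance.
$plot = new PHPlot(800, 600);
$plot->SetDataType('data-data');     // rows contain explicit x and y values
$plot->SetDataValues($data);
$plot->SetPlotType('linepoints');
$plot->SetTitle('Calculation Component analysis');
$plot->SetXTitle('input');
$plot->SetYTitle('output');
$plot->DrawGraph();                  // sends the PNG to the browser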

But to be sure, let's make the steps in the range smaller (step=10):

That’s strange! The input value 10 gives a response: zero!

Again, let's choose step=1:

Now I'm pretty sure that the calculation function is: output=input*input (if input greater than ten)

And for a last check (in this example) I choose step=0.1:

Yes, it still looks like: output=input*input (if input greater than ten)

Conclusion:
I think that this kind of functionality is valuable, not only for testing, but also for documenting Calculation Components.
On top of that, it is:
  • Easy to use (one simple input screen),
  • Low cost (I built the function in 2 hours, with the help of the open-source PHPLOT library),
  • Quick (the analysis above took me 3 minutes, including capturing the pictures for my blog),
  • Simple to read.

And it's FUN to play with!
Now I'll try to find some use for this kind of functionality, and I'll start thinking about how to handle (and how to present the outcome of) a “two parameter” Calculation Component.

Rob
We can test Verbal Diarrhea! 
Sunday, October 19, 2008, 02:55 PM - ALM, Quality, Testing, Rosario
Posted by Rob Kuijt
We, professional testers, are proud that we can create good test cases from bad requirements... That's a great achievement... We can save projects a lot of time by starting the test process at an early stage in the lifecycle... Or not? Whom are we doing a favor?

Believe it or not, with test specification techniques like the 'Data Combination Test' it is possible to create good test cases from incomplete and/or vague requirements. By combining the (bad) requirements with the knowledge of domain specialists in creative team sessions, it is possible to create great test cases, even with the option to cover the risks at different test depth levels.

So, if the requirements team runs out of time (or doesn't understand its own job properly);
No problem;
Send the stuff;
We can start right away with the preparation of the test cases.

It is fun to do, it saves time and we do find many defects with this approach!

GREAT job! Or NOT?... Is deriving good test cases from bad requirements professionalism?

Let's look at the Project level
Besides us testers, there are more parties trying to do their jobs on the basis of the requirements. For instance, can the development team build the software? ...Yes, they can! Most teams are very experienced in making assumptions and interpretations, so bad requirements are not a problem.
OK, and the project managers? Can they do their job? Yes, they can... not an easy job, and sometimes a project doesn't make it or suffers delays, but what the heck, that's how it works in ICT!

GREAT job! Or NOT?... What about the customers? Do we solve their problems?

Let's look at the Application Lifecycle Management [ALM] level
Try to answer this question: if the same bad requirements are sent to 10 different projects, each with the same assignment to build the needed Information System, what is the chance that the projects will deliver an Information System that solves the customer's problems?... I think that, depending of course on the amount of communication with the customer during the projects, the chances are not very great; maybe some projects will succeed, but most of them will need more than one release...

And that's NOT a GREAT job!
I know... we testers can't solve this problem... but what if we stop accepting Verbal Diarrhea as a test base...


What will happen if testers refuse to make test cases from bad requirements?

Or even... what happens if testers, as a first step of the activity 'test case specification' (for instance during the testability review), create formal models like process flows or activity diagrams?
Problems like interpretations and/or assumptions surface instantly, and if these problems in the requirements can be solved before the development team starts building the software, it makes a huge difference!

In this 'First Model then Test' (FMTT) approach, we can use the following set of models (a small sketch of how such a model can drive test cases follows the list):
  1. Process Flows and/or Activity Diagrams for sequential processes/activities,
  2. CRUD matrices for database manipulations,
  3. Pseudo Code for business rules and validations,
  4. State Transition Diagrams for state dependent processes/activities.
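
To make the last item in the list concrete: a state transition model fits in a very simple data structure, and even a few lines of code can then enumerate the transitions that each need at least one test case. A minimal sketch in PHP, with a made-up order workflow purely as illustration (the states and events are not from any real system):

<?php
// A state transition model captured as data; every row must be covered by at
// least one test case. The order workflow below is invented for illustration.
$transitions = array(
    // from state,   event,     expected next state
    array('New',      'submit',  'Pending'),
    array('Pending',  'approve', 'Approved'),
    array('Pending',  'reject',  'Rejected'),
    array('Approved', 'cancel',  'Cancelled'),
);

// Derive one logical test case per transition: given the start state,
// fire the event and check the resulting state.
foreach ($transitions as $i => $t) {
    list($from, $event, $to) = $t;
    printf("TC%02d: given state '%s', when event '%s', then expect state '%s'\n",
           $i + 1, $from, $event, $to);
}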

Disadvantages of FMTT:
  • Investment comes before profit: the testability review will take longer,
  • You need to know more about models and modeling,
  • It's different from what we usually do,
  • Performing the tests is less complex, less creative.

Advantages of 'First Model then Test' (FMTT):
  • Finding Bugs As Early As Possible,
  • The development and test process is less human-dependent,
  • Fewer interpretations and assumptions are needed during building and testing,
  • Fewer bugs to find (some will consider this a disadvantage...),
  • It's fun to collaborate with the other parties in Application Lifecycle Management [ALM],
  • We testers help developers create higher quality systems.


Afraid of being bored by less complexity?
Try to make the next step: Model-Based Development, Testing and/or Estimation!
I've already proved that generating test cases from models is possible (see previous articles on that subject).

Rob Kuijt

[ALM] & Business Continuity 
Sunday, July 27, 2008, 08:38 AM - ALM, Quality, Testing, TMap®, Rosario
Posted by Rob Kuijt
Application Lifecycle Management [ALM] should ensure that an organization experiences an improved "business as usual" when new and/or changed functionality is implemented. Other (older) industries can give this kind of assurance, so the ICT industry should follow (soon?). This article will, I hope, offer some next steps towards maturity, by giving [ALM] directions on how to prevent costly or even lethal disasters caused by bugs.

For [ALM], Business Continuity has two viewpoints:
  • [ALM] is part of the interdisciplinary concept, called Business Continuity Planning (BCP), used to create and validate a practiced logistical plan for how an organization will recover and restore partially or completely interrupted critical function(s) within a predetermined time after a disaster or extended disruption (see wikipedia for more information),
  • [ALM] must give assurance that developing and/or changing applications won't create disasters or extended disruptions! (I won't explain what software bugs can accomplish...).


In this entry I will give some thoughts concerning the second viewpoint:
How to prevent a major disruption caused by the implementation of a new or changed application?

Of course, the Acceptance Test is very important in our "battle" to prevent the worst from happening. But, as I stated earlier in my article "What's in the Box?", the Acceptance Test alone is not enough! Based on the Business Risks (Business Driven Test Management *), the Master Test Strategy in [ALM] must contain at least the following three pillars:
  1. Finding Bugs AEAP (As Early As Possible)
  2. Bug Prevention by Testing Requirements and Use Cases
  3. Gathering and Analyzing data to do Business Continuity Predictions

*) Business Driven Test Management gives the client grip on the test process, uses the client's language, delivers appropriate test coverage in the right spots and makes test results visible to the client (see TMap.net for more information)

Pillar 1: Finding Bugs AEAP (As Early As Possible)
Finding bugs as early as possible prevents changes to the software in the last phases of a project.
Recently I participated in an (early) evaluation at an organisation where the strategy of finding bugs as early as possible had been implemented for the complete software engineering department by choosing the Quality Levels, tuning the Test Coverage of the successive test levels and introducing Learning Cycles. They had great results!
See two earlier articles on this topic:

Problem solved? No, not yet: as the bugs diminished, it became clear that the number of late change requests and/or wishes for extending the functionality was disturbing the implementation of the applications. So it became obvious that the next pillar was becoming more important...

Pillar 2: Bug Prevention by Testing Requirements and Use Cases.
If software development is based on inaccurate requirements, then despite well-written code, the software will be unsatisfactory. No doubt the users will want to change the application before it is implemented. Changing an application in the last stages of a project generates huge risks.


Testing Requirements
Testing the requirements in the early stages of the project will minimize the changes to the application just before implementation. To give the testing of requirements a head start, I derived a checklist from the article "An Early Start to Testing: How to Test Requirements" (Suzanne Robertson); a small sketch of how the answers can be recorded per requirement follows the checklist.
  1. Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?
  2. Does the specification contain a definition of the meaning of every essential subject matter term within the specification?
  3. Is every reference to a defined term consistent with its definition?
  4. Is the context of the requirements wide enough to cover everything we need to understand?
  5. Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modeling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts taken place to find the undreamed of requirements?
  6. Is every requirement in the specification relevant to this system?
  7. Does the specification contain solutions posturing as requirements?
  8. Is the stakeholder value defined for each requirement?
  9. Is each requirement uniquely identifiable?
  10. Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?
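
The checklist becomes even more useful when the answers are recorded per requirement, so the review can be repeated after every change. A minimal PHP sketch of that idea (my own illustration, not part of Robertson's article; the requirement identifier and the selection of questions are hypothetical):

<?php
// Record checklist answers per requirement so the review is repeatable.
// Only a few of the ten questions are shown; extend the array as needed.
$checklist = array(
    'Q1'  => 'Does the requirement have a quality measure usable to test a solution?',
    'Q6'  => 'Is the requirement relevant to this system?',
    'Q9'  => 'Is the requirement uniquely identifiable?',
    'Q10' => 'Is the requirement tagged to all parts of the system where it is used?',
);

// One review record; in practice these would be stored per requirement and per version.
$review = array(
    'requirement' => 'REQ-042',   // hypothetical identifier
    'answers'     => array('Q1' => true, 'Q6' => true, 'Q9' => true, 'Q10' => false),
);

printf("Review of %s\n", $review['requirement']);
foreach ($checklist as $id => $question) {
    $ok = !empty($review['answers'][$id]);
    printf("  [%s] %s: %s\n", $ok ? 'PASS' : 'FAIL', $id, $question);
}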


Testing Use Cases
After the requirements have been tested, they evolve into a functional model (for instance Use Cases) of the required application. To test the Use Cases you can use the checklist I derived from the article "Use Cases and Testing" (Lee Copeland):
1) Syntax Testing
  • Complete:
    • Are all use case definition fields filled in? Do we really know what the words mean?
    • Are all of the steps required to implement the use case included?
    • Are all of the ways that things could go right identified and handled properly? Have all combinations been considered?
    • Are all of the ways that things could go wrong identified and handled properly? Have all combinations been considered?
  • Correct:
    • Is the use case name the primary actor's goal expressed as an active verb phrase?
    • Is the use case described at the appropriate black box/white box level?
    • Are the preconditions mandatory? Can they be guaranteed by the system?
    • Does the failed end condition protect the interests of all the stakeholders?
    • Does the success end condition satisfy the interests of all the stakeholders?
    • Does the main success scenario run from the trigger to the delivery of the success end condition?
    • Is the sequence of action steps correct?
    • Is each step stated in the present tense with an active verb as a goal that moves the process forward?
    • Is it clear where and why alternate scenarios depart from the main scenario?
    • Are design decisions (GUI, Database, …) omitted from the use case?
    • Are the use case "generalization," "include," and "extend" relationships used to their fullest extent but used correctly?
  • Consistent:
    • Can the system actually deliver the specified goals?
2) Domain Expert Testing
  • Complete:
    • Are all actors identified? Can you identify a specific person who will play the role of each actor?
    • Is this everything that needs to be developed?
    • Are all external system trigger conditions handled?
    • Have all the words that suggest incompleteness ("some," "etc."…) been removed?
  • Correct:
    • Is this what you really want? Is this all you really want? Is this more than you really want?
  • Consistent:
    • When we build this system according to these use cases, will you be able to determine that we have succeeded?
    • Can the system described actually be built?
3) Traceability Testing
  • Complete:
    • Do the use cases form a story that unfolds from highest to lowest levels?
    • Is there a context-setting, highest-level use case at the outermost design scope for each primary actor?
  • Correct:
    • Are all the system's functional requirements reflected in the use cases?
    • Are all the information sources listed?
  • Consistent:
    • Do the use cases define all the functionality within the scope of the system and nothing outside the scope?
    • Can we trace each use case back to its requirement(s)?
    • Can we trace each use case forward to its class, sequence, and/or state-transition diagrams?


Pillar 3: Gathering Data and Analyzing Trends to do Business Continuity Predictions
It is NOT possible to predict Business Continuity based on the testing process of the project concerned alone. It is important to get a better foundation for the decision to implement a changed or new application. In the Rosario TAP we are doing some research into Gathering Data and Analyzing Trends to make Business Continuity Predictions from three viewpoints (a small illustration of the kind of data involved follows the list):
  1. Fault Detection Trends
  2. Change Control Trends (Changes in Requirements, Specifications, LOC’s,...)
  3. Project Control Trends (Estimations, Budget, Overtime,...)
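
To give an idea of the kind of raw material these three viewpoints need, here is a small PHP sketch of a per-release data record (entirely illustrative; the fields and numbers are invented, not results from the Rosario research):

<?php
// Illustrative per-release record combining the three trend viewpoints.
// Collected over many releases, such records are the raw material for trend analysis.
$release = array(
    'release' => '2008.3',          // hypothetical release label
    'faults'  => array('review' => 12, 'system test' => 7, 'acceptance' => 2, 'production' => 1),
    'changes' => array('requirements' => 5, 'specifications' => 3, 'loc_changed' => 4200),
    'project' => array('estimated_hours' => 1800, 'actual_hours' => 2100, 'overtime_hours' => 90),
);

// A simple derived indicator: the share of faults found before acceptance.
$faults = $release['faults'];
$total  = array_sum($faults);
$early  = $faults['review'] + $faults['system test'];
printf("Release %s: %d%% of faults found before acceptance\n",
       $release['release'], $total > 0 ? round(100 * $early / $total) : 0);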

We’ve just started on this topic, so I will keep you posted in later entries.

Collaboration is a critical success factor in preventing a major disruption caused by the implementation of a new or changed application! All parties within [ALM] have to work together to create good test coverage from the earliest until the last phases of a project. I am sure that only when the Quality Levels, Learning Cycles and Metrics are in place can a good Business Continuity risk advice be given, ensuring that an organization experiences an improved "business as usual" when new and/or changed functionality is implemented.

Rob

Where are the Test Design Patterns? 
Sunday, July 6, 2008, 04:50 PM - Quality, Testing
Posted by Rob Kuijt
Triggered by the article "Common Test Patterns and Reuse of Test Designs" on the MSDN Test Center, I realized that we are very good at reinventing wheels!
Last week I explained the same solution for a test problem to two different test teams without referring to a Test Design Pattern at all...


So, let's make my life easier. There must be a giant collection of Test Patterns on the internet. If I find the right sites, all I have to do is refer to these Test Pattern collections...
A few hours later I woke up:
OK, there are Test Patterns out there, but they are not easy to find.
My first tip: Don’t search for "Test Patterns", otherwise you may find some... ;-)


It is better to search for "Test Design Patterns". After some time I found some promising books and, even better, some patterns.
I was hoping to find Test Design Patterns on a more abstract functional level, like "Search Engine Test Pattern", "Web Shop Test Pattern" or "Database Conversion Test Pattern". But nevertheless I am happy with the ones I found.

Because I didn't find a site with any kind of index of Test Design Patterns to refer to, I started one of my own. I spent my weekend making two Test Design Pattern indexes: Alphabetical and Grouped.

I'll keep looking for more, and if I can find the time, I'll try to create some "How to Test" articles on the more abstract test problems like testing search engines, web shops, database conversions and so on... Maybe these will grow into Test Design Patterns we can all use.

Rob

