Rob Kuijt's Testing Blog
Visual Studio Team System 2010 - Episode 4: Quality Check 
Friday, April 24, 2009, 10:04 AM - ALM, Quality, Testing, TMap®, Rosario
Posted by Rob Kuijt
In this episode I will discuss the different practices around the Quality Check, in order to make this important ALM check as efficient and effective as possible.

Previous episodes:
Visual Studio Team System 2010 - Episode 1: A Focus on Testing
Visual Studio Team System 2010 - Episode 2: No Risk No Test
Visual Studio Team System 2010 - Episode 3: The Lifecycle

In the last episode Clemens talked about the support VSTS2010 will give for collaboration at the artifact level. Different roles in the lifecycle work together on artifacts. Each of them adds knowledge, vision and ideas to the solution from their own viewpoint. These artifacts are accessible to every role in every phase of the project, adding value throughout the lifecycle. People are enabled to collaborate, making applications together: not only by telling each other what they are doing but, most importantly, by working seamlessly together on the application.


TMap® Quality Check
A subsequent measure for increasing the quality of the developed artifacts is an evaluation activity: for instance the review.

The review is a method of improving the quality of an artifact by evaluating the work against the requirements and/or guidelines and subjecting it to peer review.

The review of the requirements and/or design can be carried out as a static test activity before the coding starts.
In the review, the following points can be checked, independently of the set requirements:
1. Has the artifact been realized in accordance with the assignment? For example, are the requirements laid down in the technical design realized correctly, completely and demonstrably?
2. Does the artifact meet the following criteria: internally consistent, meeting standards and norms and representing the best possible solution? ‘Best possible solution’ means the ‘best solution’ that could be found within the given preconditions, such as time and finance.
3. Does the artifact contribute to the project and architecture aims? Is the artifact consistent with other, related artifacts (consistency across the board)?
4. Is the artifact suitable for use in the next phase of the development?

Why?
Like testing, the Quality Check is a measure to provide insight into the quality of delivered products and the risks involved in taking them into the next phase of the lifecycle. If the quality is inadequate, timely measures can be taken, such as rework by the designers. However, there is never an unlimited quantity of resources and time. In theory, it is important to relate the Quality Check effort to the expected risks. A pragmatic approach to determine the Quality Check effort is to look at some past projects and answer the question:

"How many defects, detected in Acceptance Tests, could have been found much earlier, if we had done a Quality Check? (According to the above points)"
In my experience, 10-20% of the defects could have been found much earlier in the lifecycle if the Quality Check had been done properly. And because defects found in the Acceptance Test are quite expensive, you only have to find 1 or 2 serious defects in the Quality Check to make it economically worthwhile. So my strong recommendation is:

7 Hints and tips
1. Performing a good Quality Check is a kind of inborn specialization. Find a person who is good at recognizing texts with a high risk of assumption and/or interpretation errors. In other words: find a pencil-pushing, nit-picky quality geek!
;-) Most professional testers are proud to have these qualifications.

2. Before checking an artifact, ask: what is the quality of its source? Is the source of the AUC (Artifact Under Check) ready? Authorized? Stable? If not, consider checking the source of the artifact as well, to quantify the changes that may still come in the (near) future.

3. If a previous version of the artifact is available, and the quality of that version is known, make use of a so-called comparison tool to find and check the differences. AND! Always check the consistency of the “change register”, especially when the “change register” is used in the next phase to implement the changes.
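
If no dedicated comparison tool is at hand, even a few lines of script give a first impression of what changed between two versions. A minimal PHP sketch (the file names are made up, and it assumes plain-text artifacts):

  <?php
  // List lines removed from or added to a plain-text artifact between versions.
  $old = file('design_v1.txt', FILE_IGNORE_NEW_LINES);
  $new = file('design_v2.txt', FILE_IGNORE_NEW_LINES);

  // array_diff() keeps the original keys, so $i is the original line index.
  foreach (array_diff($old, $new) as $i => $line) {
      echo '- removed (v1 line ' . ($i + 1) . "): $line\n";
  }
  foreach (array_diff($new, $old) as $i => $line) {
      echo '+ added (v2 line ' . ($i + 1) . "): $line\n";
  }
  ?>

This only flags the changed lines; checking that every change is also recorded in the “change register” remains a manual step.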

4. Combine the Quality Check with the estimation activity for the next phase. If the estimation is done by another person, let them work together!

5. Use a checklist as a reference! For your own protection: a checklist prevents too much attention being paid to the use of standards and correct spelling, or even to these aspects alone (this can be a cause of friction among the various people involved). Partly owing to the diversity of design techniques and information sources, it is not possible to create one general checklist per artifact type. Therefore, checklists should be created specific to the situation, per organization and per project. Of course you can use examples like Testing Requirements or Testing Use Cases as a starting point for creating your checklist.

6. Always make clear which checks you will perform. By communicating your checklist you can prevent a lot of misunderstandings about later findings.

7. Audi alteram partem (hear the other side). Don't report findings/defects without a fair hearing in which the author of the artifact is given the opportunity to respond to the "accusations" against his work.

Collaborate
For some, it is very tempting to do a review in their own silo: get the stuff to check, find as many defects as possible, receive applause for the prevented damage, and do that over and over again……
Wrong! Don’t be a Scrooge! (see my blog: "Does Scrooge exist?") An essential part of the Quality Check is the learning cycle. By performing Quality Checks, the quality of future AUC’s (Artifacts Under Check) should grow. So collaborate with the designer, information analyst or whoever made the artifact, and earn your applause for the improved quality of the artifacts instead of for the number of defects found.

In the next episode Clemens will explain how the tools support the Quality Check, as well as the collaboration around it, to get the optimum results.




Visual Studio Team System 2010 – Episode 2: No Risk No Test  
Tuesday, February 10, 2009, 08:08 PM - ALM, Quality, Testing, TMap®, Rosario
Posted by Rob Kuijt
In episode 1, Clemens introduced the focus of Visual Studio Team System 2010 on the collaborative effort between the tester and the other roles in the application lifecycle. In this second episode I will start with a short introduction to testing.

Testing is not an aim in itself! Testing is, in fact, a balancing act. What risks must be covered, what results are to be delivered, and how much time and money can be spent, based on rational and economic grounds?

The right test strategy balances risks and costs.


Testing supplies insight into the difference between the actual and the required status of an object. Where quality can roughly be described as 'meeting the requirements and expectations', testing delivers information on the quality.

In this, there is no difference between developers who are testing, specialist testers during system testing, or generalist testers during the final acceptance test. Choosing the right test strategy is a joint effort between every tester and the other roles in the application lifecycle. This collaboration is needed because, besides insight into the business risks, much (technical and test) knowledge is needed to find the most important defects as early as possible at the lowest price.

The collaboration doesn't stop after choosing the best test strategy. After designing the test cases, test execution must also be organized as a joint effort. Collaboration between the different roles in the application lifecycle is not self-evident. By nature, it seems developers and testers don't (want to) understand each other. With that attitude, it is difficult to discover that you need each other to deliver software of good quality.

...don’t (want to) understand each other.


The consequence of this virtual (and sometimes real) wall is calamitous for quality. Many bugs arise from wrong assumptions about, and interpretations of, the generally unclear specifications and requirements. Reporting, analyzing and resolving these bugs takes a lot of time, especially when they prove to be irreproducible or invalid.

An example: after a development project of five man-years, testing is done by a team of generalist testers. With much enthusiasm the test team executes the test cases they designed during the building of the system. After 3 months the verdict is given: a negative release advice. Because of 8 blocking and 22 big defects, the test team advises NOT to go into production. As you may understand, this was a big disappointment for both the project team and the business department. A taskforce was established to keep the delay (damage) under control. After the defect analysis the general feeling improved: once the test faults (20%), the functional wishes (20%) and the changes that had never been passed on to the testers (30%) were filtered out, the repair costs for the remaining 30% were approximately 200 hours. In fact the development team had done a great job. Better communication from both sides, before and during test execution, would probably have turned the verdict into a positive release advice worth celebrating. Much, much better than this cold shower!

The above paragraphs show that collaboration between tester and developer is important for success. If the collaboration between the different roles in the application lifecycle is not actively stimulated and facilitated, the proverbial wall will arise.

In the next episode Clemens shows us how Visual Studio Team System 2010 will support and stimulate this collaboration...

Testing or Documenting? 
Monday, January 12, 2009, 10:28 PM - Quality, Testing
Posted by Rob Kuijt
How sure is a company about the quality of its Calculation Component(s)? Are they bug-free? Even after, for instance, nine changes? Testing changed components is tricky: you never get the time to test a Calculation Component with lots of input values, because most of the testing is still done manually.

There are many test tools that could do this job much faster, but they are complex to use and mostly pretty expensive.
So I did some thinking…….I want to do lean and mean Calculation Component Analyzing!
Preferably: User-friendly, quick, low cost and in a way that the outcome is easy to check!

Personally I like pictures instead of figures. So I started some PHP programming, and voilà, these are my results:
The first version is a service that can analyze a “one parameter” Calculation Component. Let me give an example.
The test object is a web component with one input parameter. In my home-made analyzer service I enter the URL of the test object, the start input value, the end input value and the step size (see fig.1).


Fig.1 Input screen of my home made Analyzer

After sending the input, the analyzer performs, in this case, 21 calls to the test object, and responds with the following graphics:

Fig.2 Output screen of my home made Analyzer

This output gives a first impression of the calculation. It looks like: output=input*input (if input > zero) or output=input*input (if input >= zero); at this step size you can't tell which boundary applies.
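
For the curious: the heart of such an analyzer is little more than a loop around the test object. A stripped-down PHP sketch of the idea (the URL, parameter name, value range and PHPLOT calls are illustrative assumptions, not my actual code):

  <?php
  // Call the Calculation Component for a range of inputs and plot the responses.
  require_once 'phplot.php';                // open-source plotting library: PHPLOT

  $url   = 'http://example.com/calc.php';   // test object with one input parameter
  $start = -100; $end = 100; $step = 10;    // gives 21 calls, as in the example

  $data = array();
  for ($input = $start; $input <= $end; $input += $step) {
      // The component is assumed to answer each call with a bare number.
      $output = (float) file_get_contents($url . '?value=' . $input);
      $data[] = array('', $input, $output); // (label, x, y) row for PHPlot
  }

  $plot = new PHPlot(600, 400);
  $plot->SetDataType('data-data');          // x/y pairs instead of labelled columns
  $plot->SetDataValues($data);
  $plot->SetPlotType('lines');
  $plot->SetTitle('Calculation Component response');
  $plot->DrawGraph();
  ?>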

But to be sure, let's make the steps in the range smaller (step=10):

That’s strange! The input value 10 gives a response: zero!

Again, let's choose step=1:

Now I'm pretty sure that the calculation function is: output=input*input (if input greater than ten)

And for a last check (in this example) I choose step=0.1:

Yes, it still looks like: output=input*input (if input greater than ten)
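
In code, my hypothesis so far would read something like this (a guess, not the component's actual source; behaviour at and below ten is only confirmed for the sampled points):

  <?php
  // Hypothesis derived from the plots: squares above ten, zero otherwise.
  // Only input = 10 (response: zero) was explicitly observed at the boundary.
  function calc($input) {
      return ($input > 10) ? $input * $input : 0;
  }
  ?>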

Conclusion:
I think that this kind of functionality is valuable, not only for testing, but also for documenting Calculation Components.
On top of that, it is:
  • Easy to use (one simple input screen),
  • Low cost (I built the function in 2 hours, with the help of the open-source PHPLOT library),
  • Quick (the above analysis took me 3 minutes, including capturing the pictures for my blog),
  • Simple to read.

And it's FUN to play with!
Now I'll try to find some use for this kind of functionality, and I'll start thinking about how to handle (and present the outcome of) a “two parameter” Calculation Component.

Rob
Does Scrooge exist? 
Wednesday, November 26, 2008, 08:19 PM - ALM, Testing
Posted by Rob Kuijt
Fiction or reality? At an otherwise nice conference meeting, I met Ebenezer Scrooge!
I didn't know such Scrooge-type testers really exist!

:-((


Scrooge was fully focused on finding bugs, and if I say fully, I mean FULLY!
Scrooge was completely focused on his own world, collecting as many bugs as possible, and enjoying it with a kind of scary laughter..... I must say his test results seemed excellent: he was very fast in creating test cases, and he knows and uses more test techniques than I ever did. And he has, as we say it, a nose for finding bugs. But his eyes spat fire when I suggested helping the developers find the bugs earlier in the cycle. "Why should I destroy my own work?" Scrooge replied..... I must confess, I didn't have a response right away. I know that most (good) testers enjoy finding bugs. But such a fanatic, egocentric type was new and a complete surprise to me. In fact, I think that Scrooge-type testers, especially ones this fanatic, are a disgrace to the test profession I love.

So I tried to convince him to change his attitude. Of course I didn't succeed in the remaining 15 minutes we had. Probably he needs a visit from the ghosts of past, present and future!
;-)


For the Scrooges among us
For the Scrooges in the test world I have a message: finding bugs can earn applause from those around you, and it may look as if your manager is pleased with the extra test hours (he can send a bigger invoice). But in the long term no one (besides you) is happy with a Scrooge attitude. The business users don't get their systems on time, the developers won't help you when you need them, and if the project manager becomes aware of this attitude, he will kick you out (so your manager can't send any invoice at all). So broaden your small egocentric world! Adopt Application Lifecycle Management [ALM] and find bugs as early as possible with a collaboration-driven attitude.

For the non-Scrooges
How do we fight this irritating phenomenon? In my opinion, the best testers are the ones who (actively) help the developers build better software. Luckily some of the big test gurus of this world preach the same opinion (see the entry "measuring testers" in James Whittaker's weblog).

Wild thought
Can we make testers responsible for the quality of the software? For instance: is it possible to reward testers for decreasing defect rates? I think a kind of Collaboration-bonus rewarding mean time between failures and/or decreasing defect rates can work! Does anyone have a suggestion for the formulation of such a performance indicator or Collaboration-bonus?
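
Purely as a starting point for the discussion, one possible formulation in code (the equal weights and the numbers in the example are assumptions, nothing more):

  <?php
  // Illustrative Collaboration-bonus: rewards a growing mean time between
  // failures (MTBF) and a shrinking defect rate, release over release.
  function collaborationBonus($mtbfPrev, $mtbfNow, $defectsPrev, $defectsNow)
  {
      $mtbfGain   = ($mtbfNow - $mtbfPrev) / $mtbfPrev;
      $defectDrop = ($defectsPrev - $defectsNow) / $defectsPrev;
      // Equal weights for both measures; only improvement is rewarded.
      return max(0, 0.5 * $mtbfGain + 0.5 * $defectDrop);
  }

  // Example: MTBF grew from 40 to 50 days, defects per release fell from 30 to 24:
  echo collaborationBonus(40, 50, 30, 24);  // 0.225 -> 22.5% of the maximum bonus
  ?>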

I'll look for some like-minded friends and give it a try in the coming period. I’m sure this entry will be continued....
And again: Suggestions are very welcome!

Rob
We can test Verbal Diarrhea! 
Sunday, October 19, 2008, 02:55 PM - ALM, Quality, Testing, Rosario
Posted by Rob Kuijt
We, professional testers, are proud that we can create good test cases from bad requirements.....That's a great achievement.......We can save projects much time by starting the test process at an early stage in the lifecycle........Or not? Whom are we doing a favor?

Believe it or not, with test specification techniques like the 'Data Combination Test' it is possible to create good test cases from incomplete and/or vague requirements. By combining the (bad) requirements with the knowledge of domain specialists in creative team sessions, it is possible to create great test cases, with even the choice to cover the risks at different test depth levels.

So, if the requirements team runs out of time (or doesn't understand its own job properly);
No problem;
Send the stuff;
We can start right away with the preparation of the test cases.

It is fun to do, it saves time and we do find many defects with this approach!

GREAT job! or NOT? ............Is deriving good test cases from bad requirements professionalism?

Let's look at the Project level
Besides us testers, there are more parties trying to do their jobs on the basis of the requirements. For instance, can the development team build the software? ....Yes, they can! Most teams are very experienced in making assumptions and interpretations, so bad requirements are not a problem.
OK, and the project managers? Can they do their job? Yes, they can...not an easy job, and sometimes a project doesn't make it or has some delays, but what the heck, that's how it works in ICT!

GREAT job! or NOT?............What about the customers? Do we solve their problems?

Let's look at the Application Lifecycle Management [ALM] level
Try to answer this question: if the same bad requirements are sent to 10 different projects with the same assignment to build the needed Information System, what is the chance that the projects will deliver an Information System that solves the problems of the customer?..............I think that, depending of course on the amount of communication with the customer during the projects, the chances are not very great; maybe some projects will, but most of them will need more than one release....

And that's NOT a GREAT job!
I know....we testers can't solve this problem.... but what if we stop accepting Verbal Diarrhea as a test basis....


What will happen if testers refuse to make test cases from bad requirements?

Or even....what happens if testers, as a first step of the activity 'test case specification' (for instance during the testability review), create formal models like process flows or activity diagrams?
Problems like interpretations and/or assumptions surface instantly, and if these problems in the requirements can be solved before the development team starts building the software, it would make a big difference!!

In this 'First Model then Test' (FMTT) approach, we can use the following set of models:
  1. Process Flows and/or Activity Diagrams for sequential processes/activities,
  2. CRUD matrices for database manipulations,
  3. Pseudo Code for business rules and validations,
  4. State Transition Diagrams for state-dependent processes/activities (see the sketch below).
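
Once such a model is written down as data, deriving test cases from it becomes mechanical. A small illustrative PHP sketch for point 4, with a made-up document workflow as the state transition model:

  <?php
  // Derive test cases from a state transition model ("all transitions" coverage).
  $transitions = array(
      array('from' => 'draft',    'event' => 'submit',  'to' => 'review'),
      array('from' => 'review',   'event' => 'reject',  'to' => 'draft'),
      array('from' => 'review',   'event' => 'approve', 'to' => 'approved'),
      array('from' => 'approved', 'event' => 'reopen',  'to' => 'draft'),
  );

  // One test case per transition covers every arrow in the model once.
  foreach ($transitions as $n => $t) {
      printf("Test case %d: in state '%s', trigger '%s', expect state '%s'\n",
             $n + 1, $t['from'], $t['event'], $t['to']);
  }
  ?>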

Disadvantages of FMTT:
  • Investment comes before profit: the testability review will take longer,
  • You need to know more about models and modeling,
  • It's different from what we usually do,
  • Performing the tests is less complex, less creative.

Advantages of 'First Model then Test' (FMTT):
  • Finding Bugs As Early As Possible,
  • The development and test process is less human-dependent,
  • Fewer interpretations and assumptions are needed during building and testing,
  • Fewer bugs to find (some will consider this a disadvantage....),
  • It's fun to collaborate with the other parties in Application Lifecycle Management [ALM],
  • We testers help developers create higher quality systems.


Afraid of being bored by less complexity?
Try to take the next step: Model Based Development, Testing and/or Estimation!
I've already proved that generating test cases from models is possible (see previous articles on that subject).

Rob Kuijt

