Rob Kuijt's Testing Blog
Visual Studio Team System 2010 – Episode 2: No Risk No Test  
Tuesday, February 10, 2009, 08:08 PM - ALM, Quality, Testing, TMap®, Rosario
Posted by Rob Kuijt
In episode 1, Clemens introduced the focus of Visual Studio Team System 2010 on the collaborative effort between the tester and the other roles in the application lifecycle. In this second episode I will start with a short introduction to testing.

Testing is not an aim in itself! Testing is, in fact, a balancing act: what risks must be covered, what results are to be delivered, and how much time and money can be spent, based on rational and economic grounds?

The right test strategy balances risks against costs.


Testing supplies insight into the difference between the actual and the required status of an object. Where quality can roughly be described as 'meeting the requirements and expectations', testing delivers information on quality.

In this, there is no difference between developers who test their own work, specialist testers during the system test and generalist testers during the final acceptance test. Choosing the right test strategy is a joint effort between every tester and the other roles in the application lifecycle. This collaboration is needed because, besides insight into the business risks, much technical and test knowledge is required to find the most important defects as early as possible at the lowest cost.

The collaboration doesn't stop after choosing the best test strategy. After the test cases are designed, test execution must also be organized as a joint effort. Collaboration between the different roles in the application lifecycle is not self-evident: by nature, it seems developers and testers don't (want to) understand each other. With that attitude, it is difficult to discover that you need each other to deliver software of good quality.

...don’t (want to) understand each other.


The consequence of this virtual (and sometimes real) wall is calamitous for quality. Many bugs arise from wrong assumptions and interpretations of the generally unclear specifications and requirements. Reporting, analyzing and resolving these bugs takes a lot of time, especially when the bugs prove to be irreproducible or invalid.

An example: after a development project of 5 man-years, testing is done by a team of generalist testers. With much enthusiasm the test team executes the test cases they designed during the building of the system. After 3 months the verdict is given: a negative release advice. Because of 8 blocking and 22 major defects, the test team advises NOT to go into production. As you may understand, this was a big disappointment for both the project team and the business department. A taskforce was established to keep the delay (and the damage) under control. After the defect analysis the general feeling improved: once the test faults (20%), the functional wishes (20%) and the changes that had never been passed on to the testers (30%) were filtered out, the repair costs for the remaining 30% were approximately 200 hours. In fact, the development team had done a great job. With better communication from both sides, before and during test execution, the verdict would probably have been a positive release advice worth celebrating. Much, much better than this cold shower!

The above paragraphs show that collaboration between the tester and the developer is important for success. If the collaboration between the different roles in the application lifecycle is not actively stimulated and facilitated, the proverbial wall will arise.

In the next episode Clemens will show us how Visual Studio Team System 2010 supports and stimulates this collaboration...

[ALM] & Business Continuity 
Sunday, July 27, 2008, 08:38 AM - ALM, Quality, Testing, TMap®, Rosario
Posted by Rob Kuijt
Application Lifecycle Management [ALM] should ensure that an organization experiences an improved "business as usual" when new and/or changed functionality is implemented. Other (older) industries can give such assurance, so the ICT industry should follow (soon?). This article will, I hope, offer some next steps towards maturity by giving [ALM] directions on how to prevent costly or even lethal disasters caused by bugs.

For [ALM], Business Continuity has two viewpoints:
  • [ALM] is part of the interdisciplinary concept called Business Continuity Planning (BCP), used to create and validate a practiced logistical plan for how an organization will recover and restore partially or completely interrupted critical functions within a predetermined time after a disaster or extended disruption (see Wikipedia for more information);
  • [ALM] must give assurance that developing and/or changing applications won't create such disasters or extended disruptions! (I won't explain what software bugs can accomplish...)


In this entry I will give some thoughts on the second viewpoint:
How can we prevent a major disruption caused by the implementation of a new or changed application?

Of course, the Acceptance Test is very important in our "battle" to prevent the worst from happening. But, as I stated earlier in my article "What's In the Box?", the Acceptance Test alone is not enough! Based on the business risks (Business Driven Test Management *), the master test strategy in [ALM] must contain at least the following three pillars:
  1. Finding Bugs AEAP (As Early As Possible)
  2. Bug Prevention by Testing Requirements and Use Cases
  3. Gathering and Analyzing data to do Business Continuity Predictions

*) Business Driven Test Management gives the client grip on the test process, uses the client's language, delivers the appropriate test coverage in the right spots and makes the test results visible to the client (see TMap.net for more information).

Pillar 1: Finding Bugs AEAP (As Early As Possible)
Finding bugs as early as possible prevents changes to the software in the last phases of a project.
Recently I participated in an (early) evaluation at an organisation where the strategy of finding bugs as early as possible was implemented for the complete software engineering department by choosing the Quality Levels, tuning the test coverage of the successive test levels and introducing Learning Cycles. They had great results!
See two earlier articles on this topic.

Problem solved? No, not yet. As the number of bugs diminished, it became clear that late change requests and wishes for extensions of the functionality were disturbing the implementation of the applications. So it became obvious that the next pillar was growing more important...

Pillar 2: Bug Prevention by Testing Requirements and Use Cases
If software development is based on inaccurate requirements, then despite well-written code the software will be unsatisfactory, and no doubt the users will want to change the application before it is implemented. Changing an application in the last stages of a project generates huge risks.


Testing Requirements
Testing the requirements in the early stages of the project will minimize the changes to the application just before implementation. To give the testing of requirements a head start, I derived the following checklist from the article "An Early Start to Testing: How to Test Requirements" by Suzanne Robertson. (A small sketch of how some of these checks can be automated follows the list.)
  1. Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?
  2. Does the specification contain a definition of the meaning of every essential subject matter term within the specification?
  3. Is every reference to a defined term consistent with its definition?
  4. Is the context of the requirements wide enough to cover everything we need to understand?
  5. Have we asked the stakeholders about conscious, unconscious and undreamed-of requirements? Can you show that a modeling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed-of requirements?
  6. Is every requirement in the specification relevant to this system?
  7. Does the specification contain solutions posturing as requirements?
  8. Is the stakeholder value defined for each requirement?
  9. Is each requirement uniquely identifiable?
  10. Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?
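Several of these checks (items 1, 8, 9 and 10 in particular) lend themselves to automation once the requirements are stored in a structured form. Below is a minimal sketch in Python; the record format and field names are my own assumptions, not something from Robertson's article.

# Hypothetical requirement records; the field names are illustrative only.
requirements = [
    {"id": "REQ-001", "text": "Response time under load",
     "quality_measure": "95% of requests answered within 2 seconds",
     "stakeholder_value": 8, "traces_to": ["UC-12", "UC-14"]},
    {"id": "REQ-002", "text": "The system should be fast",
     "quality_measure": None, "stakeholder_value": None, "traces_to": []},
]

def lint_requirements(reqs):
    """Flag requirements that fail checklist items 1, 8, 9 and 10."""
    problems = []
    seen_ids = set()
    for r in reqs:
        if not r.get("quality_measure"):        # item 1: no testable measure
            problems.append(f"{r['id']}: no quality measure")
        if r.get("stakeholder_value") is None:  # item 8: no stakeholder value
            problems.append(f"{r['id']}: no stakeholder value")
        if r["id"] in seen_ids:                 # item 9: identifier not unique
            problems.append(f"{r['id']}: duplicate identifier")
        seen_ids.add(r["id"])
        if not r.get("traces_to"):              # item 10: not tagged to the system
            problems.append(f"{r['id']}: not traced to any part of the system")
    return problems

for problem in lint_requirements(requirements):
    print(problem)

Such a script will never replace the conversation with the stakeholders (items 4, 5 and 6 cannot be automated), but it keeps the mechanical part of the checklist cheap to repeat.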


Testing Use Cases
After the requirements are tested, they evolve into a functional model (for instance Use Cases) of the required application. To test the Use Cases you can use the checklist I derived from the article "Use Cases and Testing" by Lee Copeland. (The traceability questions at the end are illustrated with a small sketch after the list.)
1) Syntax Testing
  • Complete:
    • Are all use case definition fields filled in? Do we really know what the words mean?
    • Are all of the steps required to implement the use case included?
    • Are all of the ways that things could go right identified and handled properly? Have all combinations been considered?
    • Are all of the ways that things could go wrong identified and handled properly? Have all combinations been considered?
  • Correct:
    • Is the use case name the primary actor's goal expressed as an active verb phrase?
    • Is the use case described at the appropriate black box/white box level?
    • Are the preconditions mandatory? Can they be guaranteed by the system?
    • Does the failed end condition protect the interests of all the stakeholders?
    • Does the success end condition satisfy the interests of all the stakeholders?
    • Does the main success scenario run from the trigger to the delivery of the success end condition?
    • Is the sequence of action steps correct?
    • Is each step stated in the present tense with an active verb as a goal that moves the process forward?
    • Is it clear where and why alternate scenarios depart from the main scenario?
    • Are design decisions (GUI, Database, …) omitted from the use case?
    • Are the use case "generalization," "include," and "extend" relationships used to their fullest extent but used correctly?
  • Consistent:
    • Can the system actually deliver the specified goals?
2) Domain Expert Testing
  • Complete:
    • Are all actors identified? Can you identify a specific person who will play the role of each actor?
    • Is this everything that needs to be developed?
    • Are all external system trigger conditions handled?
    • Have all the words that suggest incompleteness ("some," "etc."…) been removed?
  • Correct:
    • Is this what you really want? Is this all you really want? Is this more than you really want?
  • Consistent:
    • When we build this system according to these use cases, will you be able to determine that we have succeeded?
    • Can the system described actually be built?
3) Traceability Testing
  • Complete:
    • Do the use cases form a story that unfolds from highest to lowest levels?
    • Is there a context-setting, highest-level use case at the outermost design scope for each primary actor?
  • Correct:
    • Are all the system's functional requirements reflected in the use cases?
    • Are all the information sources listed?
  • Consistent:
    • Do the use cases define all the functionality within the scope of the system and nothing outside the scope?
    • Can we trace each use case back to its requirement(s)?
    • Can we trace each use case forward to its class, sequence, and/or state-transition diagrams?
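The traceability questions are easy to mechanize once use cases and requirements carry cross-references. A minimal sketch, again with invented example data:

# Invented example data: the requirements, and the use cases that claim to cover them.
requirements = {"REQ-001", "REQ-002", "REQ-003"}
use_cases = {
    "UC-01": {"covers": {"REQ-001"}},
    "UC-02": {"covers": {"REQ-002", "REQ-004"}},  # REQ-004 is outside the scope!
}

def check_traceability(reqs, ucs):
    covered = set().union(*(uc["covers"] for uc in ucs.values()))
    for req in sorted(reqs - covered):
        print(f"{req}: functional requirement not reflected in any use case")
    for name, uc in sorted(ucs.items()):
        for ghost in sorted(uc["covers"] - reqs):
            print(f"{name}: traces to {ghost}, which is not a known requirement")

check_traceability(requirements, use_cases)

Running this reports that REQ-003 is not reflected in any use case and that UC-02 claims functionality outside the scope of the system: exactly the completeness and consistency questions above.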


Pillar 3: Gathering Data and Analyzing Trends to do Business Continuity Predictions
It is NOT possible to predict Business Continuity based on the testing process of the project concerned alone. It is important to get a better foundation for the decision to implement a changed or new application. In the Rosario TAP we are doing some research into Gathering Data and Analyzing Trends to do Business Continuity Predictions from three viewpoints (a toy illustration of the first viewpoint follows the list):
  1. Fault Detection Trends
  2. Change Control Trends (Changes in Requirements, Specifications, LOC’s,...)
  3. Project Control Trends (Estimations, Budget, Overtime,...)
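As a rough illustration of the first viewpoint: a fault detection trend can be as simple as fitting a line through the weekly defect counts and checking whether the discovery rate is still rising as the release date approaches. A minimal sketch with invented numbers:

# Invented weekly defect counts for the six weeks before a planned release.
weeks = [1, 2, 3, 4, 5, 6]
defects_found = [14, 11, 9, 10, 4, 3]

def slope(xs, ys):
    """Least-squares slope: negative means the detection rate is falling."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

print(f"Defect detection trend: {slope(weeks, defects_found):+.1f} defects/week")

A clearly falling trend supports (but never proves) a go-live decision; a flat or rising trend is a Business Continuity warning signal, especially when combined with the change and project control trends of viewpoints 2 and 3.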

We’ve just started on this topic, so I will keep you posted in later entries.

Collaboration is a critical success factor in preventing a major disruption caused by the implementation of a new or changed application! All parties within [ALM] have to work together to create good test coverage from the earliest until the last phases of a project. I am sure that only when the Quality Levels, Learning Cycles and Metrics are in place can a good Business Continuity risk advice be given, to ensure that an organization experiences an improved "business as usual" when new and/or changed functionality is implemented.

Rob

Finding bugs AEAP (As Early As Possible)
Saturday, June 7, 2008, 07:56 PM - ALM, Quality, Testing, TMap®
Posted by Rob Kuijt
Analyzing and fixing a bug is, without doubt, a loss of time, and the later the bug is found, the greater the loss. Finding bugs AEAP (As Early As Possible) becomes more and more important, especially in the complex systems we build nowadays. So, let's join the knowledge of the Developer, the Tester and the Designer and tackle this challenge!


Of course we can train the individual team members to test their own work... but, as I know from my own experience, you can be blind to your own shortcomings.

In the Application Lifecycle we should stimulate Developers, Testers and Designers to work together on the challenge of finding bugs AEAP. Join their knowledge and they will deliver better systems in less time!
For instance, let's look at the creation of Unit Test cases. The Unit Test mechanism is great, but if you don't know anything about test coverage it's almost impossible to get the advantages you want.
Let me give an example of how the Developer, the Tester and the Designer can work together...

Case
Let’s say a Developer must create a component that contains the following decisions:
if C1 and (C2 or not C3)
then
        if C5 or C4
        then
                "Result_1"
        else
                "Result_2"
        end if
else
        if C3 and C4
        then
                "Result_2"
        else
                "Result_3"
        end if
end if


We want to create Unit Test cases that will find the bugs AEAP... That's why we must choose the best test coverage: Modified Condition/Decision Coverage (MC/DC).


Modified Condition/Decision Coverage
Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken on all possible outcomes at least once, and each condition has been shown to affect that decision outcome independently. A condition is shown to affect a decision's outcome independently by varying just that condition while holding fixed all other possible conditions. (wikipedia.org)


It is not easy to create the test cases for MC/DC coverage, especially if you want a minimum number of test cases. Here the Tester comes into play. With the help of TMap® test specification techniques, he/she creates the (logical) test cases; for this case the Tester delivers the Developer 7 logical MC/DC test cases.



Depending on the risks, and with the help of equivalence classes and boundary value analysis, the physical test cases are derived (a small sketch of this step follows below).
Now the Developer can start building the test cases into his/her component.
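To make the logical-to-physical step concrete, suppose (my assumption, not part of the case) that condition C1 stands for "order amount >= 100". Boundary value analysis then turns the logical values true/false into physical input values around the boundary:

# Hypothetical: C1 is "order amount >= 100". Boundary value analysis picks
# the values just below, on, and just above the boundary.
BOUNDARY = 100

def c1(order_amount):
    return order_amount >= BOUNDARY

for amount in (BOUNDARY - 1, BOUNDARY, BOUNDARY + 1):
    print(f"amount={amount}: C1={c1(amount)}")  # 99 -> False, 100 and 101 -> True

Each logical test case gets physical values for every condition in it; the risk analysis decides whether you test only the boundaries or also a representative from each equivalence class.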

So, by joining their knowledge, the Developer and the Tester created, for this phase in the lifecycle, the best test coverage to find bugs AEAP. The effect of collaboration can be even greater if the Unit Test cases are reviewed by the Designer before they are implemented in the code.

Reviewing the test cases gives the Designer insight into the assumptions and interpretations of the Developer and the Tester.

Finding Bugs during this review is the ultimate AEAP!!!

Creating these test cases manually is pretty difficult, even for experienced testers. Within Sogeti we use a tool (I made) for this work; generating the test cases for this example took me 5 minutes. The sketch below gives a feel for what such a tool does.
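This is not that tool, but a minimal brute-force sketch of the underlying idea: for each condition in a decision, find a pair of test cases that differ only in that condition while the decision outcome flips. The decision function below is the first decision of the example component.

from itertools import product

def decision(c1, c2, c3):
    # The first decision of the example component: C1 and (C2 or not C3).
    return c1 and (c2 or not c3)

def mcdc_pairs(dec, n_conditions):
    """For each condition, find one pair of inputs that differ only in that
    condition while the decision outcome flips (independent effect)."""
    pairs = {}
    for case in product([False, True], repeat=n_conditions):
        for i in range(n_conditions):
            flipped = case[:i] + (not case[i],) + case[i + 1:]
            if i not in pairs and dec(*case) != dec(*flipped):
                pairs[i] = (case, flipped)
    return pairs

for i, (a, b) in sorted(mcdc_pairs(decision, 3).items()):
    print(f"C{i + 1}: {a} -> {decision(*a)}  vs  {b} -> {decision(*b)}")

The union of these pairs, after removing duplicates, is an MC/DC test set for this decision; repeating the exercise for the two nested decisions and merging overlapping cases is what leads to a minimal set such as the 7 logical test cases mentioned above.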



On-the-fly testing with Camano 
Monday, May 5, 2008, 04:20 PM - ALM, Testing, TMap®, Rosario
Posted by Rob Kuijt
Generalist Testers (manual testers) do like on-the-fly testing! It feels good to be creative and impulsive!! Let's react to the behavior of the system!........... Maybe it's not as efficient and/or effective as structured testing, but it is fun!

... what about "Non Reproducibility"?


Too much fun also brings disadvantages. In complex systems, with many dependencies under the surface, on-the-fly testers aren't able (most of the time) to write reproducible bug reports. And that's a nightmare for the project manager. Non-reproducible bugs are time-consuming and expensive, so "on-the-fly testing" was banned from the life of the Generalist Tester and replaced by structured test methods.

"Reproducibility refers to the ability of a test to be accurately reproduced,
or replicated, by someone else working independently" - Wikipedia



On-the-fly Testing...


Nowadays testers must work in formal structures, partly for efficient testing, but especially for generating reproducible bug reports. Writing an accurate bug report is NOT easy. It takes relatively much time, and even then the reports are often not accurate enough, so developers may call for more information or, even worse, close the bug report with the state "non-reproducible". And believe me... that's not funny at all!


Why do I care?


I care because I want to improve the way testing is implemented in the complete application lifecycle [ALM], and besides that... it's my job! I am process manager of the Managed Testing Services of Sogeti. If I see chances to improve our services, I go for them!
The new test suite from Microsoft (codename Camano) is, in my opinion, a great chance for improvement. Instead of converting Generalist Testers into technically skilled testers, Microsoft has chosen to support the way Generalist Testers like to work: "Manual Testing"!
Camano (part of Rosario) is the code name given to the Microsoft standalone testing suite for Generalist Testers. Camano supports the planning, creation and execution of manual test cases (CTP April 2008: for testing websites). See the blog entry of Randy Bergeron for some of the latest screenshots.


Camano fights non-reproducible bugs


Generalist Testers must still deliver accurate bug reports, but now they can stop the detailed manual logging of their actions, because the bug reporting of Camano is great! Camano can keep track of the complete manual (structured or not) behavior of the Generalist Tester. So if a tester gets a bit enthusiastic and performs more, better or other tests than originally planned, Camano doesn't mind: the Microsoft Test Runner records everything in the background for later use (the recording idea is sketched in code after this list):
  1. Regression testing: the whole script or part of the script.
  2. Export to Visual Studio for the creation of automated scripts (to be performed by technically skilled testers before releasing).
  3. And for bug reporting!! If a tester runs into a bug, the bug report contains not only all the configuration parameters, but also all the steps taken before the bug occurred! Combined with the possibility to capture the window, this support for bug reporting is very strong!!
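To show why background recording kills the non-reproducible bug, here is a generic sketch of the idea. This is my own illustration, not Camano's implementation or API: every manual action is appended to a log, and a bug report is simply the log up to the failure plus the configuration.

import datetime
import platform

class SessionRecorder:
    """Records every manual test action so a bug report can replay them."""

    def __init__(self):
        self.steps = []

    def record(self, action):
        self.steps.append((datetime.datetime.now(), action))

    def bug_report(self, observed, expected):
        config = f"{platform.system()} {platform.release()}"
        lines = [f"Configuration: {config}", "Steps to reproduce:"]
        lines += [f"  {i + 1}. {a}" for i, (_, a) in enumerate(self.steps)]
        lines += [f"Observed: {observed}", f"Expected: {expected}"]
        return "\n".join(lines)

# An on-the-fly session: the tester just acts; the recorder does the logging.
rec = SessionRecorder()
rec.record("Open the course administration, log in as 'teacher1'")
rec.record("Create course 'Testing 101' with capacity 0")
rec.record("Enroll student 'jdoe' in 'Testing 101'")
print(rec.bug_report(observed="Enrollment accepted despite capacity 0",
                     expected="Enrollment refused with a clear message"))

The course administration names are borrowed from the fictitious case mentioned below; the point is that the tester never had to stop testing to write down the steps.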

By combining Camano with the flexibility of our structured test approach TMap®, I can re-introduce on-the-fly testing in our test projects!

Structured "on-the-fly" Testing


Also, with Camano it is possible to have the fun of on-the-fly testing and still deliver reproducible bug reports. Combining Camano with TMap® makes it possible for us to do result-driven test assignments (agreements with the project management concerning time, budget and/or test coverage) and still enjoy testing.
To explain to the test teams the balance between structured and on-the-fly testing, and how to use Camano in the test project, I've written a fictitious case.
The case contains (a description of):
  1. The case specifications: Course Administration application.
  2. Creating the basic structure for test coverage
  3. The choices concerning freedom versus more structure in the Camano steps


Developers are gonna like Camano


I'm sure the developers will like Camano, especially when they find out that the bugs are reported accurately!
Because: fast bug fixing is almost as good as making no bugs at all!




What's In the Box? 
Sunday, April 27, 2008, 05:35 PM - Quality, Testing, TMap®
Posted by Rob Kuijt
Today I read a nice article: "What's In the Box?" by Mary Altman.
This article, about speculative fiction, started with a recognizable explanation of the term "black-box theory":
"The term is typically used to describe a device of which we know or care very little about its inner workings. The entire focus is on its input/output behavior—the result rather than the reasoning."

In the article, sociologist Bruno Latour also gives a description of the "black-box theory":
"The way scientific and technical work is made invisible by its own success. ... Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become."
....
Mary ends her article with the conclusion:
"Most simply: it doesn't matter how it works. It only matters that it works."

...and what about the "black-box theory" in the software industry?


I know that many fellow testers believe this conclusion is also valid in the software industry. So, when they organize an Acceptance Test, they rely completely on the testing target: "It only matters that it works". From the perspective of the end user they are probably right... because the end user is not interested in (and has no knowledge of) the inner workings. So many Acceptance Tests rely on a completely black-box strategy!

I’m not one of them!


I think the test target "It only matters that it works" is too thin. It gives no answers to: "When it works, will it keep on working?", or "How about vendor dependency?", or "Is this software maintainable?", or "Are there any hidden features?", and so on...

I think an Acceptance Test must, besides testing "that it works", find assurance that the inner workings have been validated. I agree that the inner workings should not be tested in the Acceptance Test itself, but still it MATTERS HOW IT WORKS!!!
So how can we find assurance that it has been tested? Is it possible for the software industry to give the Acceptance Test team the assurance that the inner workings are validated and correct?

On these questions I am convinced that the testing community can help the developers. I believe much effort already goes into delivering software of good quality.
But those efforts are hidden and not tuned into an efficient and effective process. Again: I think the testers can help! When testers and developers join together in describing all the quality measures (which they already perform), it will become clear whether enough measures are taken and/or what additional actions are needed. Chapter 7 of TMap® Next (Development Tests) gives a small but effective model for this inventory.
The so-called "Basic Quality model" describes the quality measures from four points of view (a toy sketch of such an inventory follows the list):
  1. Depth of test coverage;
  2. Clarity (description how and when the test coverage is reached);
  3. Provision of proof (what deliverables give proof of the agreed test coverage);
  4. Compliance monitoring (how does the development process perform its internal monitoring).
Together, developers and testers can fill in these four points of view.
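As a toy illustration of what filling in the model could look like (my own sketch; TMap® Next does not prescribe this format), the inventory can be recorded per test level so the Acceptance Test team sees at a glance what has been covered:

# Illustrative inventory for one development test level; all entries are invented.
unit_test_inventory = {
    "test_level": "Unit Test",
    "depth_of_coverage": "MC/DC on all decision logic",
    "clarity": "Coverage is reached when every condition pair is exercised, "
               "measured by the coverage tooling in the nightly build",
    "provision_of_proof": ["coverage report", "reviewed unit test cases"],
    "compliance_monitoring": "Peer review of test cases; the build fails "
                             "below the agreed coverage threshold",
}

def is_glass_box(inventory):
    """The Acceptance Test team gets its glass-box view only when all four
    points of view of the Basic Quality model are filled in."""
    views = ("depth_of_coverage", "clarity", "provision_of_proof",
             "compliance_monitoring")
    return all(inventory.get(v) for v in views)

print(is_glass_box(unit_test_inventory))  # True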

Combined with the quality measures of the requirements and design processes, this gives the Acceptance Test team the opportunity to get a "glass-box" impression of how the inner workings are validated.

The glass-box effect also opens the possibility of implementing learning cycles. Implementing the "Basic Quality model" makes it possible to investigate the origin of defects, learn from it and help each other do better the next time!... I know... that's a little too optimistic...

So let's do it step by step: creating a kind of glass box by giving the Acceptance Testers the possibility to be a virtual witness of the development process would be a great first step!


