Rob Kuijt's Testing Blog
We can test Verbal Diarrhea! 
Sunday, October 19, 2008, 02:55 PM - ALM, Quality, Testing, Rosario
Posted by Rob Kuijt
We, professional testers, are proud that we can create good test cases from bad requirements... That's a great achievement... We can save projects much time by starting the test process at an early stage in the life cycle... Or not? Whom are we really doing a favor?

Believe it or not, with test specification techniques like 'Data Combination Test' it is possible to create good test cases from incomplete and/or vague requirements. By combining the (bad) requirements with the knowledge of domain specialists in creative team sessions, it is possible to create great test cases, with even the option to cover the risks at different test depth levels.
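To make the idea concrete: below is a minimal sketch (in Python) of the underlying principle of combining data items, not the formal TMap® technique itself. The data items, classes and depth levels are invented for this example.

from itertools import product

# Hypothetical data items and their (equivalence) classes; in practice these come
# from the requirements and from the domain specialists in the team session.
data_items = {
    "customer_type": ["private", "business"],
    "order_size":    ["small", "large"],
    "payment":       ["invoice", "direct_debit", "credit_card"],
}

def full_combinations(items):
    # Deepest test depth: every combination of classes (2 x 2 x 3 = 12 cases here).
    names = list(items)
    return [dict(zip(names, values)) for values in product(*items.values())]

def each_class_once(items):
    # Lighter test depth: every class of every data item appears in at least one case.
    width = max(len(values) for values in items.values())
    return [{name: values[i % len(values)] for name, values in items.items()}
            for i in range(width)]

if __name__ == "__main__":
    print(len(full_combinations(data_items)), "logical test cases at full depth")
    for case in each_class_once(data_items):
        print("reduced depth:", case)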

So, if the requirements team runs out of time (or doesn't understand its own job properly);
No problem;
Send the stuff;
We can start right away with the preparation of the test cases.

It is fun to do, it saves time and we do find many defects with this approach!

GREAT job! Or NOT? ... Is deriving good test cases from bad requirements professionalism?

Let's look at the Project level
Besides us testers, there are more parties trying to do their jobs on the basis of the requirements. For instance, can the development team build the software? ...Yes, they can! Most teams are very experienced in making assumptions and interpretations, so bad requirements are not a problem.
OK, and the project managers? Can they do their job? Yes, they can... not an easy job, and sometimes a project doesn't make it or suffers delays, but what the heck, that's how it works in ICT!

GREAT job! Or NOT? ... What about the customers? Do we solve their problems?

Let's look at the Application Lifecycle Management [ALM] level
Try to answer this question: if the same bad requirements are sent to 10 different projects with the same assignment to build the needed Information System, what is the chance that the projects will deliver an Information System that solves the problems of the customer? ... I think that, depending of course on the amount of communication with the customer during the projects, the chances are not very great; maybe some projects will, but most of them will need more than one release...

And that's NOT a GREAT job!
I know... we testers can't solve this problem... but what if we stop accepting Verbal Diarrhea as a test basis?


What will happen if testers refuse to make test cases from bad requirements?

Or even... what happens if testers, as a first step of the activity 'test case specification' (for instance during the testability review), create formal models like process flows or activity diagrams?
Problems like interpretations and/or assumptions surface instantly, and if these problems in the requirements can be solved before the development team starts building the software, it makes a big difference!!

In this 'First Model then Test' (FMTT) approach, we can use the following set of models (a small, hypothetical sketch of model type 3 follows the list):
  1. Process Flows and/or Activity Diagrams for sequential processes/activities,
  2. CRUD matrices for database manipulations,
  3. Pseudo Code for business rules and validations,
  4. State Transition Diagrams for state dependent processes/activities.
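As a small, hypothetical illustration of model type 3 (pseudo code for business rules): the Python sketch below forces a vague requirement like "large orders from loyal customers get a discount" into explicit logic. Every concrete value is an assumption the requirement never states, and therefore a question to raise during the testability review.

# Hypothetical example; all thresholds below are assumptions, not requirements.
LARGE_ORDER_THRESHOLD = 1000   # assumption: "large" means 1000 euro or more -- is it?
LOYAL_AFTER_YEARS = 2          # assumption: "loyal" means a customer for 2+ years
DISCOUNT_PERCENTAGE = 5.0      # assumption: the requirement never mentions a percentage

def discount(order_amount: float, years_customer: int) -> float:
    # Return the discount percentage for an order (0 if no discount applies).
    if order_amount >= LARGE_ORDER_THRESHOLD and years_customer >= LOYAL_AFTER_YEARS:
        return DISCOUNT_PERCENTAGE
    # Open question for the requirements team: what about large orders from new
    # customers, or small orders from loyal customers? The requirement is silent.
    return 0.0

Writing the rule down this explicitly is exactly what makes the interpretations and assumptions visible before the development team starts building.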

Disadvantages of FMTT:
  • Investment comes before profit: the testability review will take longer,
  • You need to know more about models and modeling,
  • It's different from what we usually do,
  • Performing the tests is less complex, less creative.

Advantages of 'First Model then Test' (FMTT):
  • Finding Bugs As Early As Possible,
  • The development and test process is less dependent on individuals,
  • Fewer interpretations and assumptions are needed during building and testing,
  • Fewer bugs to find (some will consider this a disadvantage...),
  • It's fun to collaborate with the other parties in Application Lifecycle Management [ALM],
  • We testers help developers create higher quality systems.


Afraid to be bored by less complexity?
Try to make the next step: Model Based Development, Testing and/or Estimation!
I've already proved that generating test cases from models is possible (see previous articles on that subject).

Rob Kuijt

[ALM] & Business Continuity 
Sunday, July 27, 2008, 08:38 AM - ALM, Quality, Testing, TMap®, Rosario
Posted by Rob Kuijt
Application Lifecycle Management [ALM] should ensure that an organization experiences an improved "business as usual" in the event of the implementation of new and/or changed functionality. Other (older) industries can give that assurance, so the ICT industry should follow (soon?). This article will, I hope, give some next steps towards maturity, by giving [ALM] directions on how to prevent costly or even lethal disasters caused by bugs.

Business Continuity has two viewpoints for [ALM]:
  • [ALM] is part of the interdisciplinary concept, called Business Continuity Planning (BCP), used to create and validate a practiced logistical plan for how an organization will recover and restore partially or completely interrupted critical function(s) within a predetermined time after a disaster or extended disruption (see wikipedia for more information),
  • [ALM] must give assurance that developing and/or changing applications won't create disasters or extended disruptions! (I won't explain what software bugs can accomplish...).


In this entry I will give some thoughts concerning the second viewpoint:
How to prevent a major disruption caused by the implementation of a new or changed application?

Of course, the Acceptance Test is very important in our "battle" to prevent the worst from happening. But, as I stated earlier in my article "What's in the Box?", the Acceptance Test alone is not enough! Based on the Business Risks (Business Driven Test Management *), the Master Test strategy in [ALM] must contain at least the following three pillars:
  1. Finding Bugs AEAP (As Early As Possible)
  2. Bug Prevention by Testing Requirements and Use Cases
  3. Gathering and Analyzing data to do Business Continuity Predictions

*) Business Driven Test Management gives the client grip on the test process, uses the client's language, delivers the appropriate test coverage in the right spot, and makes test results visible to the client (see TMap.net for more information)

Pillar 1: Finding Bugs AEAP (As Early As Possible)
Finding bugs as early as possible prevents changes in the software in the last phases of a project.
Recently I participated in an (early) evaluation of an organisation where the strategy of finding bugs as early as possible was implemented for the complete software engineering department by choosing the Quality Levels, tuning the Test Coverage of the successive test levels and introducing Learning Cycles. They had great results!
See two earlier articles on this topic:

Problem solved? No, not yet. By diminishing the bugs, it became clear that the number of late change requests and/or wishes for extending the functionality was disturbing the implementation of the applications. So it became obvious that the next pillar was becoming more important...

Pillar 2: Bug Prevention by Testing Requirements and Use Cases.
If software development is based on inaccurate requirements, then despite well-written code, the software will be unsatisfactory. No doubt the users will want to change the application before it is implemented. Changing an application in the last stages of a project will generate huge risks.


Testing Requirements
Testing the requirements in the early stages of the project will minimize the changes to the application just before implementation. To give the testing of requirements a head start, I derived a checklist from the article "An Early Start to Testing: How to Test Requirements" (Suzanne Robertson):
  1. Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?
  2. Does the specification contain a definition of the meaning of every essential subject matter term within the specification?
  3. Is every reference to a defined term consistent with its definition?
  4. Is the context of the requirements wide enough to cover everything we need to understand?
  5. Have we asked the stakeholders about conscious, unconscious and undreamed-of requirements? Can you show that a modeling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed-of requirements?
  6. Is every requirement in the specification relevant to this system?
  7. Does the specification contain solutions posturing as requirements?
  8. Is the stakeholder value defined for each requirement?
  9. Is each requirement uniquely identifiable?
  10. Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?


Testing Use Cases
After the requirements are tested, they evolve into a functional model (for instance Use Cases) of the required application. To test the Use Cases you can use the checklist I derived from the article "Use Cases and Testing" (Lee Copeland):
1) Syntax Testing
  • Complete:
    • Are all use case definition fields filled in? Do we really know what the words mean?
    • Are all of the steps required to implement the use case included?
    • Are all of the ways that things could go right identified and handled properly? Have all combinations been considered?
    • Are all of the ways that things could go wrong identified and handled properly? Have all combinations been considered?
  • Correct:
    • Is the use case name the primary actor's goal expressed as an active verb phrase?
    • Is the use case described at the appropriate black box/white box level?
    • Are the preconditions mandatory? Can they be guaranteed by the system?
    • Does the failed end condition protect the interests of all the stakeholders?
    • Does the success end condition satisfy the interests of all the stakeholders?
    • Does the main success scenario run from the trigger to the delivery of the success end condition?
    • Is the sequence of action steps correct?
    • Is each step stated in the present tense with an active verb as a goal that moves the process forward?
    • Is it clear where and why alternate scenarios depart from the main scenario?
    • Are design decisions (GUI, Database, …) omitted from the use case?
    • Are the use case "generalization," "include," and "extend" relationships used to their fullest extent but used correctly?
  • Consistent:
    • Can the system actually deliver the specified goals?
2) Domain Expert Testing
  • Complete:
    • Are all actors identified? Can you identify a specific person who will play the role of each actor?
    • Is this everything that needs to be developed?
    • Are all external system trigger conditions handled?
    • Have all the words that suggest incompleteness ("some," "etc."…) been removed?
  • Correct:
    • Is this what you really want? Is this all you really want? Is this more than you really want?
  • Consistent:
    • When we build this system according to these use cases, will you be able to determine that we have succeeded?
    • Can the system described actually be built?
3) Traceability Testing
  • Complete:
    • Do the use cases form a story that unfolds from highest to lowest levels?
    • Is there a context-setting, highest-level use case at the outermost design scope for each primary actor?
  • Correct:
    • Are all the system's functional requirements reflected in the use cases?
    • Are all the information sources listed?
  • Consistent:
    • Do the use cases define all the functionality within the scope of the system and nothing outside the scope?
    • Can we trace each use case back to its requirement(s)?
    • Can we trace each use case forward to its class, sequence, and/or state-transition diagrams?


Pillar 3: Gathering Data and Analyzing Trends to do Business Continuity Predictions
It is NOT possible to predict Business Continuity based on the testing process of the project concerned alone. It is important to get a better foundation for the decision to implement a changed or new application. In the Rosario TAP we are doing some research into Gathering Data and Analyzing Trends to do Business Continuity Predictions from three viewpoints:
  1. Fault Detection Trends
  2. Change Control Trends (Changes in Requirements, Specifications, LOC’s,...)
  3. Project Control Trends (Estimations, Budget, Overtime,...)

We’ve just started on this topic, so I will keep you posted in later entries.

Collaboration is a critical success factor in preventing a major disruption caused by the implementation of a new or changed application! All parties within [ALM] have to work together in creating good test coverage from the early phases until the last phases of the projects. I am sure that only when the Quality Levels, Learning Cycles and Metrics are in place can a good Business Continuity risk advice be given to ensure that an organization experiences an improved "business as usual" in the event of the implementation of new and/or changed functionality.

Rob

Finding bugs AEAP (As Early As Possible)
Saturday, June 7, 2008, 07:56 PM - ALM, Quality, Testing, TMap®
Posted by Rob Kuijt
Analyzing and fixing a bug is, without doubt, a loss of time. And the later the bug is found, the greater the losses. Finding bugs AEAP (As Early As Possible) becomes more and more important, especially in the complex systems we make nowadays. So, let's join the knowledge of the Developer, the Tester and the Designer and tackle this challenge!


Of course we can train the individual team members to test their own work... but, as I know from my own experience, you can be blind to your own shortcomings.

In the Application Life Cycle we should stimulate Developers, Testers and Designers to work together in the challenge to Find Bugs AEAP (As Early As Possible). Join their knowledge and they will deliver better systems in less time!
For instance, let's look at the creation of Unit Test cases. The Unit Test mechanism is great, but if you don't know anything about test coverage it's almost impossible to get the advantages you want.
Let me give an example of how the Developer, the Tester and the Designer can work together...

Case
Let’s say a Developer must create a component that contains the following decisions:
if C1 and (C2 or not C3)
then
        if C5 or C4
        then
                "Result_1"
        else
                "Result_2"
        end if
else
        if C3 and C4
        then
                "Result_2"
        else
                "Result_3"
        end if
end if


We want to create Unit Test cases which will Find the Bugs AEAP... That's why we must choose the best test coverage: Modified Condition/Decision Coverage (MC/DC).


Modified Condition/Decision Coverage
Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken on all possible outcomes at least once, and each condition has been shown to affect that decision outcome independently. A condition is shown to affect a decision's outcome independently by varying just that condition while holding fixed all other possible conditions. (wikipedia.org)


It is not easy to create the test cases for MC/DC coverage, especially if you want a minimum number of test cases. Here the Tester comes into play. With the help of TMap® test specification techniques, he/she creates the (logical) test cases. In this case the Tester delivers to the Developer the following 7 (logical) MC/DC test cases.



Depending on the risks, and with the help of Equivalence classes and Boundary value analysis, the physical test cases are derived.
Now, the Developer can start building the test cases into his/her component.
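As a minimal sketch of that last step: the decision logic below mirrors the pseudo code above, and the example rows are illustrative physical cases chosen for this sketch, not the 7 logical MC/DC cases from the Tester's table.

# Sketch: the component's decision logic plus a small table-driven unit test.
def component(c1: bool, c2: bool, c3: bool, c4: bool, c5: bool) -> str:
    if c1 and (c2 or not c3):
        return "Result_1" if (c5 or c4) else "Result_2"
    return "Result_2" if (c3 and c4) else "Result_3"

# Each tuple: (C1, C2, C3, C4, C5, expected result) -- illustrative cases only.
TEST_CASES = [
    (True,  True,  False, False, True,  "Result_1"),   # outer decision True, inner True via C5
    (True,  True,  False, False, False, "Result_2"),   # outer decision True, inner False
    (False, False, True,  True,  False, "Result_2"),   # outer decision False, C3 and C4 True
    (False, False, False, True,  False, "Result_3"),   # outer decision False, C3 and C4 False
]

def test_component():
    for c1, c2, c3, c4, c5, expected in TEST_CASES:
        assert component(c1, c2, c3, c4, c5) == expected

if __name__ == "__main__":
    test_component()
    print("all example cases pass")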

So, by joining their knowledge, the Developer and the Tester have created, for this phase in the life cycle, the best test coverage to Find Bugs AEAP. The effect of collaboration can be even greater if the Unit Test cases are reviewed by the Designer before they are implemented in the code.

Reviewing the test cases gives the Designer insight into the assumptions and interpretations of the Developer and the Tester.

Finding Bugs during this review is the ultimate AEAP!!!

Creating these test cases manually is, even for experienced testers, pretty difficult. Within Sogeti we use a tool (that I made) for this work. (Generating the test cases for this example took me 5 minutes.)



On-the-fly testing with Camano 
Monday, May 5, 2008, 04:20 PM - ALM, Testing, TMap®, Rosario
Posted by Rob Kuijt
Generalist Testers (Manual Testers) do like on-the-fly testing! It feels good to be creative and impulsive!! Let's react to the behavior of the system!... Maybe it's not as efficient and/or effective as structured testing, but it is fun!

... what about "Non Reproducibility"?


Too much fun also brings disadvantages. In complex systems, with many dependencies under the surface, on-the-fly testers are (most of the time) not able to write reproducible bug reports. And that's a nightmare for the project manager. Non-reproducible bugs are time-consuming and expensive, so "on-the-fly testing" is banned from the life of the Generalist Testers and replaced by structured test methods.

"Reproducibility refers to the ability of a test to be accurately reproduced,
or replicated, by someone else working independently" - Wikipedia



On-the-fly Testing...


Nowadays testers must work in formal structures, of course for efficient testing, but especially for generating reproducible bug reports. Writing an accurate bug report is NOT easy. It takes a relatively long time, and even then the reports are often not accurate enough, so developers may call for more information or, even worse, close the bug report with the status "non-reproducible". And believe me... that's not funny at all!


Why do I care?


I care because I want to improve the way testing is implemented in the complete application lifecycle [ALM], and besides that... it's my job! I am process manager of the Managed Testing Services of Sogeti. If I see a chance to improve our services, I go for it!
The new test suite of Microsoft (codename Camano) is in my opinion a great chance for improvement. Instead of converting Generalist Testers into technically skilled testers, Microsoft has chosen to support the way Generalist Testers like to work: "Manual Testing"!
Camano (part of Rosario) is the code name given to the Microsoft standalone testing suite for Generalist Testers. Camano supports the planning, creation and execution of manual test cases (CTP April 2008: for testing websites). See the blog entry of Randy Bergeron for some of the latest screenshots.


Camano fights non-reproducible bugs


Generalist Testers must write accurate bug reports, but now they can stop the detailed manual logging of their actions, because the bug reporting of Camano is great! Camano can keep track of the complete manual (structured or not) behavior of the Generalist Tester. So if a tester is a bit enthusiastic and performs more, better or other tests than originally planned, Camano doesn't mind; the Microsoft Test Runner records everything in the background for later use:
  1. Regression testing: the whole script or part of the script.
  2. Export to Visual Studio for the creation of automated scripts (to be performed by technical skilled testers before releasing).
  3. And for bug reporting!! If a tester runs into a bug, the bug report contains not only all the configuration parameters, it also contains all the steps taken before the bug occurred! Combined with the possibility to capture the window, this bug reporting support is very strong!!

By combining Camano with the flexibility of our structured test approach TMap®, I can re-introduce on-the-fly testing in our test projects!

Structured "on-the-fly" Testing


Also, with Camano it is possible to have the fun of on-the-fly testing and still deliver reproducible bug reports. Combining Camano with TMap® makes it possible for us to do result-driven test assignments (agreements with the project management concerning time, budget and/or test coverage) and still enjoy testing.
To explain to the test teams the balance between structured and on-the-fly testing, and how to use Camano in the test project, I've written a fictitious case.
The case contains (a description of):
  1. The case specifications: Course Administration application.
  2. Creating the basic structure for test coverage
  3. The choices concerning freedom versus more structure in the Camano steps


Developers gonna like Camano


I'm sure the developers will like Camano. Especially if they find out that the bugs are reported accurately!
Because: Fast bug fixing is almost as good as making no bugs at all!




Testing in the Lifecycle [ALM]... a focus on test coverage 
Sunday, April 13, 2008, 10:53 AM - ALM, Testing, TMap®
Posted by Rob Kuijt
When looking at Testing, and more specifically at Test coverage, in the Lifecycle [ALM], you can conclude that much effort is spent on testing as well as possible, but nobody can tell you what Test coverage is achieved in the successive stages of the ALM.

Work must be done in the thinking and communication concerning the quality levels that should be reached! It has proved to be very difficult to choose the thoroughness of testing, and it has proved to be even more difficult to explain the executed Test coverage to the colleagues of the next test levels.

With the appearance of chapter 14, "Test Design Techniques", in TMap® Next, there is now some light on the horizon. In the "old TMap®" the Test coverage was expressed in terms of dynamic and static quality characteristics, coupled with test techniques, which nobody understood. Even the full-time testers had trouble understanding it through and through.

With the introduction of TMap® Next the test coverage is expressed, in a more friendly and intuitive way, in terms like paths, decision points, CRUD (coverage of the basic operations), checklists, and so on...
Now we can explain the chosen Test coverage practically in plain English!

I can give an example of how we introduced this type of Test coverage expression in ALM in a project I did within Sogeti. In the project we planned a series of 5 successive test levels: Unit Tests (UT), Component Integration Tests (CIT), Technical End-to-end Test (EET), Functional Acceptance Test (FAT) and User Acceptance Test (UAT). Instead of designing the tests on an individual basis, we created one overall "tuned" test strategy.



Picture: Clemens Reijnen; from his article:
Testing in the Lifecycle [ALM]... a focus on automation


This overall test strategy was designed in three layers:
  • First: for all the test levels we determined the Basic Quality level, which can be seen as the absolute lowest level of Test coverage (labeled Bronze). Formal escalation is needed to escape from this Basic Quality requirement. And of course the depth of testing is expressed in terms of chapter 14 of TMap® Next.
  • Second: based on the BDTM approach (see chapter 3.1 in TMap® Next), risk classes are determined for each combination of characteristic and object part (characteristic = what must be investigated; object part = what must be tested). The Test coverage above the Basic Quality level is, for all test levels, determined in a so-called Master Test Plan. In my experience it is easy to communicate with stakeholders when the higher Test coverage levels are labeled Silver, Gold and even Platinum. And again, the Silver, Gold and Platinum labels are expressed in chapter 14 terms (a small illustrative sketch follows this list).

  • Third: we introduced the Learning Cycle. Every time a blocking or costly defect occurred, we analyzed it and, if necessary, modified the Test coverage definition of the test level where the defect should have been found.
    Another example of working with the bronze, silver, gold and platinum labels can be found in chapter 7 "Development Tests" of TMap® Next.
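Purely as an illustration of how such a layered strategy can be written down (the labels, risk classes and coverage choices below are invented for this sketch, not taken from the project or from TMap® Next):

# Invented mapping from coverage label to "chapter 14"-style coverage terms.
COVERAGE_LABELS = {
    "Bronze":   {"paths": "happy flow only",    "decision points": "decision coverage"},
    "Silver":   {"paths": "test depth level 2", "decision points": "condition/decision coverage"},
    "Gold":     {"paths": "test depth level 3", "decision points": "modified condition/decision coverage"},
    "Platinum": {"paths": "all paths",          "decision points": "multiple condition coverage"},
}

# Per test level, the Master Test Plan maps risk classes to labels (invented values).
RISK_CLASS_TO_LABEL = {"A": "Gold", "B": "Silver", "C": "Bronze"}

def required_coverage(risk_class: str) -> dict:
    # Fall back to the Basic Quality level (Bronze) when no higher label was agreed.
    return COVERAGE_LABELS[RISK_CLASS_TO_LABEL.get(risk_class, "Bronze")]

print(required_coverage("A"))   # -> the Gold coverage definition

The point is not the tool, but that the agreed depth per risk class is written down in plain terms everybody can read.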


ALM (Application Lifecycle Management) regards the process of delivering software as a continuously repeating cycle of inter-related steps: definition, design, development, testing, deployment and management. Each of these steps needs to be carefully monitored and controlled [Wikipedia].
For more definitions see the article about ALM Definitions in the blog of Clemens Reijnen.

TMap® (Test Management approach) is a registered trademark of Sogeti Nederland BV
