Rob Kuijt's Testing Blog
What's In the Box? 
Sunday, April 27, 2008, 05:35 PM - Quality, Testing, TMap®
Posted by Rob Kuijt
Today I read a nice article: "What's In the Box?" by Mary Altman.
This article, about speculative fiction, started with a recognizable explanation of the term "black-box theory":
"The term is typically used to describe a device of which we know or care very little about its inner workings. The entire focus is on its input/output behavior—the result rather than the reasoning."

The article also quotes sociologist Bruno Latour's description of the "black-box theory":
"The way scientific and technical work is made invisible by its own success. ... Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become."
Mary ends her article with the conclusion:
"Most simply: it doesn't matter how it works. It only matters that it works."

...and what about the "black-box theory" in the software industry?

I know that many fellow testers believe this conclusion is also valid in the software industry. So, when they organize an Acceptance Test, they rely completely on the testing target: "It only matters that it works". From the perspective of the end-user they are probably right....because the end-user is not interested in (and has no knowledge of) the inner workings. So many Acceptance Tests rely on a complete black-box strategy!

I’m not one of them!

I think the test target "It only matters that it works" is too thin. It gives no answers to questions like: "When it works, will it keep on working?", "How about vendor dependency?", "Is this software maintainable?", "Are there any hidden features?", and so on...

I think an Acceptance Test must, besides testing "that it works", find assurance that the inner workings have been validated. I agree that the inner workings should not be tested in the Acceptance Test itself, but it still MATTERS HOW IT WORKS!!!
So how can we find assurance that it is tested? Is it possible for the software industry to give the Acceptance Test team the assurance that the inner workings are validated and correct?

On these questions, I am convinced that the testing community can help the developers. I believe much effort already goes into delivering software of good quality.
But those efforts are hidden and not tuned into an efficient and effective process. Again: I think the testers can help! When testers and developers join together to describe all the quality measures they already perform, it becomes clear whether enough measures are taken and/or what additional actions are needed. Chapter 7 of TMap® Next (Development Tests) gives a small but effective model for this inventory.
The so-called "Basic Quality model" describes the quality measures from four points of view:
  1. Depth of test coverage;
  2. Clarity (a description of how and when the test coverage is reached);
  3. Provision of proof (which deliverables give proof of the agreed test coverage);
  4. Compliance monitoring (how the development process performs its internal monitoring).
Together, developers and testers can fill in these four points of view.
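As an illustration only, such an inventory could be recorded as a simple structure that shows at a glance which points of view still lack agreed measures. The four points of view come from TMap® Next; the structure, field names and example measures below are my own sketch, not from the book:

```python
from dataclasses import dataclass, field

@dataclass
class BasicQualityInventory:
    """Hypothetical record of agreed quality measures for one component."""
    component: str
    depth_of_coverage: list = field(default_factory=list)      # agreed test coverage
    clarity: list = field(default_factory=list)                # how/when coverage is reached
    provision_of_proof: list = field(default_factory=list)     # deliverables proving coverage
    compliance_monitoring: list = field(default_factory=list)  # internal monitoring measures

    def missing_points_of_view(self):
        """Return the points of view for which no measure has been listed yet."""
        views = {
            "clarity": self.clarity,
            "depth of test coverage": self.depth_of_coverage,
            "provision of proof": self.provision_of_proof,
            "compliance monitoring": self.compliance_monitoring,
        }
        return [name for name, measures in views.items() if not measures]

# Example: developers and testers have filled in two of the four views.
inventory = BasicQualityInventory(
    component="billing module",
    depth_of_coverage=["statement coverage 80%", "boundary values on interfaces"],
    provision_of_proof=["unit test report", "code review minutes"],
)
print(inventory.missing_points_of_view())
# -> ['clarity', 'compliance monitoring']
```

The point of such a structure is not the code itself, but that gaps become visible and discussable before the Acceptance Test starts.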

Combined with the quality measures of the requirements and design processes, this gives the Acceptance Test team the opportunity to get a "glass-box" impression of how the inner workings are validated.

The glass-box effect also opens the possibility of implementing learning cycles. Implementing the "Basic Quality model" makes it possible to investigate the origin of defects, learn from them and help each other do better the next time!....I know...that's a little too optimistic...

So let's do it step by step: creating a kind of glass box by giving the Acceptance Testers the possibility to be a virtual witness of the development process would be a great first step!


Nobody Wants to Deliver Crap! 
Saturday, April 5, 2008, 04:01 PM - Quality
Posted by Rob Kuijt

In the nearly 30 years I have been working in the ICT world, I've seldom met anyone who deliberately made errors to annoy his contractor, manager or whoever…. Everybody I met made his or her best effort to do the job as well as possible!
Nevertheless, it is obvious that after joining the individual parts, their (and also my) team efforts mostly aren't good enough. Lack of quality is a costly and sometimes even lethal problem.

Some examples:
• A poorly programmed ground-based altitude warning system was partly responsible for the 1997 Korean Air crash in Guam that killed 228 people.

• Faulty software in anti-lock brakes forced the recall of 39,000 trucks and tractors and 6,000 school buses in 2000.

• Software bugs, or errors, are so prevalent and so detrimental that they cost the U.S. economy an estimated $59.5 billion annually, or about 0.6 percent of the gross domestic product, according to a study commissioned by the Department of Commerce's National Institute of Standards and Technology (NIST). (June 28, 2002)

• During a meeting of the Mars Exploration Program Analysis Group in Washington DC, NASA's John McNamee of the Mars Exploration Program addressed the recent failure of the Mars Global Surveyor (MGS) spacecraft. Apparently incorrect software doomed the spacecraft. (January 10, 2007)

Numerous quality experts have tried to solve the problem of bad software quality with a huge number of quality methods, so I won't try to add one of my own.....No, I have an opinion about quality on a much, much smaller scale. Let's look at the following:

I think it is strange that one ICT assignment, given in parallel to, let's say, 20 individuals, results in many different outcomes; the chance that their work has the same outcome is practically NIL!
And another thing that amazes me: if you ask those 20 individuals whether they are certain about the quality of their work, mostly they can't predict the results of the subsequent test or review at all. Most of the time you get answers like: "I hope I did well" or "I think they won't find too many errors".

I hope she will not yell at me

I'm glad my dentist, my general practitioner and my car mechanic don't work the same way!!
I think that if you look at the way individuals generally do their work in the ICT industry, huge improvements can be made by investing in baby steps!
Assuming that nobody wants to deliver crap: how can I help an individual (let's call him Mr. A) to do his work the right way?

I think Mr. A can be helped with 5 Simple Rules in the 3 stages of his ICT activity:
1) The assignment
2) Support and/or coaching on the job
3) Delivery and feedback

1) The Assignment

Simple Rule 1: The Assignment for the individual must be SMARTER.
SMARTER is a mnemonic used in project management at the objective-setting stage. It is a way of evaluating whether the objectives being set are appropriate for the individual project. (Wikipedia)
A SMARTER objective is one that is Specific, Measurable, Achievable, Relevant, Time-bound, Evaluated and Rewarding.

The assignment must give ALL the information that is needed to do the job the Right Way the First Time: the required knowledge and experience, the reference to the instructions for the techniques and tools, the acceptance criteria (a mandatory list of quality checks), and a contact for asking questions (obligatorily documented; for instance by e-mail).
Before Mr. A starts his assignment he MUST know exactly what he must do to complete the job and get his applause. It is very important that Mr. A realizes that only he can judge whether the assignment is SMARTER "enough".
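To make that judgment concrete, one could imagine Mr. A walking through a small checklist before accepting the assignment. The seven criteria come from the SMARTER mnemonic above; the checklist form and names are my own illustration:

```python
# Illustrative SMARTER checklist; Mr. A marks each criterion True only
# when he judges it fulfilled for his specific assignment.
SMARTER_CRITERIA = [
    "specific", "measurable", "achievable", "relevant",
    "time_bound", "evaluated", "rewarding",
]

def smarter_gaps(assignment: dict) -> list:
    """Return the criteria Mr. A judges as not (yet) fulfilled."""
    return [c for c in SMARTER_CRITERIA if not assignment.get(c, False)]

# Example judgment of one assignment:
assignment = {
    "specific": True, "measurable": True, "achievable": False,
    "relevant": True, "time_bound": True, "evaluated": False,
    "rewarding": True,
}
gaps = smarter_gaps(assignment)
print(gaps)  # -> ['achievable', 'evaluated']
if gaps:
    print("Do not start yet; first clarify:", ", ".join(gaps))
```

Only when the gap list is empty does Mr. A start the job; until then, the gaps are the questions he takes back to his contact.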

Simple Rule 2: Prevent Mr. A from doing the best he can! Good is good enough!
An extra note on the "Achievable"-part:

Nobody can build a Rolls Royce for the price of a 2CV

An assignment concerning Functionality, Time, Money and Quality must be in balance! It must be clear at the start what must be done if the balance is disturbed!

2) Support and / or coaching on the job

Simple Rule 3: Fight the biggest enemies of Quality: Assumptions and Interpretations.
To fight the damage of assumptions, Mr. A can be helped in two ways:
First, we must help Mr. A make the correct interpretations, so it is very important to give a proper kick-off. If Mr. A doesn't know how the product he is creating will be used, he can never make the right interpretations.
Second, Mr. A must learn to ask questions when he is not exactly sure about the choices he has to make during his activity.

3) Delivery and feedback

Simple Rule 4: Delivery must be done in a way that forces Mr. A to know the quality of his product.
Delivery can only be allowed if the product complies with the exit criteria. Mr. A must make a statement that he performed the complete set of quality checks (exit criteria).
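Rule 4 can be read as a simple gate: delivery is blocked unless every agreed exit criterion has been explicitly checked off by Mr. A himself. A minimal sketch of that idea (the function and check names are my own, purely illustrative):

```python
# Illustrative delivery gate: the product may only be delivered when
# Mr. A has explicitly marked every agreed exit criterion as done.
def may_deliver(exit_criteria: dict) -> bool:
    """exit_criteria maps each agreed quality check to True/False.
    An empty set of criteria also blocks delivery: no checks, no statement."""
    return len(exit_criteria) > 0 and all(exit_criteria.values())

checks = {
    "coding standard review done": True,
    "unit tests executed and passed": True,
    "test report archived": False,
}
if may_deliver(checks):
    print("Delivery allowed: Mr. A knows the quality of his product.")
else:
    open_checks = [name for name, done in checks.items() if not done]
    print("Delivery blocked; still open:", ", ".join(open_checks))
```

The side effect is exactly what the rule asks for: by working through the list, Mr. A cannot avoid knowing the quality of his own product.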

Simple Rule 5: Feedback must be given in a way Mr. A can learn from his mistakes.
If an error is found at a later stage, feedback must be given in two ways.
First, to create the learning cycle, Mr. A must, after analyzing the problem, explain why the error slipped through his own detection.
Second, of course, the error must be corrected.

I don't know if it works for Mr. A, but I do know the 5 simple rules helped me a lot! Feel free to try them. Suggestion: try them step by step....

I'm curious.... Can 5 Simple Rules affect the ICT Quality problem?
In other words: Can 5 Simple Rules make the ICT world stop running.... to learn baby steps?

In theory....every little bit can help...
In practice.....probably not....but I'll see...

In the meantime... I am satisfied making progress with the Developers and Testers in my own surroundings...

And for the rest of the world.....don't worry.....

you've never time to do IT right...

but always time to do IT over!

Rob Kuijt
