Application Test Design Patterns
Common Test Patterns

The usual test-design process is highly repetitive, yet many testing problems share generic elements. You can capture these common elements as patterns and reuse them to simplify the creation of later test designs.
One problem that still limits effectiveness and efficiency in application testing is that a typical development-and-test cycle involves creating many test-design details from scratch, in a somewhat ad-hoc manner, whenever a new set of application features is defined. The primary tool is often an empty test-design template, which lists categories of test issues to address and enumerates test specifications or test-case details based on feature functional specifications.
Viewed across a series of project cycles, test-design activities are one area with a great deal of repetition: test engineers start with a blank page in a test-design specification and fill it in detail by detail, even though many of these details are generic and common to nearly every application-testing scenario.
A goal of the test-design process should be to reduce ad-hoc reinvention of test designs by capturing and reusing common test patterns.
Test patterns are loosely analogous to the design patterns used in software development. (There are also some classic object-oriented design patterns that might be suitable for test-automation design, but we do not address those patterns directly here.) Creating test designs by reusing patterns can cover the repetitive, common core verification of many aspects of an application. This frees the test engineer to focus more time and thought on the truly unique aspects of an application, without wasting effort on defining the more generic portions of the test design.
Just as developers who are writing software strive to minimize the code that they reinvent or create from scratch by using existing functionality for common, generic, or repeated operations, so too should test engineers strive to minimize the amount of time that they spend on defining test cases and verification steps for functionality that is common to every application with which they are presented. Test engineers should be able to reuse the full and rich set of tests that previously have been identified for similar situations.
The test-design processes of expert testers often rely, in part, on experience of things that have gone wrong in the past. However, this past experience is not included in any functional specification for a product, and therefore is not explicitly called out for verification. The experience of an expert tester can be captured and turned into valuable test-design heuristics.
Testing is often organized around failure modes, so enumerating failure modes is a valuable part of test design. Failure to anticipate new, novel, or infrequent failure modes is the prime source of test-escape problems, where bugs are not detected during testing phases but affect customers who use the application. Often, these bugs result from failure modes that could have been anticipated, based on past product-testing experience, but the experience was not captured, shared, or known to a novice tester. An easy way to avoid this problem is to write generic specifications for common failure modes, convert them into test conditions to verify or validate, and append these conditions to every test-design template.
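The idea of capturing failure modes as reusable test conditions can be sketched in a few lines. The failure modes and the field name below are illustrative assumptions, not from the original article:

```python
# Capture common failure modes once, then expand any field into
# generic test conditions. The modes listed here are examples only.
COMMON_FAILURE_MODES = [
    ("empty input", ""),
    ("whitespace only", "   "),
    ("overlong input", "x" * 10_000),
    ("embedded quote", "O'Brien"),
    ("unicode text", "donn\u00e9es"),
]

def generate_test_conditions(field_name):
    """Expand one field into a list of generic test conditions."""
    return [
        f"{field_name}: verify handling of {label!r}"
        for label, _value in COMMON_FAILURE_MODES
    ]

conditions = generate_test_conditions("username")
print(len(conditions))  # one condition per captured failure mode
```

Appending the output of such a generator to a test-design template ensures that past failure experience is applied to every new field, not just the ones an expert happens to review.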
The key part of this analysis process is to create as many separate categories of coverable items as possible. Iterate within each category to build lists of test items that should be covered. Common categories to use as a starting point are user-scenario coverage, specification-requirement coverage, feature coverage, UI control coverage, field coverage, form coverage, output/report coverage, error coverage, code coverage, condition coverage, and event coverage.
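A minimal sketch of this decomposition, using the category list above; the requirement name and checklist item are illustrative assumptions:

```python
# Decompose a requirement into a checklist with one item list per
# coverage category (category names taken from the text).
CATEGORIES = [
    "user-scenario", "specification-requirement", "feature",
    "UI control", "field", "form", "output/report", "error",
    "code", "condition", "event",
]

def coverage_checklist(requirement, categories=CATEGORIES):
    """Return an empty checklist: one item list per category."""
    return {category: [] for category in categories}

checklist = coverage_checklist("login form")
checklist["field"].append("username accepts unicode")  # example item
print(len(checklist))  # 11 categories
```

Filling in every category, even with "not applicable," forces the tester to consider each dimension of coverage rather than only the obvious ones.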
The concept can be extended to many other areas. Identification of a number of test categories aids the analysis and design process that often is applied to requirements review for test identification. Having a formalized set of categories allows the test engineer to decompose requirements into testable form, by dividing the problem into testable fragments or test matrices across a number of dimensions.
A lack of variation can sometimes improve the stability of the test code, but to the detriment of test coverage. The effectiveness of testing, as measured by the detection of new bugs, improves significantly when variation is added, with each test pass using different data values within the same equivalence class. Automation should be designed so that a test-case writer can supply an equivalence class or data type as a parameter, and the automation framework varies the data randomly within that class on each test run. (The automation framework should record the seed values or the actual data used, to allow reruns and retesting with the same data for debugging purposes.)
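A minimal sketch of this design, assuming hypothetical equivalence-class names and generators; the key point is that recording the seed makes any failing run replayable:

```python
import random

# Map each equivalence class to a data generator. The class names
# and value ranges here are illustrative assumptions.
GENERATORS = {
    "positive-int": lambda rng: rng.randint(1, 10**6),
    "short-string": lambda rng: "".join(
        rng.choice("abcdefghij") for _ in range(rng.randint(1, 8))
    ),
}

def draw(equivalence_class, seed):
    """Draw one value from the class; the recorded seed makes the
    run reproducible for debugging."""
    rng = random.Random(seed)
    return GENERATORS[equivalence_class](rng)

# Same seed -> same data, so a failing run can be replayed exactly.
assert draw("positive-int", seed=42) == draw("positive-int", seed=42)
```

The test-case writer names only the class ("positive-int"); the framework supplies fresh values on each pass, so every run probes the class differently while remaining reproducible.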
In some cases, working with dynamically selected input data can be problematic, because determining the expected outcome might be difficult (the oracle problem). However, in many cases there are tests that can be selected and verified automatically, without requiring sophisticated approaches to determine the correct output for a particular input.
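One common way around the oracle problem is to check a property that must hold for any input, rather than predicting the exact output. As a hedged illustration (a round-trip property, using Python's standard `json` module rather than anything from the article):

```python
import json
import random

def check_roundtrip(value):
    """Property: loads(dumps(x)) == x must hold for any JSON-safe
    value, so no per-input expected result is needed."""
    return json.loads(json.dumps(value)) == value

rng = random.Random(1234)  # record the seed to allow reruns
sample = [rng.randint(-100, 100) for _ in range(50)]
print(check_roundtrip(sample))
```

Other properties that sidestep the oracle problem include idempotence (applying an operation twice equals applying it once) and invariants such as "the output list is always sorted."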
Test-automation strategies are too often based solely on the automation of regression tests, putting all of the eggs in that one basket. This is analogous to buying insurance to protect against failures that are due to regression: it is expensive, and it mitigates failures in the areas that the insurance (automation) covers, but it does not reduce or prevent problems in uninsured (unautomated) portions of the application. Wise insurance-purchasing strategies are usually based on selecting the optimum amount, not on spending all available resources on a gold-plated, zero-deductible policy. In testing terms, some teams spend too much time on automation, at the expense of mindshare that could be dedicated to other types of testing, and put their projects at increased risk of detecting only a small portion of the real problems.
The cost/benefit ratio of test automation can be tilted in a positive direction by concentrating on adding stability to development processes, identifying sources of regression failures, and fixing them systematically. When this is pursued with vigor, over-investing in a regression test-automation strategy soon becomes inappropriate, and investing in other, more leveraged forms of test automation becomes much more important. Otherwise, the dominant problem quickly changes from creating tests that find bugs to managing and maintaining myriad redundant, overlapping, obsolete, or incorrect regression test cases.
A regression-focused approach can result in automated tests that are too static. The cost/benefit ratio of these tests is marginal when all of the maintenance and management costs are taken into account; they do not find many bugs. If test systems are instead designed to dynamically challenge the full range of data-dependent failures, alternate invocation modes, and numerous iterations, they can accomplish much more than regression testing.
By raising the abstraction of test automation to "find all strings on the form and test them with all generic tests (apply random data variations until an error occurs or 100,000 conditions are tried)" and running the tests continually, testing is elevated from a marginally valuable static regression suite to a generic tool with a much higher probability of adding value to the project. (This has the added benefit of providing technical growth opportunities for software test engineers, instead of chaining them to a desk to maintain legacy automation.)
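The quoted strategy can be sketched as a generic driver. Everything here is a stand-in: `controls` is a list of control identifiers, and `set_value` is a hypothetical hook into a real UI-automation API (the stub below simply rejects any value containing `<`):

```python
import random

def fuzz_string_controls(controls, set_value, budget=100_000, seed=0):
    """Feed random strings to every control until an error is seen
    or the condition budget is spent; return the failing case."""
    rng = random.Random(seed)  # recorded seed allows exact replay
    tried = 0
    for control in controls:
        while tried < budget:
            data = "".join(
                rng.choice("ab<>'\"%0") for _ in range(rng.randint(0, 20))
            )
            tried += 1
            if not set_value(control, data):  # False signals an error
                return control, data, tried
    return None, None, tried

def fake_set_value(control, data):
    """Stand-in for a real UI driver; rejects any value with '<'."""
    return "<" not in data

control, data, tried = fuzz_string_controls(
    ["name", "email"], fake_set_value, budget=1000
)
```

Because the driver enumerates controls itself, adding a new field to the form automatically brings it under test, with no new test code to write.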
The other benefit of higher-level abstraction and more generic tests is efficiency: adding test cases should be simple and quick, not require hours of custom coding of minutiae. Designing test automation at the level of "test this control()"—instead of 50 lines of "click here, type this, wait for this window"—empowers a broad range of testers to automate tests and to focus on planning meaningful tests, instead of slogging through the implementation details of simplistic scenarios coded against poorly abstracted test frameworks.
Testers should build test designs around reusable test patterns that are common to a large number of application-testing problems. Including these patterns as a standard part of the test-design template reduces the time spent on test design, by eliminating the need to start from a blank specification. Reusing the patterns has the added benefit of codifying expert-tester knowledge, increasing the likelihood of catching common failures.
source: Mark Folkerts, Tim Lamey, and John Evans - http://msdn.microsoft.com/en-us/testing/cc514239.aspx