Application Test Design Patterns
Reuse Test Patterns
Many useful approaches, such as strong data typing, are applied to well-designed tests and can contribute to the reuse of test design.
One of the benefits of reusable test design is that you can take advantage of the convergence of these concepts and apply the results, so that they improve the cost/benefit ratio of test efforts on future projects. All of these techniques have elements to contribute to raising test-design abstraction—improving reuse, reducing reinvention, elevating the sophistication of tests, and extending the reach of Software Test Engineers.
Test-Design Reuse Summary
- Reduce ad-hoc reinvention of test designs by capturing and reusing common test patterns.
- Recognize and abstract test issues, and capture them in a form that can be reapplied as a higher-level chunk of activity, instead of always dropping down to the detailed instance level.
- Treat past test designs essentially as individual instantiations of a generic test-design "class."
- Institutionalize, "package," or "productize" the results and experience of past test-design work into a reusable deliverable that can be applied to similar test efforts.
- Create checklists of generalized "things to test" that will be a resource for reuse on future versions or applications. These generalized tests augment the separate list of feature details (most often, business rules) and associated tests that truly are unique to a specific project.
- Enumerate common and generic failure modes as items to verify, and make them part of test-design templates.
- Enumerate and document explicit test "coverage" categories, so that they can be incorporated into analysis and test design.
- Make each category's set of tests as complete as possible, so that the category is fully covered. Divide the results into generic and unique subsets of tests. Reuse the generic portions of these lists for future efforts.
- Expand test-design specification templates beyond empty document sections to be filled in; include generic validation rules, test-case checklists, common failure modes, and libraries of generic tests.
- Avoid the pesticide paradox by using a test infrastructure that provides for flexible data-oriented testing at the equivalence-class or data-type level, instead of supporting only specific, handcrafted test cases. Varying the test data and the paths of execution will increase the probability of detecting new bugs.
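As a sketch of data-oriented testing at the equivalence-class level, a small generator can draw a fresh representative value from a named class on each run, instead of replaying one handcrafted case. The class names and value ranges here are hypothetical, chosen only for illustration:

```python
import random

# Hypothetical equivalence classes for a "quantity" input field.
# Drawing a fresh value per run varies the test data within the class,
# raising the chance of detecting new bugs (avoiding the pesticide paradox).
EQUIVALENCE_CLASSES = {
    "valid_small": range(1, 10),
    "valid_large": range(10, 1000),
    "boundary_low": [0, 1],
    "invalid_negative": range(-100, 0),
}

def pick_test_value(class_name, rng=random):
    """Return a representative value from the named equivalence class."""
    return rng.choice(list(EQUIVALENCE_CLASSES[class_name]))

value = pick_test_value("valid_small")
assert value in range(1, 10)
```

The test infrastructure then exercises the class name, not a frozen literal, so the same design reaches many concrete cases.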
The following are a few frequently occurring test-design patterns that are suitable for reuse.
Data-Item Lifecycle Verification
- Identify a record or field upon which to operate (based on input name and parameter info).
- Generate randomized item from equivalence classes.
- Verify nonexistence.
- Add item.
- Read and verify existence of identical unchanged data.
- Modify and verify matching modified data.
- Delete and verify removal of item.
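The lifecycle steps above can be sketched against a stand-in, in-memory store; the `RecordStore` class here is a hypothetical stub for the system under test, not a real API:

```python
# Minimal sketch of the add/read/modify/delete verification pattern.
class RecordStore:
    """Hypothetical stand-in for the system under test."""
    def __init__(self):
        self._items = {}
    def exists(self, key): return key in self._items
    def add(self, key, data): self._items[key] = dict(data)
    def read(self, key): return dict(self._items[key])
    def modify(self, key, data): self._items[key].update(data)
    def delete(self, key): del self._items[key]

def run_lifecycle_pattern(store, key, data, change):
    assert not store.exists(key)                   # verify nonexistence
    store.add(key, data)                           # add item
    assert store.read(key) == data                 # read back unchanged data
    store.modify(key, change)                      # modify item
    assert store.read(key) == {**data, **change}   # verify modified data
    store.delete(key)                              # delete item
    assert not store.exists(key)                   # verify removal

run_lifecycle_pattern(RecordStore(), "cust-1", {"name": "Ada"}, {"name": "Grace"})
```

Because the steps are generic, the same driver can be pointed at any record type once `store` is bound to the real system.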
Business Data-Type Equivalence Testing
- Identify an item with type characteristics (for example, a data field) at an abstract level; this should not be limited to simple data types, but should include common business data types (for example, telephone number, address, ZIP code, customer, Social Security number, calendar date, and so on).
- Enumerate the generic business rules that are associated with the type.
- Define equivalence partitions and boundaries for the values for each business rule.
- Select test-case values from each equivalence class.
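A minimal sketch of this pattern for one business type, a US ZIP code; the partitions and the simple five-digit validity rule are illustrative assumptions, not a complete specification of ZIP codes:

```python
# Hypothetical partitions for a "US ZIP code" business type. Captured once,
# the rule and its partitions are reusable in any application with ZIP fields.
ZIP_PARTITIONS = {
    "valid_5_digit": ["00501", "98052", "99950"],
    "too_short": ["9805", ""],
    "too_long": ["980520"],
    "non_numeric": ["98O52", "abcde"],
}

def is_valid_zip(value):
    """Illustrative business rule: exactly five decimal digits."""
    return len(value) == 5 and value.isdigit()

for v in ZIP_PARTITIONS["valid_5_digit"]:
    assert is_valid_zip(v)
for name in ("too_short", "too_long", "non_numeric"):
    for v in ZIP_PARTITIONS[name]:
        assert not is_valid_zip(v)
```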
Valid-Input Generation
- Enumerate and select an input item.
- Select a "valid" equivalence partition.
- Apply a lookup or random generation of a value within that partition, and use it for test.
- Identify a system response to verify.
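These selection steps might look like the following sketch, where the partitions and the `accept_age` system stub are assumptions for illustration:

```python
import random

# Sketch of valid-input generation: select a "valid" partition, draw a
# random value within it, and verify the identified system response.
PARTITIONS = {"valid_age": (18, 65), "invalid_age": (-5, 17)}

def accept_age(age):
    """Hypothetical stand-in for the system under test."""
    return 18 <= age <= 65

low, high = PARTITIONS["valid_age"]
value = random.randint(low, high)   # random generation within the partition
assert accept_age(value)            # identified system response: acceptance
```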
Stress Testing
- Identify a stress axis (for example, number of items, size of items, frequency of request, concurrency of requests, complexity of data structures, and dimensionality of input).
- Identify a starting level of stress.
- Verify and measure system response.
- Increase stress, and repeat cycle until system response fails.
- Switch to another stress axis.
- Increase the level of that axis until failure.
- Additionally, add concurrent stress axes.
- Increase the number of concurrent axes until failure.
Recurring Human-Error Modes
- Enumerate past human-error modes.
- Select a mode that has observed recurrence.
- Identify a scope in which the failure mode might apply, and routinely test for that failure until you are convinced that it is not manifested.
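One way to sketch this, with invented error modes for illustration, is a reusable checklist of checks that is re-run routinely over the relevant scope:

```python
# Sketch: observed human-error modes kept as a named checklist of checks,
# re-run against a scope (here, a page of items) until no longer manifested.
ERROR_MODES = {
    # Hypothetical recurring mistakes seen on past projects.
    "off_by_one_paging": lambda page_items: len(page_items) <= 10,
    "empty_input_crash": lambda page_items: page_items is not None,
}

def regression_check(scope_data):
    """Run every enumerated error-mode check; True means not manifested."""
    return {mode: check(scope_data) for mode, check in ERROR_MODES.items()}

results = regression_check(list(range(10)))
assert all(results.values())
```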
File Failure Modes
- Identify and define files and file semantics to be evaluated.
- Enumerate failure modes for files.
- Identify the system response to verify for each failure mode (create an oracle list).
- Select each failure mode, and apply it in turn to the designated file.
- Verify response.
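The file pattern above can be sketched end to end; the `load_config` loader, the failure modes, and the oracle mapping are all hypothetical stand-ins:

```python
import os
import tempfile

def load_config(path):
    """Hypothetical loader under test; reports how it handled the file."""
    if not os.path.exists(path):
        return "error: missing"
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        return "error: empty"
    return "ok"

def apply_mode(mode, path):
    """Apply one enumerated failure mode to the designated file."""
    if mode == "missing":
        if os.path.exists(path):
            os.remove(path)
    elif mode == "empty":
        open(path, "wb").close()
    elif mode == "normal":
        with open(path, "wb") as f:
            f.write(b"key=value")

# Oracle list: expected system response for each failure mode.
oracle = {"missing": "error: missing", "empty": "error: empty", "normal": "ok"}

path = os.path.join(tempfile.mkdtemp(), "config.txt")
for mode, expected in oracle.items():
    apply_mode(mode, path)              # select and apply each failure mode
    assert load_config(path) == expected  # verify response against the oracle
```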
Dialog and Window Behavior
- Validate modality.
- Validate proper display and behavior when on top or behind.
Testers should build test designs around reusable test patterns that are common to a large number of application-testing problems. Including these patterns as a standard part of the test-design template reduces the time spent on test design by eliminating the need to start from a blank test-design specification. Reuse of these patterns has the added benefit of codifying expert-tester knowledge, increasing the likelihood of catching common failures.
source: Mark Folkerts, Tim Lamey, and John Evans - http://msdn.microsoft.com/en-us/testing/cc514239.aspx