A defined process is the foundation of any test strategy, letting the development team focus on repeatability, stability, speed, and results. While researching the landscape of test design techniques, I came across three very good articles that outline a clearly defined set of test design strategies (from Invensis, Art of Testing, and Test Automation Resources). These articles group techniques along several axes: static analysis versus dynamic (executed) code; manual versus automated testing; and black-box, white-box, and experience-based testing. Below is a breakdown of each of the test design techniques. Some of these have been discussed in my articles on Testing 101 and 200 Level Testing; follow up with the original articles for more details and examples.
Static Test Design Techniques
- Walk through – a step-by-step review of the features and documentation, led by the authors, to help the team better understand the software.
- Informal review – as the name implies, informal discussions to gather feedback without formally stepping through the documentation or code.
- Technical review – more of a peer review of the application.
- Audit – a formal review comparing code to documentation by an external source.
- Inspection – a formal review by trained moderators, documenting defects in code and documentation through a detailed process.
- Management review – a review of the project documents – project plan, budget, metrics, objectives and results, etc.
With Help of Tools
- Analysis of coding standards (using a compiler or linter) – comparing the code against a set of rules, conventions, and standards defined within a tool or document.
- Analysis of code metrics – analysis of measures such as cyclomatic complexity, nesting depth, lines of code, code coverage, etc.
- Analysis of code structure – an analysis of the application by following the flow of data or paths through the code. Also analyzes the structure of the data and the code itself.
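As a concrete illustration of tool-assisted static analysis, here is a minimal sketch that uses Python's ast module to flag functions whose control-flow nesting exceeds a threshold. The threshold and the `check_source` helper are illustrative; real tools such as pylint or SonarQube apply far richer, configurable rule sets.

```python
import ast

# Hypothetical threshold -- real static analysis tools make this configurable.
MAX_NESTING = 3

def max_nesting(node, depth=0):
    """Return the deepest nesting level of control-flow statements under node."""
    deepest = depth
    for child in ast.iter_child_nodes(node):
        extra = 1 if isinstance(child, (ast.If, ast.For, ast.While, ast.Try, ast.With)) else 0
        deepest = max(deepest, max_nesting(child, depth + extra))
    return deepest

def check_source(source):
    """Parse source and flag functions nested deeper than MAX_NESTING."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            depth = max_nesting(node)
            if depth > MAX_NESTING:
                findings.append((node.name, depth))
    return findings

sample = """
def deeply_nested(items):
    for i in items:
        if i:
            for j in i:
                if j:
                    print(j)
"""
print(check_source(sample))  # the loop/if chain exceeds the threshold
```

The same walk-the-tree approach underlies the code metrics and code structure analyses above: the tool inspects the parsed code rather than executing it.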
Dynamic Test Design Techniques
Specification-based or Black-Box techniques
- Boundary Value Analysis – test all field input values at the boundaries – highest, lowest, etc.
- Decision Table Testing – build a table mapping each combination of conditions to the resulting actions, and write a test for each rule (column); the related Classification Tree Method organizes the same input combinations as a tree.
- State Transition Diagrams – model the states of the application and test each state and the transitions between them, particularly workflow steps.
- Equivalence Partitioning – reduce the number of tests by grouping inputs that the application treats the same way (and that return the same results), then testing one representative from each partition.
- Use Case Testing – define scenarios based on business functionality or user functionality.
- Combinatorial Testing – systematic ways of combining parameter values: randomly selected values, all possible combinations, each choice appearing in at least one test, all-pairs (pairwise) or n-wise combinations, etc.
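Equivalence partitioning and boundary value analysis are easiest to see side by side. The sketch below assumes a hypothetical eligibility rule (ages 18–65 inclusive); the function and the specific values are illustrative, not from the original articles.

```python
def is_eligible(age):
    """Hypothetical rule under test: ages 18-65 inclusive are eligible."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition
# (below range, in range, above range) instead of testing every age.
partitions = {10: False, 40: True, 80: False}

# Boundary value analysis: values at and just beyond each boundary,
# where off-by-one defects tend to cluster.
boundaries = {17: False, 18: True, 65: True, 66: False}

for age, expected in {**partitions, **boundaries}.items():
    assert is_eligible(age) == expected, f"age={age}"
print("all partition and boundary checks passed")
```

Seven targeted cases cover what exhaustive testing of every age would: three partitions plus the four values surrounding the two boundaries.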
Structure-based or White-Box techniques
- Statement Coverage or Line Coverage – similar to code metrics, measuring how much of the code has been exercised by tests.
- Condition Coverage or Predicate Coverage – every boolean sub-condition is evaluated to both true and false.
- Decision Coverage or Branch Coverage – every decision outcome (each branch) is taken at least once.
- Multiple Condition Coverage – every combination of true/false values across the sub-conditions of each decision is tested.
Experience-based techniques
- Exploratory Testing – similar to an informal review, this testing is based on a general understanding of the application, product, domain, and company, and on the experience and intuition of the tester.
- Error Guessing or Fault Attack – leveraging prior experience and expertise, guess where the cracks in the application are likely to be, and focus the testing there.
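The coverage criteria above differ in how many tests they demand of the same code. The sketch below uses a hypothetical pricing rule with one compound decision to show that branch coverage needs only two tests, while multiple condition coverage needs all four combinations of the sub-conditions.

```python
def discount(member, total):
    """Hypothetical pricing rule with a compound decision."""
    if member and total > 100:
        return total * 0.9
    return total

# Decision (branch) coverage: each decision outcome taken at least once.
branch_tests = [(True, 150), (False, 50)]  # decision True, decision False

# Multiple condition coverage: every true/false combination of the two
# sub-conditions (member, total > 100).
multi_tests = [(True, 150), (True, 50), (False, 150), (False, 50)]

for member, total in branch_tests + multi_tests:
    result = discount(member, total)
    expected = total * 0.9 if (member and total > 100) else total
    assert result == expected, f"member={member}, total={total}"
print("coverage examples passed")
```

Note that `(False, 150)` and `(True, 50)` add nothing to branch coverage, yet they are exactly the cases that catch a bug such as `or` mistyped for `and`.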
How to Choose the Right Technique
Once you have a general understanding of test design techniques, choosing the right approach is the most critical next step. Here are some of the decision points to pick the right one:
- Application Type – based on requirements for the domain as well as mobile vs. web applications.
- Regulatory standards – rules mandated by industry bodies, countries, government agencies, etc. may dictate which techniques are required.
- Customer’s requirements – based on relationships or contracts with customers.
- Risk Level and Type – This includes business risk, legal risk, compliance risk, brand risk, etc.
- Objectives – Focus on the objectives of your testing.
- Test Expertise – knowledge of the application, availability of documentation, familiarity with the techniques, etc.
- Time and budget – what will provide the biggest value within your schedule and budget?
- SDLC – Waterfall, Agile, Scrum, Kanban, Extreme… each affects which technique will fit.
- Defect History – what kinds of bugs have you already found in this app, in other apps, or across the domain?