1. ISTQB Certified Tester Foundation Level
This document contains my personal notes for the ISTQB Certified Tester Foundation Level. The notes are based on syllabus version 4.0.1, dated 16 August 2023.
2. Table of Contents
- 1. ISTQB Certified Tester Foundation Level
- 2. Table of Contents
- 3. Personal Notes
- 4. Fundamentals of Testing (1)
- 4.1. What is Testing? (1.1)
- 4.2. Why is Testing Necessary? (1.2)
- 4.3. Testing Principles (1.3)
- 4.4. Test Activities, Testware and Test Roles (1.4)
- 4.5. Essential Skills and Good Practices in Testing (1.5)
- 5. Testing Throughout the Software Development Lifecycle (2)
- 5.1. Testing in the Context of a Software Development Lifecycle (2.1)
- 5.2. Test Levels and Test Types (2.2)
- 5.3. Maintenance Testing (2.3)
- 6. Static Testing (3)
- 6.1. Static Testing Basics (3.1)
- 6.2. Value of Static Testing (3.1.2)
- 6.3. Feedback and Review Process (3.2)
- 7. Test Analysis and Design (4)
- 7.1. Test Techniques Overview (4.1)
- 7.2. Black-Box Test Techniques (4.2)
- 7.3. White-Box Test Techniques (4.3)
- 7.4. Experience-based Test Techniques (4.4)
- 7.5. Collaborative User Story Writing (4.5)
- 8. Managing the Test Activities (5)
- 8.1. Test Planning (5.1)
- 8.2. Risk Management (5.2)
- 8.3. Test Monitoring, Test Control and Test Completion (5.3)
3. Personal Notes
The German Testing Board provides an extensive glossary of testing terms on their website.
When writing documentation, ask yourself
- if the documentation will be read by anyone
- who will read the documentation and what their knowledge level is
- how much detail is necessary
- what the maintenance effort will be.
Best Practices
- Create test cases for all tests that can or should be executed at some point. This gives an overview of the desired coverage vs. the current coverage.
- Use a RACI matrix to define the tasks within the team and which role is responsible for which tasks.
- The Gherkin style is used by Product Owners and then implemented by the developers. It provides an abstraction of the test cases that everyone can understand.
- Requirements engineering is used for the definition of requirements. Templates are available for defining requirements in a structured way. It is crucial to use consistent language and unambiguous terms.
- Functional testing is always performed with black-box tests, although black-box tests can also be used for other test types.
- How can bad news/feedback be conveyed in a good way? Introduce an abstraction level, e.g. "I represent my team/project/company: how should the feedback be formulated to respect this?"
- Verification -> Doing the things right. Validation -> Doing the right things.
- Testing shows the presence of defects, not their absence, and cannot prove the correctness of the software.
Mindmap about Testing
```mermaid
mindmap
  root((Software Engineering))
    Quality Management
      Quality Control - analytical
        Test
          static - defect, verification
            manual review
              formal
                walkthrough
                technical
                inspection
                management - audit
              informal
            tool based static analysis
              control flow anomalies through cyclomatic complexity
          dynamic - failure, validation
            black-box testing - specification based
              equivalence partitioning
              boundary value analysis
              decision table testing
              state transition testing
            white-box testing - structure based
              code coverage
            grey-box testing - experience based
              exploratory testing
              error guessing
              checklist based testing
        e.g. evidence
      Quality Assurance - constructive
        e.g. process model, personnel education
    Project-, Configuration-, Risk-, Release Management
```
Further Reading
- Sophisten - Nuremberg
- The Pragmatic Programmer
4. Fundamentals of Testing (1)
Keywords
Key - EN | Key - DE | Value |
---|---|---|
Coverage | Überdeckungsgrad | The degree, expressed as a percentage, to which certain coverage elements have been determined or executed by a test suite. |
Coverage Element | Überdeckungselement | Property or a combination of properties derived from one or more test conditions using a test method. |
Debugging | Debugging | The process of finding, analyzing and removing the causes of failures in software. |
Defect | Fehlerzustand | An imperfection or deficiency in a work product that does not meet its requirements or specifications. |
Error | Fehlhandlung | A human action that produces an incorrect result. |
Failure | Fehlerwirkung | An event in which a component or system does not perform a required function within specified limits. |
Quality | Qualität | The extent to which a component, system or process fulfils the specified requirements and/or the needs and expectations of users/customers. |
Quality Assurance | Qualitätssicherung | Part of quality management aimed at creating confidence that quality requirements are being met. |
Root Cause | Grundursache | A source of error, the elimination of which reduces or eliminates the occurrence of the error type. |
Test Analysis | Testanalyse | The activity of identifying test conditions by analysing the test basis. |
Test Basis | Testbasis | All information that can be used as a basis for test analysis and test design. |
Test Case | Testfall | A set of preconditions, inputs, actions (if applicable), expected results and postconditions developed on the basis of test conditions. |
Test Completion | Testabschluss | The activity that makes testware available for later use, leaves test environments in a satisfactory state, and communicates the test results to the relevant stakeholders. |
Test Condition | Testbedingungen | An aspect of the test basis that is relevant for achieving certain test objectives. |
Test Control | Teststeuerung | The activity that develops and applies corrective actions to bring a test project back on track when it deviates from the plan. |
Test Data | Testdaten | Data created or selected to fulfil the execution requirements and inputs for the execution of one or more test cases. |
Test Design | Testentwurf | Derivation and specification of test cases from test conditions. |
Test Execution | Testdurchführung | The activity of executing a test for a component or system that produces actual results. |
Test Implementation | Testrealisierung | The activity that prepares the test resources required for test execution on the basis of the test analysis and design. |
Test Monitoring | Testüberwachung | The activity that checks the status of test activities, identifies any deviations from the plan or expectation and reports the status to stakeholders. |
Test Object | Testobjekt | The work product to be tested. |
Test Objective | Testziel | The reason or purpose of testing. |
Test Planning | Testplanung | An activity in the test process for creating and updating the test concept. |
Test Procedure | Testverfahren | A sequence of test cases in execution order and any associated actions that may be required to set up the initial preconditions and any post-execution activities. |
Test Result | Testergebnis | The consequence/result of carrying out a test. |
Testing | Testen | The process comprising all static and dynamic life cycle activities concerned with the planning, preparation and evaluation of a component or system and associated work products to determine whether they fulfil the specified requirements, to demonstrate that they are fit for purpose and to identify defects. |
Testware | Testmittel | The work products created during the testing process and used to plan, design, execute, analyse and report on the tests. |
Validation | Validierung | Confirmation by testing and by providing objective evidence that the requirements for a specific purpose or application are met. |
Verification | Verifikation | Confirmation by examination and by submitting objective evidence that the specified requirements have been met. |
4.1. What is Testing? (1.1)
Software testing is a set of activities to discover defects and evaluate the quality of software artifacts. A common misconception about testing is that testing focuses entirely on verifying the test object. Whilst testing involves verification, i.e., checking whether the system meets specified requirements (Doing the things right), it also involves validation, which means checking whether the system meets users’ and other stakeholders’ needs in its operational environment (Doing the right things).
Testing may be dynamic or static. Dynamic testing involves the execution of software, while static testing does not. Static testing includes reviews and static analysis. Dynamic testing uses different types of test techniques and test approaches to derive test cases.
Testing is not only a technical activity. It also needs to be properly planned, managed, estimated, monitored and controlled. Testers use tools, but it is important to remember that testing is largely an intellectual activity, requiring the testers to have specialized knowledge, use analytical skills and apply critical thinking and systems thinking.
4.1.1. Test Objectives (1.1.1)
The typical test objectives are:
- Evaluating work products such as requirements, user stories, designs, and code
- Triggering failures and finding defects
- Ensuring required coverage of a test object
- Reducing the level of risk of inadequate software quality
- Verifying whether specified requirements have been fulfilled
- Verifying that a test object complies with contractual, legal, and regulatory requirements
- Providing information to stakeholders to allow them to make informed decisions
- Building confidence in the quality of the test object
- Validating whether the test object is complete and works as expected by the stakeholders
Objectives of testing can vary, depending upon the context, which includes the work product being tested, the test level, risks, the software development lifecycle (SDLC) being followed, and factors related to the business context:
- During a component test, the aim may be to find as many faults as possible so that they can be rectified immediately.
- During an acceptance test, the aim may be to check the functionality of the system and ensure that it meets the requirements.
4.1.2. Testing and Debugging (1.1.2)
Testing can trigger failures in the system under test (SUT), while debugging finds, analyzes and fixes the defects that caused those failures.
Confirmation tests (re-tests) are performed after debugging to ensure that the defect has actually been fixed.
4.2. Why is Testing Necessary? (1.2)
Testing is a type of quality control and contributes to the achievement of the agreed objectives (time, scope, quality, budget).
4.2.1. Testing's Contributions to Success (1.2.1)
Testing is a cost-effective means of detecting defects, which can then be corrected, resulting in a higher quality test object. Testing also provides a means of directly assessing the quality of a test object.
Testing
- supports the detection of defects in the software
- uncovers defects in the specifications of the software
- reduces the risk of failures during operation
- contributes to the quality of the software system
- can be contractually defined or correspond to industry-specific requirements
4.2.2. Testing and Quality Assurance QA (1.2.2)
Testing is a form of quality control.
Quality control is a product-oriented, corrective approach that focuses on those activities that support the achievement of an appropriate level of quality.
Quality assurance is a process-oriented, preventive approach that focuses on the implementation and improvement of processes. It assumes that a good process, if carried out correctly, will produce a good product. Quality assurance relates to both the development and testing process and is the responsibility of all project participants.
Test results are used by quality assurance and quality control. In quality control, they are used to correct defects, while in quality assurance they provide feedback on how well the development and testing processes are working.
4.2.3. Errors, Defects, Failures, and Root Causes (1.2.3)
Humans make errors that lead to defects, which in turn can trigger failures.
When the code containing a defect is executed, the system may fail to do what it should do, or do something it should not do, resulting in a failure. Some defects always result in a failure when executed, others only under certain circumstances, and some never cause a failure at all.
Errors and defects are not the only causes of failures. Failures can also be triggered by environmental conditions, e.g. when radiation or electromagnetic fields cause defects in the firmware.
A root cause is a fundamental reason for the occurrence of a problem (e.g. a situation that leads to an error). Root causes are determined by root cause analysis, which is normally performed when a failure occurs or a defect is detected.
4.3. Testing Principles (1.3)
- Testing shows the presence, not the absence of defects. Testing can show that defects are present in the test object, but cannot prove that defects do not exist. Testing reduces the probability that defects will go undetected in the test object, but even if no defects are found, testing cannot prove the correctness of the test object.
- Exhaustive testing is impossible. It is not possible to test everything except in trivial cases. Instead of trying to test completely, test procedures, test case prioritization and risk-based testing should be used to target the testing effort.
- Early testing saves time and money. Defects that are eliminated early in the process do not cause later defects in derived work products. Quality costs are reduced as fewer defects occur later in the software development lifecycle. To find defects early, both static tests and dynamic tests should be started as early as possible.
- Defects cluster together. A small number of components in a system usually contain most of the detected faults or are responsible for most of the operational failures.
- Tests wear out. When the same tests are repeated many times, they become increasingly ineffective at detecting new defects. To overcome this effect, existing tests and test data may need to be modified and new tests written. In some cases, however, repeating the same tests can lead to a positive result (regression testing).
- Testing is context dependent. There is no universally applicable approach to testing. Testing is practiced differently in different contexts.
- Absence-of-defects fallacy. It is a mistake to expect that verifying software ensures the success of a system. Thoroughly testing all specified requirements and fixing any defects found could still result in a system that does not meet user needs and expectations, that does not help achieve the customer's business goals, and that is inferior to other competing systems. In addition to verification, validation should also be performed.
4.4. Test Activities, Testware and Test Roles (1.4)
4.4.1. Test Activities and Tasks (1.4.1)
Test planning consists of defining the test objectives and then selecting an approach that best achieves the objectives within the constraints imposed by the overall context.
Test monitoring involves the ongoing checking of all test activities and the comparison of actual progress against the plan. Test control involves taking the actions necessary to meet the objectives of testing.
Test analysis includes analyzing the test basis to identify testable features and to define and prioritize associated test conditions, together with the related risks and risk levels. The test basis and the test objects are also evaluated to identify defects they may contain and to assess their testability. Test analysis is often supported by the use of test techniques. Test analysis answers the question “what to test?” in terms of measurable coverage criteria.
Test design includes elaborating the test conditions into test cases and other testware (e.g., test charters). This activity often involves the identification of coverage items, which serve as a guide to specify test case inputs. Test techniques can be used to support this activity. Test design also includes defining the test data requirements, designing the test environment and identifying any other required infrastructure and tools. Test design answers the question “how to test?”.
Test implementation includes creating or acquiring the testware necessary for test execution (e.g., test data). Test cases can be organized into test procedures and are often assembled into test suites. Manual and automated test scripts are created. Test procedures are prioritized and arranged within a test execution schedule for efficient test execution. The test environment is built and verified to be set up correctly.
Test execution includes running the tests in accordance with the test execution schedule (test runs). Test execution may be manual or automated. Test execution can take many forms, including continuous testing or pair testing sessions. Actual test results are compared with the expected results. The test results are logged. Anomalies are analyzed to identify their likely causes. This analysis allows us to report the anomalies based on the failures observed.
Test completion activities usually occur at project milestones (e.g., release, end of iteration, test level completion). For any unresolved defects, change requests or product backlog items are created. Any testware that may be useful in the future is identified and archived or handed over to the appropriate teams. The test environment is shut down to an agreed state. The test activities are analyzed to identify lessons learned and improvements for future iterations, releases, or projects. A test completion report is created and communicated to the stakeholders.
4.4.2. Testware (1.4.3)
Testware is created as output work products from the test activities.
The following list of work products is not exhaustive:
- Test planning work products include: test plan, test schedule, risk register, and entry and exit criteria. Risk register is a list of risks together with risk likelihood, risk impact and information about risk mitigation. Test schedule, risk register and entry and exit criteria are often a part of the test plan.
- Test monitoring and control work products include: test progress reports, documentation of control directives and risk information.
- Test analysis work products include: (prioritized) test conditions (e.g., acceptance criteria), and defect reports regarding defects in the test basis (if not fixed directly).
- Test design work products include: (prioritized) test cases, test charters, coverage items, test data requirements and test environment requirements.
- Test implementation work products include: test procedures, automated test scripts, test suites, test data, test execution schedule, and test environment elements. Examples of test environment elements include: stubs, drivers, simulators, and service virtualization.
- Test execution work products include: test logs, and defect reports.
- Test completion work products include: test completion report, action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests (e.g., as product backlog items).
4.4.3. Traceability between the Test Basis and Testware (1.4.4)
In order to implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between the test basis elements, testware associated with these elements (e.g., test conditions, risks, test cases), test results, and detected defects. Accurate traceability supports coverage evaluation, so it is very useful if measurable coverage criteria are defined in the test basis.
- Traceability of test cases to requirements can verify that the requirements are covered by test cases.
- Traceability of test results to risks can be used to evaluate the level of residual risk in a test object.
In addition to evaluating coverage, good traceability makes it possible to determine the impact of changes, facilitate test audits, and helps meet IT governance criteria. Good traceability also makes test progress and completion reports more easily understandable by including the status of test basis elements. This can also assist in communicating the technical aspects of testing to stakeholders in an understandable manner. Traceability provides information to assess product quality, process capability, and project progress against business goals.
Benefits of traceability:
- Analyze the impact of changes
- Making tests auditable
- Improving the comprehensibility of test progress reports
- Communicating the technical aspects of testing to stakeholders so they can better understand them
- Providing information to assess product quality, process capability and project progress
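As a rough illustration of how such traceability can be recorded, the sketch below (plain Python; all requirement and test case IDs are made up) maps test cases to the requirements they cover and derives requirements coverage from that mapping:
```python
# Minimal traceability sketch: map test cases to the requirements they cover
# and derive requirements coverage from it. All IDs below are made-up examples.

requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Traceability: test case -> covered test basis elements (here: requirements)
traceability = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
}

covered = set().union(*traceability.values())
uncovered = requirements - covered

print(f"Requirements coverage: {len(covered)}/{len(requirements)} "
      f"({len(covered) / len(requirements):.0%})")
print("Requirements without test cases:", sorted(uncovered))
```
The same mapping, kept up to date, also supports impact analysis: when a requirement changes, the affected test cases can be looked up directly.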
4.4.4. Roles in Testing (1.4.5)
There are two principal test roles: the test management role (process responsibility) and the testing role.
The test management role assumes overall responsibility for the test process, the test team and the management of test activities. The test management role focuses mainly on the activities of test planning, test monitoring and control as well as test completion.
The role of testing assumes overall responsibility for the operational aspect of testing. The role of testing focuses mainly on the activities of test analysis, test design, test realization and test execution.
4.5. Essential Skills and Good Practices in Testing (1.5)
4.5.1. Generic Skills Required for Testing (1.5.1)
Testers are often the bearers of bad news. It is a common human trait to blame the bearer of bad news. This makes communication skills crucial for testers.
4.5.2. Whole Team Approach (1.5.2)
In the whole-team approach any team member with the necessary knowledge and skills can perform any task, and everyone is responsible for quality.
Testers work closely with other team members to ensure that the desired quality levels are achieved. This includes collaborating with business representatives to help them create suitable acceptance tests and working with developers to agree on the test strategy and decide on test automation approaches. Testers can thus transfer testing knowledge to other team members and influence the development of the product.
4.5.3. Independence of Testing (1.5.3)
A certain degree of independence makes the tester more effective at finding defects due to differences between the author’s and the tester’s cognitive biases.
The main benefit of independence of testing is that independent testers are likely to recognize different kinds of failures and defects compared to developers because of their different backgrounds, technical perspectives, and biases. Moreover, an independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system.
However, there are also some drawbacks. Independent testers may be isolated from the development team, which may lead to a lack of collaboration, communication problems, or an adversarial relationship with the development team. Developers may lose a sense of responsibility for quality. Independent testers may be seen as a bottleneck or be blamed for delays in release.
5. Testing Throughout the Software Development Lifecycle (2)
Keywords
Key - EN | Key - DE | Value |
---|---|---|
Acceptance Testing | Abnahmetests | Formal test related to user needs, requirements and business processes that is performed to determine whether or not a system meets the acceptance criteria and to allow the user, customer or other authorized entity to decide whether or not the system should be accepted. |
Component Integration Testing | Komponentenintegrationstest | Tests to detect errors in the interfaces and interactions between integrated components. |
Component Testing | Komponententest | The testing of individual hardware or software components. |
Confirmation Testing | Fehlernachtest | A type of change-related test that is performed after a defect has been fixed to confirm that the failure caused by that defect does not reoccur. |
Functional Testing | Funktionaler Test | Tests performed to assess the compliance of a component or system with the functional requirements. |
Integration Testing | Integrationstest | Tests that are performed to detect errors in the interfaces and in the interactions between integrated components or systems. |
Maintenance Testing | Wartungstest | Testing the changes to an operational system or the impact of a changed environment on an operational system after the system has been deployed. |
Non-Functional Testing | Nicht-funktionaler Test | Tests performed to evaluate the compliance of a component or system with non-functional requirements. |
Operational Acceptance Testing | Betrieblicher Abnahmetest | A type of acceptance test carried out to determine whether operational and/or system management staff can accept a system. |
Regression Testing | Regressionstest | Testing of a previously tested component or system after a change to ensure that the changes made have not introduced or uncovered defects in unchanged areas of the software. |
System Integration Testing | Systemintegrationstest | Testing the combination and interaction of systems. |
System Testing | Systemtest | Testing an integrated system to check that it meets the specified requirements. |
Test Environment | Testumgebung | An environment that contains hardware, measuring devices, simulators, software tools and other supporting elements required to perform a test. |
Test Level | Teststufe | A specific instantiation of a test process. |
Test Object | Testobjekt | The component or system to be tested. |
Test Type | Testart | A group of test activities based on specific test objectives with the purpose of testing a component or system for specific characteristics. |
User Acceptance Test | Anwenderabnahmetest | Acceptance tests that are carried out in a real or simulated operating environment by the intended users and focus on their needs, requirements and business processes. |
White-Box Testing | White-box-Test | Testing based on an analysis of the internal structure of the component or system. |
5.1. Testing in the Context of a Software Development Lifecycle (2.1)
5.1.1. Software Development Lifecycle and Good Testing Practices (2.1.2)
Good testing practices, independent of the chosen software development lifecycle model, include the following:
- For every software development activity, there is a corresponding test activity, so that all development activities are subject to quality control
- Different test levels (see chapter 2.2.1) have specific and different test objectives, which allows for testing to be appropriately comprehensive while avoiding redundancy
- Test analysis and design for a given test level begins during the corresponding development phase of the software development lifecycle, so that testing can adhere to the principle of early testing
- Testers are involved in reviewing work products as soon as drafts of this documentation are available, so that this earlier testing and defect detection can support the shift-left strategy
5.1.2. Testing as a Driver for Software Development (2.1.3)
TDD, ATDD and BDD are similar development approaches, where tests are defined as a means of directing development. Each of these approaches implements the principle of early testing and follows a shift-left approach, since the tests are defined before the code is written. They support an iterative development model. These approaches are characterized as follows:
Test-Driven Development (TDD):
- Directs the coding through test cases (instead of extensive software design)
- Tests are written first, then the code is written to satisfy the tests, and then the tests and code are refactored
Acceptance Test-Driven Development (ATDD):
- Derives tests from acceptance criteria as part of the system design process
- Tests are written before the part of the application is developed to satisfy the tests
Behavior-Driven Development (BDD):
- Expresses the desired behavior of an application with test cases written in a simple form of natural language, which is easy to understand by stakeholders – usually using the Given/When/Then format.
- Test cases are then automatically translated into executable tests
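As an illustration (a made-up shopping-cart scenario, not taken from the syllabus), a Given/When/Then scenario can be written in plain language and then mapped onto an executable test that exists before the production code is finished, which is the same test-first idea behind TDD and ATDD:
```python
# Scenario (natural language, understandable by all stakeholders):
#   Given an empty shopping cart
#   When the customer adds two items priced 10.00 and 5.50
#   Then the cart total is 15.50

class Cart:
    def __init__(self):
        self.items = []

    def add(self, price: float) -> None:
        self.items.append(price)

    def total(self) -> float:
        return sum(self.items)

def test_cart_total():
    cart = Cart()                 # Given an empty shopping cart
    cart.add(10.00)               # When the customer adds two items
    cart.add(5.50)
    assert cart.total() == 15.50  # Then the cart total is 15.50

if __name__ == "__main__":
    test_cart_total()
    print("scenario passes")
```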
5.1.3. Shift-Left Approach (2.1.5)
The principle of early testing is sometimes referred to as shift-left, as it is an approach where testing is performed earlier in the software development lifecycle. Shift-left usually means that testing should start earlier, e.g. not waiting until the code is implemented or the components are integrated.
5.2. Test Levels and Test Types (2.2)
Test levels are groups of test activities that are organized and managed together. Each test level is an instance of the test process, performed in relation to software at a given stage of development, from individual components to complete systems or, where applicable, systems of systems.
Test types are groups of test activities related to specific quality characteristics and most of those test activities can be performed at every test level.
5.2.1. Test Levels (2.2.1)
- Component testing (also known as unit testing) focuses on testing components in isolation. It often requires specific support, such as test harnesses or unit test frameworks. Component testing is normally performed by developers in their development environments.
- Component integration testing (also known as unit integration testing) focuses on testing the interfaces and interactions between components. Component integration testing is heavily dependent on the chosen integration strategy, e.g. bottom-up, top-down or big-bang.
- System testing focuses on the overall behavior and capabilities of an entire system or product, often including functional testing of end-to-end tasks and the non-functional testing of quality characteristics. For some non-functional quality characteristics, it is preferable to test them on a complete system in a representative test environment (e.g., usability). Using simulations of sub-systems is also possible. System testing may be performed by an independent test team, and is related to specifications for the system.
- System integration testing focuses on testing the interfaces of the system under test and other systems and external services. System integration testing requires suitable test environments preferably similar to the operational environment.
- Acceptance testing focuses on validation and on demonstrating readiness for deployment, which means that the system fulfills the user’s business needs. Ideally, acceptance testing should be performed by the intended users. The main forms of acceptance testing are: user acceptance testing (UAT), operational acceptance testing, contractual and regulatory acceptance testing, alpha testing and beta testing.
5.2.2. Test Types (2.2.2)
Functional testing evaluates the functions that a component or system should perform. The functions are “what” the test object should do. The main objective of functional testing is checking the functional completeness, functional correctness and functional appropriateness.
Non-functional testing evaluates attributes other than functional characteristics of a component or system. Non-functional testing is the testing of “how well the system behaves”. The main objective of non-functional testing is checking the non-functional software quality characteristics.
The ISO/IEC 25010 standard provides the following classification of the non-functional software quality characteristics:
- Performance efficiency
- Compatibility
- Usability
- Reliability
- Security
- Maintainability
- Portability
Black-box testing is specification-based and derives tests from documentation external to the test object. The main objective of black-box testing is checking the system's behavior against its specifications.
White-box testing is structure-based and derives tests from the system's implementation or internal structure (e.g., code, architecture, work flows, and data flows). The main objective of white-box testing is to cover the underlying structure by the tests to the acceptable level.
All the four above-mentioned test types can be applied to all test levels, although the focus will be different at each level. Different test techniques can be used to derive test conditions and test cases for all the mentioned test types.
5.2.3. Confirmation Testing and Regression Testing (2.2.3)
Confirmation testing confirms that an original defect has been successfully fixed. Depending on the risk, one can test the fixed version of the software in several ways, including:
- executing all test cases that previously failed due to the defect
- adding new tests to cover any changes that were needed to fix the defect
Regression testing confirms that no adverse consequences have been caused by a change, including a fix that has already been confirmation tested. These adverse consequences could affect the same component where the change was made, other components in the same system, or even other connected systems. Regression testing may not be restricted to the test object itself but can also be related to the environment. It is advisable first to perform an impact analysis to optimize the extent of the regression testing. Impact analysis shows which parts of the software could be affected.
Confirmation testing and/or regression testing for the test object are needed on all test levels if defects are fixed and/or changes are made on these test levels.
5.3. Maintenance Testing (2.3)
There are different categories of maintenance: it can be corrective, adaptive to changes in the environment, or aimed at improving performance or maintainability. Maintenance can therefore involve planned releases/deployments as well as unplanned releases/deployments (hot fixes).
Testing the changes to a system in production includes both evaluating the success of the implementation of the change and checking for possible regressions in parts of the system that remain unchanged (which is usually most of the system).
The scope of maintenance testing typically depends on:
- The degree of risk of the change
- The size of the existing system
- The size of the change
The triggers for maintenance and maintenance testing can be classified as follows:
- Modifications, such as planned enhancements (i.e., release-based), corrective changes or hot fixes.
- Upgrades or migrations of the operational environment, such as from one platform to another, which can require tests associated with the new environment as well as of the changed software, or tests of data conversion when data from another application is migrated into the system being maintained.
- Retirement, such as when an application reaches the end of its life. When a system is retired, this can require testing of data archiving if long data-retention periods are required. Testing of restore and retrieval procedures after archiving may also be needed in the event that certain data is required during the archiving period.
6. Static Testing (3)
Keywords
Key - EN | Key - DE | Value |
---|---|---|
Anomaly | Anomalie | Discrepancy caused by deviations from (justified) expectations of the software product. The expectations can be based on a requirements specification, design specifications, user documentation, standards, certain ideas or other experiences. Anomalies can also, but not only, be uncovered through reviews, testing, analyses, compilation or the use of the software product or its documentation. |
Dynamic Testing | Dynamischer Test | Testing, which includes the execution of the test element. |
Formal Review | Formales Review | A review that follows a defined process and delivers a formally documented result. |
Informal Review | Informelles Review | A review that does not follow a defined process and does not provide a formally documented result. |
Inspection | Inspektion | A formal type of review that aims to identify findings in a work product and provides measurements to improve the review process and the software development process. |
Review | Review | A type of static test in which a work result or process is evaluated by one or more persons in order to recognise error conditions or achieve improvements. |
Static Analysis | Statische Analyse | The process of evaluating a test object (component or system) based on its form, structure, content or documentation without executing it. |
Static Testing | Statischer Test | Testing that does not involve the execution of the test object. |
Technical Review | Technisches Review | A formal review by technical experts who examine the quality of a work product and identify deviations from specifications and standards. |
Walkthrough | Walkthrough | A type of review in which an author guides the review participants through a deliverable and the participants ask questions and comment on potential findings. |
6.1. Static Testing Basics (3.1)
In contrast to dynamic testing, in static testing the software under test does not need to be executed. Code, process specification, system architecture specification or other work products are evaluated through manual examination (e.g., reviews) or with the help of a tool (e.g., static analysis). Test objectives include improving quality, detecting defects and assessing characteristics like readability, completeness, correctness, testability and consistency.
6.2. Value of Static Testing (3.1.2)
Static testing provides the ability to evaluate the quality of, and to build confidence in work products. By verifying the documented requirements, the stakeholders can also make sure that these requirements describe their actual needs. Since static testing can be performed early in the software development lifecycle, a shared understanding can be created among the involved stakeholders. Communication will also be improved between the involved stakeholders. For this reason, it is recommended to involve a wide variety of stakeholders in static testing.
6.2.1. Differences between Static Testing and Dynamic Testing (3.1.3)
Static testing and dynamic testing practices complement each other. They have similar objectives, such as supporting the detection of defects in work products (see section 1.1.1), but there are also some differences, such as:
- Static and dynamic testing (with analysis of failures) can both lead to the detection of defects, however there are some defect types that can only be found by either static or dynamic testing.
- Static testing finds defects directly, while dynamic testing causes failures from which the associated defects are determined through subsequent analysis
- Static testing may more easily detect defects that lie on paths through the code that are rarely executed or hard to reach using dynamic testing
- Static testing can be applied to non-executable work products, while dynamic testing can only be applied to executable work products
- Static testing can be used to measure quality characteristics that are not dependent on executing code (e.g., maintainability), while dynamic testing can be used to measure quality characteristics that are dependent on executing code (e.g., performance efficiency)
6.3. Feedback and Review Process (3.2)
Early and frequent feedback allows for the early communication of potential quality problems. If there is little stakeholder involvement during the software development lifecycle, the product being developed might not meet the stakeholder’s original or current vision. A failure to deliver what the stakeholder wants can result in costly rework, missed deadlines, blame games, and might even lead to complete project failure.
Frequent stakeholder feedback throughout the software development lifecycle can prevent misunderstandings about requirements and ensure that changes to requirements are understood and implemented earlier. This helps the development team to improve their understanding of what they are building. It allows them to focus on those features that deliver the most value to the stakeholders and that have the most positive impact on identified risks.
6.3.1. Review Types (3.2.3)
Many review types exist. Some commonly used review types are:
- Informal review. Informal reviews do not follow a defined process and do not require a formal documented output. The main objective is detecting anomalies.
- Walkthrough. A walkthrough, which is led by the author, can serve many objectives, such as evaluating quality and building confidence in the work product, educating reviewers, gaining consensus, generating new ideas, motivating and enabling authors to improve and detecting anomalies. Reviewers might perform an individual review before the walkthrough, but this is not required.
- Technical Review. A technical review is performed by technically qualified reviewers and led by a moderator. The objectives of a technical review are to gain consensus and make decisions regarding a technical problem, but also to detect anomalies, evaluate quality and build confidence in the work product, generate new ideas, and to motivate and enable authors to improve.
- Inspection. As inspections are the most formal type of review, they follow the complete generic process (see section 3.2.2). The main objective is to find the maximum number of anomalies. Other objectives are to evaluate quality, build confidence in the work product, and to motivate and enable authors to improve. Metrics are collected and used to improve the software development lifecycle, including the inspection process. In inspections, the author cannot act as the review leader or scribe.
7. Test Analysis and Design (4)
Keywords
Key - EN | Key - DE | Value |
---|---|---|
Acceptance Criteria | Abnahmekriterien | The criteria that a component or system must meet in order to be accepted by a user, customer or other authorised body. |
Acceptance Test-Driven Development | Abnahmetestgetriebene Entwicklung | A collaborative development approach in which the team and the customer use the customer's language to understand their requirements as the basis for testing a component or system. |
Black-Box Test Technique | Black-Box-Testverfahren | A test procedure based on analysing the specification of a component or system. |
Boundary Value Analysis | Grenzwertanalyse | A black box test procedure in which test cases are designed on the basis of limit values. |
Branch Coverage | Zweigüberdeckung | The degree, expressed as a percentage, to which the branches in the program (e.g., the outcomes of if statements and loops) have been executed by a test suite. |
Checklist-Based Testing | Checklistenbasierter Test | Checklist-based testing is a structured software testing approach that uses a predefined list of tasks or steps to guide the testing process. This technique is often employed for manual testing, but it can also be incorporated into automated testing frameworks. |
Collaboration-Based Test Approach | Auf Zusammenarbeit basierender Testansatz | Collaborative testing is a software testing approach in which stakeholders from many departments, such as development and customer support, participate in the testing process at all phases of product development. |
Coverage | Überdeckung | The degree, expressed as a percentage, to which certain coverage elements were utilised by a test suite. |
Coverage Item | Überdeckungselement | A property or combination of properties derived from one or more test conditions using a test procedure. |
Decision Table Testing | Entscheidungstabellentest | A black-box test procedure in which test cases are designed with regard to the execution of combinations of conditions and the resulting actions of a decision table. |
Equivalence Partitioning | Äquivalenzklassenbildung | A black-box test procedure in which the test cases are designed with regard to the execution of equivalence classes, whereby one representative of each equivalence class is used. |
Error Guessing | Intuitive Testfallermittlung | A test procedure in which tests are derived on the basis of the tester's knowledge of previous defect effects or on the basis of general knowledge of defect effects. |
Experience-Based Test Technique | Erfahrungsbasierte Testverfahren | Testing based on the tester's experience, knowledge and intuition. |
Exploratory Testing | Explorativer Test | A testing approach in which testers dynamically design and execute tests based on their knowledge, exploration of the test object and the results of previous tests. |
State Transition Testing | Zustandsübergangstests | A black box test procedure in which test cases are designed to execute elements of a state transition model. |
Statement Coverage | Anweisungsüberdeckung | The coverage of executable statements. |
Test Technique | Testverfahren | A procedure for defining test conditions, designing test cases and specifying test data. |
White-Box Test Technique | White-Box-Testverfahren | A test procedure based solely on the internal structure of a component or system. |
7.1. Test Techniques Overview (4.1)
Test techniques support the tester in test analysis (what to test) and in test design (how to test). Test techniques help to develop a relatively small, but sufficient, set of test cases in a systematic way. Test techniques also help the tester to define test conditions, identify coverage items, and identify test data during the test analysis and design.
Black-box test techniques (also known as specification-based techniques) are based on an analysis of the specified behavior of the test object without reference to its internal structure. Therefore, the test cases are independent of how the software is implemented. Consequently, if the implementation changes, but the required behavior stays the same, then the test cases are still useful.
White-box test techniques (also known as structure-based techniques) are based on an analysis of the test object’s internal structure and processing. As the test cases are dependent on how the software is designed, they can only be created after the design or implementation of the test object.
Experience-based test techniques effectively use the knowledge and experience of testers for the design and implementation of test cases. The effectiveness of these techniques depends heavily on the tester’s skills. Experience-based test techniques can detect defects that may be missed using the black-box and white-box test techniques. Hence, experience-based test techniques are complementary to the black-box and white-box test techniques.
7.2. Black-Box Test Techniques (4.2)
Commonly used black-box test techniques discussed in the following sections are:
- Equivalence Partitioning
- Boundary Value Analysis
- Decision Table Testing
- State Transition Testing
7.2.1. Equivalence Partitioning (4.2.1)
Equivalence Partitioning (EP) divides data into partitions based on the expectation that all elements within a partition are processed in the same way by the test object. Goal: test only one value per partition, since if a value in a partition detects a defect, any other value in that partition is expected to reveal the same defect.
Partitions:
- Valid Partitions: Contain values that should be processed by the test object.
- Invalid Partitions: Contain values that should be ignored or rejected.
Partition Characteristics:
- Can be identified for any data element (inputs, outputs, configurations, internal values, time values, interface parameters).
- May be continuous or discrete, ordered or unordered, finite or infinite.
- Must be non-overlapping and non-empty.
Practical Considerations
- Complexity: Understanding how the test object treats different values can be challenging; careful partitioning is crucial.
- Definitions of Valid/Invalid: Can vary between teams and organizations.
Coverage
- Coverage Items: Equivalence partitions.
- 100% Coverage: Achieved by testing each identified partition at least once.
- Coverage Calculation: Number of partitions tested / Total number of partitions × 100%.
- Multiple Sets of Partitions: Test objects with more than one input parameter may require testing across multiple partition sets.
Coverage Criteria
- Each Choice Coverage: Ensures each partition from each set is exercised at least once but does not consider combinations of partitions.
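A minimal sketch of equivalence partitioning (my own example: an age field that accepts values from 18 to 65; the boundaries and representative values are invented for illustration):
```python
# Hypothetical requirement: the input "age" is accepted if 18 <= age <= 65.
def accept_age(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitions for the input domain (non-overlapping, non-empty):
#   invalid: age < 18, valid: 18..65, invalid: age > 65
partitions = {
    "invalid_low":  (10, False),   # one representative value per partition
    "valid":        (30, True),
    "invalid_high": (80, False),
}

tested = 0
for name, (value, expected) in partitions.items():
    assert accept_age(value) is expected, f"partition {name} failed"
    tested += 1

# EP coverage: partitions exercised / total partitions
print(f"EP coverage: {tested}/{len(partitions)} = {tested / len(partitions):.0%}")
```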
7.2.2. Boundary Value Analysis (4.2.2)
Boundary Value Analysis (BVA) is a technique based on exercising the boundaries of equivalence partitions. Therefore, BVA can only be used for ordered partitions. The minimum and maximum values of a partition are its boundary values. In the case of BVA, if two elements belong to the same partition, all elements between them must also belong to that partition.
This syllabus covers two versions of the BVA: 2-value and 3-value BVA. They differ in terms of coverage items per boundary that need to be exercised to achieve 100% coverage.
In 2-value BVA, for each boundary value there are two coverage items: this boundary value and its closest neighbor belonging to the adjacent partition. To achieve 100% coverage with 2-value BVA, test cases must exercise all coverage items, i.e., all identified boundary values. Coverage is measured as the number of boundary values that were exercised, divided by the total number of identified boundary values, and is expressed as a percentage.
In 3-value BVA, for each boundary value there are three coverage items: this boundary value and both its neighbors. Therefore, in 3-value BVA some of the coverage items may not be boundary values. To achieve 100% coverage with 3-value BVA, test cases must exercise all coverage items, i.e., identified boundary values and their neighbors. Coverage is measured as the number of boundary values and their neighbors exercised, divided by the total number of identified boundary values and their neighbors, and is expressed as a percentage.
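Continuing the hypothetical 18-65 age partition from the equivalence partitioning sketch above, the coverage items for the two BVA variants would look like this:
```python
# Ordered valid partition: 18..65 (hypothetical example).
# The boundary values of the partition are its minimum and maximum.
lower, upper = 18, 65

# 2-value BVA: each boundary value plus its closest neighbour in the adjacent partition.
two_value_items = {lower - 1, lower, upper, upper + 1}            # {17, 18, 65, 66}

# 3-value BVA: each boundary value plus both of its neighbours.
three_value_items = {lower - 1, lower, lower + 1,
                     upper - 1, upper, upper + 1}                 # {17, 18, 19, 64, 65, 66}

def accept_age(age: int) -> bool:
    return 18 <= age <= 65

# Exercising all 3-value BVA items would, for instance, catch the common
# off-by-one defect of implementing the check as "18 < age <= 65".
for age in sorted(three_value_items):
    print(age, accept_age(age))
```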
7.2.3. Decision Table Testing (4.2.3)
Decision tables are used for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes. Decision tables are an effective way of recording complex logic.
The strength of decision table testing is that it provides a systematic approach to identify all the combinations of conditions, some of which might otherwise be overlooked. It also helps to find any gaps or contradictions in the requirements. If there are many conditions, exercising all the decision rules may be time-consuming, since the number of rules grows exponentially with the number of conditions. In such a case, to reduce the number of rules that need to be exercised, a minimized decision table or a risk-based approach may be used.
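A small sketch (a made-up discount rule, not from the syllabus) of a full decision table with two Boolean conditions and the resulting 2^2 = 4 rules, each exercised by one test:
```python
# Hypothetical rule: a discount is granted only to members whose order total
# is at least 100. Two Boolean conditions -> 2**2 = 4 decision rules.
#
#   Rule            R1     R2     R3     R4
#   member          T      T      F      F
#   total >= 100    T      F      T      F
#   discount        yes    no     no     no

def discount_granted(member: bool, total: float) -> bool:
    return member and total >= 100

decision_rules = [
    # (member, total >= 100) -> expected discount
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

for (member, big_order), expected in decision_rules:
    total = 150 if big_order else 50          # pick a concrete value per condition
    assert discount_granted(member, total) is expected

print(f"decision table coverage: {len(decision_rules)}/4 rules exercised")
```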
7.2.4. State Transition Testing (4.2.4)
A state transition diagram models the behavior of a system by showing its possible states and valid state transitions. A transition is initiated by an event, which may be additionally qualified by a guard condition. The transitions are assumed to be instantaneous and may sometimes result in the software taking action. The common transition labeling syntax is as follows: “event [guard condition] / action”. Guard conditions and actions can be omitted if they do not exist or are irrelevant for the tester.
There exist many coverage criteria for state transition testing. This syllabus discusses three of them.
In all states coverage, the coverage items are the states. To achieve 100% all states coverage, test cases must ensure that all the states are visited. Coverage is measured as the number of visited states divided by the total number of states, and is expressed as a percentage.
In valid transitions coverage (also called 0-switch coverage), the coverage items are single valid transitions. To achieve 100% valid transitions coverage, test cases must exercise all the valid transitions. Coverage is measured as the number of exercised valid transitions divided by the total number of valid transitions, and is expressed as a percentage.
In all transitions coverage, the coverage items are all the transitions shown in a state table. To achieve 100% all transitions coverage, test cases must exercise all the valid transitions and attempt to execute invalid transitions. Testing only one invalid transition in a single test case helps to avoid fault masking, i.e., a situation in which one defect prevents the detection of another. Coverage is measured as the number of valid and invalid transitions exercised or attempted to be covered by executed test cases, divided by the total number of valid and invalid transitions, and is expressed as a percentage.
All states coverage is weaker than valid transitions coverage, because it can typically be achieved without exercising all the transitions. Valid transitions coverage is the most widely used coverage criterion. Achieving full valid transitions coverage guarantees full all states coverage. Achieving full all transitions coverage guarantees both full all states coverage and full valid transitions coverage and should be a minimum requirement for mission and safety-critical software.
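A minimal sketch of a state model and valid transitions (0-switch) coverage, using a hypothetical document workflow (the states and events are invented for illustration):
```python
# Hypothetical state model of a document workflow: draft -> review -> published,
# with a rejection path back to draft.
# Valid transitions: (current state, event) -> next state
transitions = {
    ("draft", "submit"):   "review",
    ("review", "approve"): "published",
    ("review", "reject"):  "draft",
}

def execute(state: str, event: str) -> str:
    """Return the next state; an invalid transition raises KeyError."""
    return transitions[(state, event)]

# Two test sequences that together exercise all valid transitions (0-switch coverage):
test_sequences = [["submit", "approve"], ["submit", "reject"]]
exercised = set()
for events in test_sequences:
    state = "draft"
    for event in events:
        exercised.add((state, event))
        state = execute(state, event)

print(f"valid transitions coverage: {len(exercised)}/{len(transitions)} "
      f"= {len(exercised) / len(transitions):.0%}")
```
These two sequences also achieve all states coverage as a side effect; all transitions coverage would additionally require attempting invalid transitions such as approving a draft.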
7.3. White-Box Test Techniques (4.3)
Because of their popularity and simplicity, this section focuses on two code-related white-box test techniques:
- Statement testing
- Branch testing
7.3.1. Statement Testing and Statement Coverage (4.3.1)
In statement testing, the coverage items are executable statements. The aim is to design test cases that exercise statements in the code until an acceptable level of coverage is achieved.
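A tiny sketch of statement coverage (hypothetical code; the statements are counted by hand rather than by a coverage tool):
```python
# Hypothetical function; its executable statements are marked (1)-(4).
def grade(score: int) -> str:
    result = "fail"          # (1)
    if score >= 50:          # (2)
        result = "pass"      # (3)
    return result            # (4)

# A single test with score=60 executes all four statements:
# statement coverage = 4/4 = 100%, even though the score < 50 behaviour
# (the path that skips statement (3)) is never tested.
assert grade(60) == "pass"
print("statement coverage with this single test: 4/4 = 100%")
```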
7.3.2. Branch Testing and Branch Coverage (4.3.2)
A branch is a transfer of control between two nodes in the control flow graph, which shows the possible sequences in which source code statements are executed in the test object. Each transfer of control can be either unconditional (i.e., straight-line code) or conditional (i.e., a decision outcome). In branch testing the coverage items are branches and the aim is to design test cases to exercise branches in the code until an acceptable level of coverage is achieved. Coverage is measured as the number of branches exercised by the test cases divided by the total number of branches, and is expressed as a percentage.
Branch coverage subsumes statement coverage.
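Reusing the hypothetical grade() function from the statement coverage sketch above: the single test with score=60 reaches 100% statement coverage but only 50% branch coverage (counting the two outcomes of the decision), which illustrates why branch coverage subsumes statement coverage but not vice versa:
```python
def grade(score: int) -> str:
    result = "fail"
    if score >= 50:          # decision with two branches: True and False
        result = "pass"
    return result

# Test 1 exercises only the True branch -> branch coverage 1/2 = 50%,
# although statement coverage is already 100%.
assert grade(60) == "pass"

# Test 2 exercises the False branch as well -> branch coverage 2/2 = 100%.
assert grade(40) == "fail"
print("branch coverage with both tests: 2/2 = 100%")
```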
7.3.3. The Value of White-box Testing (4.3.3)
A fundamental strength that all white-box techniques share is that the entire software implementation is taken into account during testing, which facilitates defect detection even when the software specification is vague, outdated or incomplete. A corresponding weakness is that if the software does not implement one or more requirements, white box testing may not detect the resulting defects of omission.
7.4. Experience-based Test Techniques (4.4)
Commonly used experience-based test techniques discussed in the following sections are:
- Error guessing
- Exploratory testing
- Checklist-based testing
7.4.1. Error Guessing (4.4.1)
Error guessing is a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester’s knowledge, including:
- How the application has worked in the past
- The types of errors the developers tend to make and the types of defects that result from these errors
- The types of failures that have occurred in other, similar applications
7.4.2. Exploratory Testing (4.4.2)
In exploratory testing, tests are simultaneously designed, executed, and evaluated while the tester learns about the test object. The testing is used to learn more about the test object, to explore it more deeply with focused tests, and to create tests for untested areas.
Exploratory testing is useful when there are few or inadequate specifications or there is significant time pressure on the testing. Exploratory testing is also useful to complement other more formal test techniques. Exploratory testing will be more effective if the tester is experienced, has domain knowledge and has a high degree of essential skills, like analytical skills, curiosity and creativeness.
7.4.3. Checklist-Based Testing (4.4.3)
In checklist-based testing, a tester designs, implements, and executes tests to cover test conditions from a checklist. Checklists can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails. Checklists should not contain items that can be checked automatically, items better suited as entry/exit criteria, or items that are too general. In the absence of detailed test cases, checklist-based testing can provide guidelines and some degree of consistency for the testing. If the checklists are high-level, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability.
7.5. Collaborative User Story Writing (4.5)
7.5.1. Acceptance Test-driven Development (ATDD) (4.5.3)
Acceptance Test-driven Development is a test-first approach. Test cases are created prior to implementing the user story. The test cases are created by team members with different perspectives, e.g., customers, developers, and testers. Test cases may be executed manually or automated. The first step is a specification workshop where the user story and (if not yet defined) its acceptance criteria are analyzed, discussed, and written by the team members. Incompleteness, ambiguities, or defects in the user story are resolved during this process. The next step is to create the test cases. This can be done by the team as a whole or by the tester individually. The test cases are based on the acceptance criteria and can be seen as examples of how the software works. This will help the team implement the user story correctly.
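A hedged sketch of the idea, with an invented user story: the acceptance criterion "orders of 50 or more ship for free, smaller orders cost 4.95" is written as executable test cases before (or alongside) the implementation, here in pytest style.

```python
# ATDD sketch (invented example): the acceptance tests below are written
# before the user story is implemented and fail until shipping_cost()
# satisfies the acceptance criterion.
def shipping_cost(order_value):
    return 0.0 if order_value >= 50 else 4.95

def test_orders_from_50_ship_free():
    assert shipping_cost(50) == 0.0

def test_smaller_orders_pay_standard_shipping():
    assert shipping_cost(49.99) == 4.95
```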
8. Managing the Test Activities (5)
Keywords
Key - EN | Key - DE | Value |
---|---|---|
Defect Management | Fehlermanagement | The process of detecting, recording, classifying, investigating, resolving and closing defects. |
Defect Report | Fehlerbericht | The documentation of the occurrence, type and status of a defect. |
Entry Criteria | Eingangskriterien | The set of conditions for the official start of a particular task. |
Exit Criteria | Endekriterien | The set of conditions for the official completion of a specific task. |
Product Risk | Produktrisiko | A risk that impairs the quality of a product. |
Project Risk | Projektrisiko | A risk that impairs the success of the project. |
Risk | Risiko | A factor that could have negative consequences in the future. |
Risk Analysis | Risikoanalyse | The general process of risk identification and risk assessment. |
Risk Assessment | Risikobewertung | The process of assessing identified risks and determining the risk level. |
Risk Control | Risikosteuerung | Involves implementing strategies and actions to mitigate, manage, or eliminate risks identified during testing to ensure the testing process meets its objectives and remains effective. |
Risk Identification | Risikoidentifizierung | Identifying, recognising and describing risks. |
Risk Level | Risikostufe | The qualitative or quantitative measure of a risk, defined by the extent of damage and probability of occurrence. |
Risk Management | Risikomanagement | The process for handling risks. |
Risk Mitigation | Risikominderung | The process by which decisions are made and protective measures are implemented to reduce the risk to a predetermined level or to keep it at a certain level. |
Risk Monitoring | Risikoüberwachung | The ongoing process of identifying, assessing, and tracking potential risks in testing activities to ensure timely, quality-driven outcomes. |
Risk-Based Testing | Risikobasierter Test | A test procedure in which the management, selection, prioritization and application of test activities and resources are based on corresponding risk types and risk levels. |
Test Approach | Testansatz | The implementation of a test strategy in a specific test. |
Test Completion Report | Testabschlussbericht | A type of test report that is generated when completion milestones are reached and provides an assessment of the corresponding test items based on the exit criteria. |
Test Control | Teststeuerung | The activity that develops and applies corrective actions to bring a test project back on track when it deviates from the plan. |
Test Monitoring | Testüberwachung | The activity that checks the status of test activities, identifies any deviations from the plan or expectation and reports the status to stakeholders. |
Test Plan | Testkonzept | The documentation of the test objectives as well as the measures and scheduling to achieve them for the purpose of coordinating test activities. |
Test Planning | Testplanung | An activity in the test process for creating and updating the test plan. |
Test Progress Report | Testfortschrittsbericht | A type of test report that is prepared at regular intervals and provides information on the progress of test activities in relation to a defined basis for comparison and on risks, as well as on alternatives if a decision is required. |
Test Pyramid | Testpyramide | A graphical model showing the ratio of the test scopes of the individual test levels, with more scope at the bottom than at the top. |
Testing Quadrants | Testquadranten | Conceptual framework that divides testing into four quadrants, each representing a different type of testing activity based on its purpose and focus. |
8.1. Test Planning (5.1)
8.1.1. Purpose and Content of a Test Plan (5.1.1)
A test plan describes the objectives, resources and processes for a test project. A test plan:
- Documents the means and schedule for achieving test objectives
- Helps to ensure that the performed test activities will meet the established criteria
- Serves as a means of communication with team members and other stakeholders
- Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)
The typical content of a test plan includes:
- Context of testing (e.g., scope, test objectives, constraints, test basis)
- Assumptions and constraints of the test project
- Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
- Communication (e.g., forms and frequency of communication, documentation templates)
- Risk register (e.g., product risks, project risks)
- Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)
- Budget and schedule
8.1.2. Entry Criteria and Exit Criteria (5.1.3)
Entry criteria define the preconditions for undertaking a given activity. If entry criteria are not met, the activity is likely to be more difficult, time-consuming, costly, and riskier. Exit criteria define what must be achieved in order to declare an activity completed. Entry criteria and exit criteria should be defined for each test level and will differ based on the test objectives. Typical entry criteria include the availability of resources, testware, and an initial quality level of the test object; typical exit criteria include measures of thoroughness (e.g., achieved coverage) and completion criteria (e.g., planned tests executed, all defects found reported).
8.1.3. Estimation Techniques (5.1.4)
Test effort estimation involves predicting the amount of test-related work needed to meet the objectives of a test project.
- Estimation based on ratios
- Extrapolation
- Wideband Delphi
- Three-point estimation (see the sketch after this list)
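As an illustration of the last technique: three-point estimation combines an optimistic (a), most likely (m) and pessimistic (b) estimate into E = (a + 4m + b) / 6 with a standard deviation of (b - a) / 6; the person-day figures below are invented.

```python
# Three-point (PERT-style) test effort estimation:
# E = (a + 4m + b) / 6, standard deviation SD = (b - a) / 6.
def three_point_estimate(optimistic, most_likely, pessimistic):
    estimate = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return estimate, std_dev

# Invented figures: 6, 9 and 18 person-days -> estimate 10 +/- 2 person-days.
print(three_point_estimate(6, 9, 18))   # (10.0, 2.0)
```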
8.1.4. Test Case Prioritization (5.1.5)
Once the test cases and test procedures are specified and assembled into test suites, these test suites can be arranged in a test execution schedule that defines the order in which they are to be run.
- Risk-based prioritization, where the order of test execution is based on the results of risk analysis (a minimal sketch follows this list).
- Coverage-based prioritization, where the order of test execution is based on coverage.
- Requirements-based prioritization, where the order of test execution is based on the priorities of the requirements traced back to the corresponding test cases.
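A minimal sketch of risk-based prioritization (the test suites, the 1-5 scoring and the use of likelihood × impact as a risk score are illustrative assumptions, not a prescribed ISTQB calculation):

```python
# Hypothetical test suites annotated with risk likelihood and impact (1-5).
test_suites = [
    {"name": "payment",   "likelihood": 4, "impact": 5},
    {"name": "reporting", "likelihood": 2, "impact": 2},
    {"name": "login",     "likelihood": 3, "impact": 4},
]

# One common convention: risk score = likelihood x impact; the test
# execution schedule runs the highest-risk suites first.
execution_schedule = sorted(
    test_suites,
    key=lambda suite: suite["likelihood"] * suite["impact"],
    reverse=True,
)
print([suite["name"] for suite in execution_schedule])
# ['payment', 'login', 'reporting']
```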
8.1.5. Testing Quadrants (5.1.7)
The testing quadrants group the test levels with the appropriate test types, activities, test techniques and work products in Agile software development. The model supports test management in visualizing these to ensure that all appropriate test types and test levels are included in the software development lifecycle and in understanding that some test types are more relevant to certain test levels than others. This model also provides a way to differentiate and describe the types of tests to all stakeholders, including developers, testers, and business representatives. In this model, tests can be business facing or technology facing. Tests can also support the team (i.e., guide the development) or critique the product (i.e., measure its behavior against the expectations). The combination of these two viewpoints determines the four quadrants:
- Quadrant Q1 (technology facing, support the team). This quadrant contains component and component integration tests. These tests should be automated and included in the CI process.
- Quadrant Q2 (business facing, support the team). This quadrant contains functional tests, examples, user story tests, user experience prototypes, API testing, and simulations. These tests check the acceptance criteria and can be manual or automated.
- Quadrant Q3 (business facing, critique the product). This quadrant contains exploratory testing, usability testing, user acceptance testing. These tests are user-oriented and often manual.
- Quadrant Q4 (technology facing, critique the product). This quadrant contains smoke tests and non-functional tests (except usability tests). These tests are often automated.
8.2. Risk Management (5.2)
Risk management allows organizations to increase the likelihood of achieving objectives, improve the quality of their products, and increase the stakeholders' confidence and trust. The main risk management activities are:
- Risk analysis (consisting of risk identification and risk assessment)
- Risk control (consisting of risk mitigation and risk monitoring)
8.2.1. Risk Definition and Risk Attributes (5.2.1)
Risk is a potential event, hazard, threat, or situation whose occurrence causes an adverse effect. A risk can be characterized by two factors:
- Risk likelihood – the probability of the risk occurrence (greater than zero and less than one)
- Risk impact (harm) – the consequences of this occurrence
8.2.2. Project Risks and Product Risks (5.2.2)
In software testing one is generally concerned with two types of risks: project risks and product risks.
Project risks are related to the management and control of the project. Project risks include:
- Organizational issues (e.g., delays in work product deliveries, inaccurate estimates, cost-cutting)
- People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
- Technical issues (e.g., scope creep, poor tool support)
- Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)
Project risks, when they occur, may have an impact on the project schedule, budget or scope, which affects the project's ability to achieve its objectives.
Product risks are related to the product quality characteristics. Examples of product risks include: missing or wrong functionality, incorrect calculations, runtime errors, poor architecture, inefficient algorithms, inadequate response time, poor user experience, security vulnerabilities.
Product risks, when they occur, may result in various negative consequences, including:
- User dissatisfaction
- Loss of revenue, trust, reputation
- Damage to third parties
- High maintenance costs, overload of the helpdesk
- Criminal penalties
- In extreme cases, physical damage, injuries or even death
8.3. Test Monitoring, Test Control and Test Completion (5.3)
Test monitoring is concerned with gathering information about testing. This information is used to assess test progress and to measure whether the test exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.
Test control uses the information from test monitoring to provide, in the form of control directives, guidance and the corrective actions needed to achieve the most effective and efficient testing.
Examples of control directives include:
- Reprioritizing tests when an identified risk becomes an issue
- Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
- Adjusting the test schedule to address a delay in the delivery of the test environment
- Adding new resources when and where needed
Test completion collects data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.
8.3.1. Metrics used in Testing (5.3.1)
Test metrics are gathered to show progress against the planned schedule and budget, the current quality of the test object, and the effectiveness of the test activities with respect to the objectives or an iteration goal. Test monitoring gathers a variety of metrics to support the test control and test completion.
Common test metrics include:
- Project progress metrics (e.g., task completion, resource usage, test effort)
- Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
- Product quality metrics (e.g., availability, response time, mean time to failure)
- Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage; see the calculation sketch after this list)
- Risk metrics (e.g., residual risk level)
- Coverage metrics (e.g., requirements coverage, code coverage)
- Cost metrics (e.g., cost of testing, organizational cost of quality)
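A small worked example for the defect metrics above (all figures invented): the defect detection percentage (DDP) relates the defects found by testing to all known defects, including those found later, and defect density relates the defect count to the size of the test object.

```python
# Invented figures for one release.
defects_found_in_testing = 45
defects_found_after_release = 5
size_in_kloc = 30  # size of the test object in thousand lines of code

# Defect detection percentage: share of all known defects found by testing.
ddp = defects_found_in_testing / (
    defects_found_in_testing + defects_found_after_release
) * 100

# Defect density: defects per unit of size (here: per KLOC).
defect_density = defects_found_in_testing / size_in_kloc

print(f"DDP: {ddp:.0f} %")                                     # DDP: 90 %
print(f"Defect density: {defect_density:.1f} defects/KLOC")    # 1.5 defects/KLOC
```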
8.3.2. Purpose, Content and Audience for Test Reports (5.3.2)
Test reporting summarizes and communicates test information during and after testing. Test progress reports support the ongoing control of the testing and must provide enough information to make modifications to the test schedule, resources, or test plan, when such changes are needed due to deviation from the plan or changed circumstances. Test completion reports summarize a specific stage of testing (e.g., test level, test cycle, iteration) and can give information for subsequent testing.
During test monitoring and control, the test team generates test progress reports for stakeholders to keep them informed. Test progress reports are usually generated on a regular basis (e.g., daily, weekly, etc.) and include:
- Test period
- Test progress (e.g., ahead or behind schedule), including any notable deviations
- Impediments for testing, and their workarounds
- Test metrics (see section 5.3.1 for examples)
- New and changed risks within testing period
- Testing planned for the next period
A test completion report is prepared during test completion, when a project, test level, or test type is complete and when, ideally, its exit criteria have been met. This report uses test progress reports and other data. Typical test completion reports include:
- Test summary
- Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
- Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort).
- Testing impediments and workarounds
- Test metrics based on test progress reports
- Unmitigated risks, defects not fixed
- Lessons learned that are relevant to the testing