Understanding Some Important Test Metrics
Software testing metrics play a crucial role in assessing the effectiveness, efficiency, and quality of the testing process. These metrics provide valuable insights into various aspects of software testing, enabling teams to measure and monitor their testing activities, identify areas for improvement, and make data-driven decisions. By quantifying different attributes and performance indicators, software testing metrics help teams evaluate the progress, coverage, defect management, and overall reliability of the software under test. Understanding and utilizing these metrics is essential for ensuring successful software testing and delivering high-quality products to end-users.
Some Important Test Metrics
Derivative Metrics:
- Definition: Metrics derived from base measurements that help identify specific areas of improvement in the testing process.
- Example: Number of test cases modified due to changing requirements.
Defect Density:
- Definition: Number of defects found per unit of size of the software, usually expressed per thousand lines of code (KLOC).
- Formula: Defect Density = (Number of Defects) / (Size of the Release/Module)
- Example: If you have identified 50 defects in a module with 10,000 lines of code, the defect density would be calculated as:
Defect Density = 50 / 10,000 = 0.005 defects per line of code, which is 5 defects per KLOC
This means there are, on average, 5 defects per thousand lines of code in the module, giving a measure of how concentrated defects are in the software.
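To make the calculation concrete, here is a minimal Python sketch; the function name and the per-KLOC normalization are illustrative choices, not part of any standard library:
def defect_density(defects: int, lines_of_code: int) -> float:
    """Return defects per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects / lines_of_code * 1000
# Example from above: 50 defects in a 10,000-line module
print(defect_density(50, 10_000))  # 5.0 defects per KLOC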
Defect Leakage:
- Definition: Percentage of defects that escape a testing phase and are only found in a later phase, such as User Acceptance Testing (UAT) or by end users after release.
- Formula: Defect Leakage = (Total Number of Defects Found in UAT / Total Number of Defects Found Before UAT) x 100
- Example: If 10 defects were found in UAT, and a total of 100 defects were identified during testing before UAT:
Defect Leakage = (10 / 100) x 100 = 10%
This means 10% of the defects slipped past the earlier test phases; the sketch after the next metric shows this calculation in code.
Defect Removal Efficiency (DRE):
- Definition: Measure of the effectiveness of defect removal during testing.
- Formula: DRE = Number of Defects Resolved / Total Number of Defects at the Moment of Measurement
- Example: If the development team resolved 500 defects out of a total of 800 defects identified during testing, the Defect Removal Efficiency would be calculated as:
DRE = 500 / 800 = 0.625 (or 62.5%)
This means that the development team was able to resolve and remove 62.5% of the defects identified during testing.
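A minimal Python sketch covering both Defect Leakage and DRE; the function names are illustrative and the counts are the hypothetical figures from the two examples above:
def defect_leakage(defects_in_uat: int, defects_before_uat: int) -> float:
    """Percentage of defects that slipped past testing and surfaced in UAT."""
    return defects_in_uat / defects_before_uat * 100
def defect_removal_efficiency(defects_resolved: int, total_defects: int) -> float:
    """Share of known defects that have been resolved at the moment of measurement."""
    return defects_resolved / total_defects * 100
print(defect_leakage(10, 100))              # 10.0 (percent)
print(defect_removal_efficiency(500, 800))  # 62.5 (percent)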
Defect Category:
- Definition: Distribution of defects based on different quality attributes (usability, performance, functionality, stability, reliability, etc.).
- Formula: Defect Category = Defects Belonging to a Particular Category / Total Number of Defects
- Example: Let's say you have identified 20 defects related to usability out of a total of 100 defects found during testing. To calculate the Defect Category for usability:
Defect Category (Usability) = (20 / 100) = 0.2 (or 20%)
This indicates that 20% of the total defects discovered are related to usability issues in the software (a short sketch after the Defect Severity Index below computes this together with the severity index).
Defect Severity Index (DSI):
- Definition: Measure of the impact of defects on software development or operation.
- Formula: DSI = (Sum of (Number of Defects at Each Severity Level * Severity Level)) / Total Number of Defects
- Example: Let's consider a scenario where you have identified 100 defects with varying severity levels. The severity levels are assigned as follows:
Critical: severity level of 5; High: severity level of 4; Medium: severity level of 3; Low: severity level of 2.
If you have 20 critical defects, 30 high defects, 40 medium defects, and 10 low defects, the Defect Severity Index can be calculated as:
DSI = ((20 * 5) + (30 * 4) + (40 * 3) + (10 * 2)) / 100 = 3.6
The resulting DSI value of 3.6 indicates the average severity level of the identified defects. Higher DSI values suggest a higher overall impact and severity of the defects found.
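Both the category distribution and the severity index can be derived from one list of defect records; here is a minimal Python sketch, assuming each defect is tagged with a category string and a numeric severity weight (the sample data simply reproduces the counts used above):
from collections import Counter
# Hypothetical defect log: (category, severity weight) pairs.
# Severity weights as above: Critical=5, High=4, Medium=3, Low=2.
defects = (
    [("usability", 3)] * 20        # 20 usability defects, medium severity
    + [("functionality", 5)] * 20  # 20 critical functional defects
    + [("performance", 4)] * 30    # 30 high-severity performance defects
    + [("stability", 3)] * 20      # 20 more medium-severity defects
    + [("reliability", 2)] * 10    # 10 low-severity defects
)
total = len(defects)  # 100 defects in total
# Defect Category: share of defects falling into each category
category_counts = Counter(category for category, _ in defects)
print(category_counts["usability"] / total)       # 0.2 -> 20% usability defects
# Defect Severity Index: severity-weighted average across all defects
dsi = sum(severity for _, severity in defects) / total
print(dsi)                                        # 3.6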
Review Efficiency:
- Definition: Measure of the effectiveness of reviews in identifying defects.
- Formula: Review Efficiency = Total Number of Review Defects / (Total Number of Review Defects + Total Number of Testing Defects) x 100
- Example: Let's consider a scenario where during the review process, you identified 20 defects, and during testing, you identified 50 defects. To calculate the Review Efficiency:
Review Efficiency = (20) / (20 + 50) x 100 = 28.57%
The resulting Review Efficiency of 28.57% indicates that, of all the defects found, 28.57% were detected during the review process (the sketch after Test Case Productivity below computes this alongside the two test case metrics).
Test Case Effectiveness:
- Definition: Measure of the ability of test cases to detect defects.
- Formula: Test Case Effectiveness = (Number of Defects Detected / Number of Test Cases Run) x 100
- Example: Let's say you executed 100 test cases and identified 80 defects during testing. To calculate Test Case Effectiveness:
Test Case Effectiveness = (80 / 100) x 100 = 80%
The resulting Test Case Effectiveness of 80% means that, on average, 0.8 defects were detected per test case executed, indicating how effective the test cases are at exposing defects.
Test Case Productivity:
- Definition: Measure of the number of test cases created per unit of effort.
- Formula: Test Case Productivity = (Number of Test Cases / Effort Spent on Test Case Preparation)
- Example: Let's consider a scenario where you created 200 test cases and spent 40 hours on test case preparation. To calculate Test Case Productivity:
Test Case Productivity = 200 / 40 = 5 test cases per hour
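A minimal Python sketch wrapping Review Efficiency, Test Case Effectiveness, and Test Case Productivity in one place; the function names are illustrative and the arguments are the hypothetical figures from the examples above:
def review_efficiency(review_defects: int, testing_defects: int) -> float:
    """Share of all defects that were caught during reviews, in percent."""
    return review_defects / (review_defects + testing_defects) * 100
def test_case_effectiveness(defects_detected: int, test_cases_run: int) -> float:
    """Defects detected per 100 executed test cases."""
    return defects_detected / test_cases_run * 100
def test_case_productivity(test_cases: int, effort_hours: float) -> float:
    """Test cases prepared per hour of preparation effort."""
    return test_cases / effort_hours
print(round(review_efficiency(20, 50), 2))  # 28.57 (percent)
print(test_case_effectiveness(80, 100))     # 80.0
print(test_case_productivity(200, 40))      # 5.0 test cases per hour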
Test Coverage:
- Definition: Measure of the extent to which the software's functionality is covered by tests.
- Formula: Test Coverage = Number of Detected Faults / Number of Predicted Defects
- Example: If you detected 75 faults out of 100 predicted defects, the Test Coverage would be:
Test Coverage = (75 / 100) = 0.75 (or 75%)
This indicates that 75% of the predicted defects were detected during testing, highlighting the coverage achieved (a single sketch after Test Execution Coverage below covers this and the next three coverage metrics).
Requirement Coverage:
- Definition: Measure of the extent to which the requirements are covered by tests.
- Formula: Requirement Coverage = (Number of Requirements Covered / Total Number of Requirements) x 100
- Example: If you covered 80 requirements out of a total of 100 requirements, the Requirement Coverage would be:
Requirement Coverage = (80 / 100) x 100 = 80%
This indicates that 80% of the requirements were covered by the test cases executed.
Test Design Coverage:
- Definition: Measure of the percentage of requirements that have been mapped to test cases during test design.
- Formula: Test Design Coverage = (Total Number of Requirements Mapped to Test Cases / Total Number of Requirements) x 100
- Example: If you have mapped test cases to 60 out of 80 total requirements, the Test Design Coverage would be:
Test Design Coverage = (60 / 80) x 100 = 75%
This indicates that 75% of the requirements have been covered by the test cases designed.
Test Execution Coverage:
- Definition: Measure of the percentage of test cases executed out of the planned test cases.
- Formula: Test Execution Coverage = (Total Number of Executed Test Cases / Total Number of Planned Test Cases) x 100
- Example: If you executed 200 test cases out of 250 planned test cases, the Test Execution Coverage would be:
Test Execution Coverage = (200 / 250) x 100 = 80%
This indicates that 80% of the planned test cases have been executed.
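The four coverage metrics above all share the same covered-over-total shape, so a single helper is enough; here is a minimal Python sketch using the example figures from this section (the function name is illustrative):
def coverage(covered: int, total: int) -> float:
    """Generic coverage percentage: covered items over total items."""
    return covered / total * 100
print(coverage(75, 100))   # Test Coverage: 75% of predicted defects detected
print(coverage(80, 100))   # Requirement Coverage: 80% of requirements exercised
print(coverage(60, 80))    # Test Design Coverage: 75% of requirements mapped to test cases
print(coverage(200, 250))  # Test Execution Coverage: 80% of planned test cases executed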
Test Tracking & Efficiency:
- Definition: Metrics related to tracking the progress and efficiency of testing activities.
- Examples: Passed Test Cases Coverage, Failed Test Case Coverage, Test Cases Blocked, Fixed Defects Percentage, Accepted Defects Percentage, Defects Rejected Percentage, Defects Deferred Percentage, Critical Defects Percentage, Average Time Taken to Rectify Defects, Test Effort Percentage, Number of Test Run Per Time Period, Test Design Efficiency, Bug Find Rate, Number of Bugs Per Test, Average Time to Test a Bug Fix.
Test Tracking & Efficiency is a software testing metric that focuses on tracking the progress of testing activities and evaluating the efficiency of the testing process. It provides insights into how well the testing efforts are managed, monitored, and optimized throughout the testing lifecycle.
- Passed Test Cases Coverage: It measures the percentage of test cases that have passed during testing. Formula: (Number of Passed Tests) / (Total Number of Tests Executed) x 100
Example - Let's say you executed 150 test cases and 135 of them passed successfully. Passed Test Cases Coverage = (135 / 150) x 100 = 90%. This indicates that 90% of the executed test cases produced the expected results, demonstrating a high level of reliability in the software.
- Failed Test Case Coverage: It measures the percentage of test cases that failed during testing. Formula: (Number of Failed Tests) / (Total Number of Tests Executed) x 100
Example - If 30 test cases out of 150 failed during execution, Failed Test Case Coverage = (30 / 150) x 100 = 20%. This shows that 20% of the executed test cases did not produce the expected results, highlighting areas that require further investigation and defect resolution.
- Test Cases Blocked: It determines the percentage of test cases that are blocked or unable to be executed. Formula: (Number of Blocked Tests) / (Total Number of Tests Executed) x 100
Example - Suppose that during testing 10 test cases were blocked due to unavailability of necessary resources or dependencies, and the total number of test cases in the cycle is 150. Test Cases Blocked percentage = (10 / 150) x 100 = 6.67%. This metric helps identify any hindrances or issues that prevent the execution of certain test cases.
- Fixed Defects Percentage: It measures the percentage of defects that have been fixed. Formula: (Defects Fixed) / (Total Number of Defects Reported) x 100
Example - Let's assume that out of 100 reported defects, 80 have been fixed by the development team. The Fixed Defects Percentage = (80 / 100) x 100 = 80%. This metric shows the effectiveness of defect resolution efforts and the progress made in fixing reported issues.
- Accepted Defects Percentage: It measures the percentage of reported defects that have been accepted as valid by the development team. Formula: (Defects Accepted as Valid) / (Total Number of Defects Reported) x 100
Example - If the development team accepts 70 out of the 100 reported defects, the Accepted Defects Percentage = (70 / 100) x 100 = 70%. This metric reflects the proportion of reported defects that are deemed valid and require action from the development team.
- Defects Rejected Percentage: It measures the percentage of defects that have been rejected by the development team. Formula: (Number of Defects Rejected by Development Team) / (Total Number of Defects Reported) x 100
Example - Suppose the development team rejects 20 out of the 100 reported defects. The Defects Rejected Percentage = (20 / 100) x 100 = 20%. This metric indicates the portion of reported defects that are considered invalid or not reproducible.
- Defects Deferred Percentage: It determines the percentage of defects that have been deferred to future releases. Formula: (Defects Deferred for Future Releases) / (Total Number of Defects Reported) x 100
Example - If 30 defects are deferred to future releases out of the 100 reported defects, the Defects Deferred Percentage = (30 / 100) x 100 = 30%. This metric shows the proportion of reported defects that are scheduled to be addressed in subsequent releases.
- Critical Defects Percentage: It measures the percentage of critical defects among all reported defects. Formula: (Number of Critical Defects) / (Total Number of Defects Reported) x 100
Example - Let's assume that out of the 100 reported defects, 10 are classified as critical. The Critical Defects Percentage = (10 / 100) x 100 = 10%. This metric helps highlight the severity and impact of defects on the software's functionality and performance.
- Average Time Taken to Rectify Defects: It calculates the average time taken by the development and testing team to rectify defects. Formula: (Total Time Taken for Bug Fixes) / (Number of Bugs)
Example - If the total time taken for bug fixes is 100 hours and 20 bugs were fixed, the Average Time Taken to Rectify Defects = 100 / 20 = 5 hours per bug on average. This metric provides insights into the efficiency of the defect resolution process.
- Test Effort Percentage: It compares the estimated test effort with the actual effort invested in testing. Formula: (Actual Test Effort) / (Estimated Test Effort) x 100
Example - Suppose the estimated test effort for a project was 200 hours, but the actual test effort invested was 180 hours. The Test Effort Percentage = (180 / 200) x 100 = 90%. This metric compares the estimated effort with the actual effort invested, indicating the level of accuracy in planning and resource allocation.
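A minimal Python sketch that computes several of the tracking percentages above from raw counts; the dictionary keys are illustrative, and each number mirrors the standalone example in the text rather than one internally consistent test run:
run = {"executed": 150, "passed": 135, "failed": 30, "blocked": 10}
defects = {"reported": 100, "fixed": 80, "accepted": 70, "rejected": 20, "deferred": 30, "critical": 10}
def pct(part: float, whole: float) -> float:
    """Simple percentage helper shared by all tracking metrics."""
    return part / whole * 100
print(pct(run["passed"], run["executed"]))             # 90.0  -> Passed Test Cases Coverage
print(pct(run["failed"], run["executed"]))             # 20.0  -> Failed Test Case Coverage
print(round(pct(run["blocked"], run["executed"]), 2))  # 6.67  -> Test Cases Blocked
print(pct(defects["fixed"], defects["reported"]))      # 80.0  -> Fixed Defects Percentage
print(pct(defects["rejected"], defects["reported"]))   # 20.0  -> Defects Rejected Percentage
# Average Time Taken to Rectify Defects and Test Effort Percentage
print(100 / 20)       # 5.0 hours per bug on average
print(pct(180, 200))  # 90.0 -> actual vs. estimated test effort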
Test Effectiveness:
- Definition: Metrics that measure the ability of testing to detect defects and assess the quality of the test set.
- Formula: Test Effectiveness (TEF) = (Total Number of Defects Found During Testing / (Total Number of Defects Found During Testing + Total Number of Defects That Escaped to Later Phases)) x 100
- Example: Let's consider a scenario where 500 defects were found during testing and 50 defects escaped testing and were found later. To calculate Test Effectiveness:
Test Effectiveness = (500 / (500 + 50)) x 100 = 90.9%
This indicates that testing caught roughly 91% of all the defects present in the software.
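A minimal Python sketch of this calculation, using the numbers from the example above (the function name is illustrative):
def test_effectiveness(found_in_testing: int, escaped: int) -> float:
    """Share of all defects that testing managed to catch, in percent."""
    return found_in_testing / (found_in_testing + escaped) * 100
print(round(test_effectiveness(500, 50), 1))  # 90.9 (percent)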
Test Economic Metrics:
- Definition: Metrics related to the cost and budget of testing activities.
- Examples: Total Allocated Cost of Testing, Actual Cost of Testing, Variance from Estimated Budget, Variance from Schedule, Cost per Bug Fix, Cost of Not Testing.
Test Team Metrics:
- Definition: Metrics related to the performance and distribution of work within the test team.
- Examples: Returned Defects Distributed Team Member-wise, Open Defects Distributed to Retest per Test Team Member, Test Cases Allocated to Each Test Team Member, Number of Test Cases Executed by Each Test Team Member.
In conclusion, software testing metrics provide essential quantitative measurements that enable teams to evaluate and improve their testing efforts. By tracking metrics such as defect density, defect leakage, defect removal efficiency, and test coverage, teams gain valuable insights into the effectiveness of their testing strategies and identify areas that require attention. Other metrics, such as test case effectiveness, test case productivity, and review efficiency, offer a deeper understanding of the quality and efficiency of the testing process. Ultimately, leveraging software testing metrics empowers teams to enhance their testing practices, optimize resource allocation, and deliver software products of superior quality that meet customer expectations.