
The software industry is experiencing unprecedented growth, driven by digital transformation. Software quality has thus become a strategic imperative.
The 15th World Quality Report underscores this shift, highlighting the growing emphasis on quality engineering and its integration into core business operations.
With a focus on delivering value rather than volume, 67% of companies are prioritizing quality assurance (QA) as a cornerstone of their operations.
To thrive in today’s quality-first software landscape, a lot comes down to setting the right benchmarks for measuring success.
In this blog, we take a deep dive into the software quality metrics you should be measuring – and the critical role of Quality Gap Intelligence in maximizing your end-to-end SDLC potential.
The three core dimensions of software quality are:
To achieve optimal software quality, organizations must adopt a holistic approach that incorporates the following strategies:
Pro Tip: To achieve a truly holistic view of software quality, it’s essential to embed quality metrics directly within existing development and work management tools. By visualizing quality data alongside development progress, teams can proactively address quality issues from the get-go.
Test Coverage
What is it
The percentage of code executed by test cases.
Calculation
(Number of lines of code covered by tests / Total number of executable lines of code) × 100
Interpretation
Higher coverage generally indicates better test effectiveness, but it doesn’t guarantee quality. Aim for high coverage in critical areas.
Improvement Strategies
Prioritize test case creation for uncovered areas, use code coverage tools, and refactor code for better testability.
Pro Tip: Measure test coverage for new or modified code. This targets testing efforts on areas most likely to introduce defects, reducing overall test execution time as well.
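The coverage formula above, including the Pro Tip's focus on changed lines, can be sketched in Python. The line sets and module size are illustrative; in practice `covered` would come from a coverage tool and `changed` from a diff:

```python
def coverage_pct(covered: set, executable: set) -> float:
    """Percentage of executable lines exercised by tests."""
    if not executable:
        return 100.0
    return 100.0 * len(covered & executable) / len(executable)

# Overall coverage: 150 of 200 executable lines are exercised.
executable = set(range(1, 201))
covered = set(range(1, 151))
overall = coverage_pct(covered, executable)          # 75.0

# Coverage restricted to lines touched in the latest change set.
changed = set(range(140, 160))                       # lines 140-159 modified
changed_cov = coverage_pct(covered, changed & executable)   # 55.0
```

Restricting the denominator to changed lines is what makes the metric actionable per commit: a module can show healthy overall coverage while the lines you just touched are barely tested.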
Defect Density
What is it
The number of defects found per file/module/feature.
Ways to measure
Defect density is typically calculated per thousand lines of code (KLOC). For example, if a software module has 1,000 lines of code and 10 defects, the defect density is 10/1,000 = 0.01 defects per line of code, or 10 defects per KLOC.
Industry standard for defect density:
1 defect per KLOC (1,000 lines of code).
Interpretation
High defect density indicates potential issues with requirements or testing. It helps product teams determine which features to release based on risk.
Improvement Strategies
Clarify requirements, enhance test case design, and conduct early defect prevention activities.
Pro Tip: By incorporating work item IDs into commit messages, software teams can trace how often a file or a line was touched by commits linked to 'Bug'-type work items.
This practice establishes a direct link between code changes and the corresponding defects or user stories, allowing for efficient root cause analysis.
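The Pro Tip's commit-message tracing can be sketched as follows. The `BUG-123` ID format, commit records, and file names are all hypothetical; real data would come from `git log --name-only` or a repository API:

```python
import re
from collections import Counter

# Hypothetical work item ID pattern; adapt to your tracker's format.
BUG_ID = re.compile(r"\bBUG-\d+\b")

# Illustrative commit records (message + files touched).
commits = [
    {"message": "Fix null check (BUG-101)",   "files": ["auth.py", "db.py"]},
    {"message": "Refactor session handling",  "files": ["auth.py"]},
    {"message": "BUG-102: patch race in pool", "files": ["db.py"]},
]

# Count how often each file is touched by bug-linked commits.
bug_touches = Counter()
for c in commits:
    if BUG_ID.search(c["message"]):
        bug_touches.update(c["files"])
# db.py appears in two bug-linked commits, auth.py in one.
```

Files that accumulate the most bug-linked touches are natural candidates for deeper root cause analysis.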
Defects per Change
What is it
The number of defects introduced per code change.
Calculation
Number of defects / Number of code changes.
Interpretation
A high number of defects per change signals risky or insufficiently reviewed changes and helps flag unstable areas of the codebase.
Improvement Strategies
Conduct thorough root-cause analysis, increase unit test coverage, and adopt intelligent impact analysis to identify change-caused defects early.
Pro Tip: Break down defects per change by team, module, developer, reviewer, etc. The best way to ensure traceability is by linking work items to defects at the project and portfolio level.
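A minimal sketch of the defects-per-change calculation, including the per-module breakdown from the Pro Tip. The change records and the `caused_defect` flag are assumptions; in practice the flag would come from linking defects back to the change that introduced them:

```python
from collections import defaultdict

# Illustrative change records: each links a code change to a module
# and records whether a defect was later traced back to it.
changes = [
    {"module": "auth",   "caused_defect": True},
    {"module": "auth",   "caused_defect": False},
    {"module": "orders", "caused_defect": False},
    {"module": "orders", "caused_defect": False},
]

# Overall rate: defects / code changes.
overall = sum(c["caused_defect"] for c in changes) / len(changes)   # 0.25

# Per-module rate, for pinpointing unstable areas.
per_module = defaultdict(lambda: [0, 0])   # module -> [defects, changes]
for c in changes:
    per_module[c["module"]][0] += c["caused_defect"]
    per_module[c["module"]][1] += 1
rates = {m: d / n for m, (d, n) in per_module.items()}
```

The same grouping works for any dimension (developer, reviewer, team) once the traceability link exists.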
Test Effort and Test Reliability
What is it
Test effort refers to the overall resources (time, personnel, tools) invested in the testing process. It encompasses activities like test planning, design, execution, and analysis.
Test reliability means the consistency and dependability of test results.
Calculation
Test effort is typically measured as the person-hours spent across test planning, design, execution, and analysis. Test reliability can be measured as the share of test runs that produce consistent results on unchanged code.
Interpretation
High test effort or low reliability indicates potential inefficiencies or test case issues.
Some questions to ask to measure test reliability:
Do tests produce the same results across repeated runs on unchanged code? How often do failures trace back to real defects rather than to the test environment or test data?
Pro Tip: With requirement-to-test traceability, measure test effort and test reliability by product module, key functionality, product team, etc.
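One common way to quantify reliability is flaky-test detection: a test that both passes and fails across runs on identical code is unreliable. A small sketch, with illustrative test names and run histories:

```python
# Illustrative run history: test name -> outcomes across repeated runs
# on unchanged code.
runs = {
    "test_login":    ["pass", "pass", "pass", "pass"],
    "test_checkout": ["pass", "fail", "pass", "fail"],
    "test_search":   ["fail", "fail", "fail", "fail"],
}

def is_flaky(results: list) -> bool:
    """Mixed outcomes on identical code indicate an unreliable test."""
    return len(set(results)) > 1

flaky_tests = [name for name, r in runs.items() if is_flaky(r)]
# Only test_checkout alternates; test_search fails consistently,
# which points to a real defect rather than an unreliable test.
```

Note that a consistently failing test is reliable in this sense; it is the inconsistent one that erodes trust in the suite.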
Test Case Effectiveness
What is it
The ability of test cases to identify defects.
Calculation
(Number of defects found by test cases / Total number of test cases) * 100
Interpretation
Low effectiveness indicates poor test case design or inadequate test coverage.
Improvement Strategies
Improve test case design, enhance test case review, and incorporate user feedback.
Pro Tip: Analyse test case history to understand how often test cases are revised, how often new test cases are added, etc.
Defect Leakage
What is it
Defect leakage is the percentage of defects that are not caught by the testing team but are found by end-users or customers after the application is delivered.
How to calculate defect leakage
(Total number of defects found in UAT / Total number of defects found before UAT) × 100.
Main causes for defect leakage
Insufficient code coverage, generic pass/fail tests, cutting corners while testing, missing test cases.
Improvement Strategies
Strengthen test coverage, improve test environment management, and conduct thorough production monitoring.
Interestingly, a study by IBM shows that the cost of fixing a defect multiplies as it progresses through the development lifecycle.
Design phase
The cost to fix a defect is typically around $1.
Testing phase
The cost to fix a defect jumps to over $10.
Post-release
Fixing a defect after the software is released can cost over $100.
This further emphasizes the critical importance of early defect detection and prevention.
Pro Tip: Analyze historical data to pinpoint areas where defects consistently slip through the testing net. Also, calculate the percentage of defects found in production compared to those discovered in pre-production environments.
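Both the leakage formula and the production-share ratio from the Pro Tip are straightforward to compute; the defect counts below are illustrative:

```python
def defect_leakage_pct(uat_defects: int, pre_uat_defects: int) -> float:
    """Defects that escaped testing, as a % of defects caught before UAT."""
    return 100.0 * uat_defects / pre_uat_defects

def production_share_pct(prod_defects: int, pre_prod_defects: int) -> float:
    """Production defects as a % of all defects found."""
    return 100.0 * prod_defects / (prod_defects + pre_prod_defects)

# E.g. 5 defects surfaced in UAT vs. 50 caught earlier.
leakage = defect_leakage_pct(5, 50)        # 10.0
# E.g. 8 production defects vs. 72 caught pre-production.
prod_share = production_share_pct(8, 72)   # 10.0
```

Combined with the cost multipliers above, even a 10% leakage rate translates into a disproportionate share of total fix cost.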
DevOps Research and Assessment (DORA) has established a benchmark for measuring software delivery performance. Its four key metrics — deployment frequency, lead time for changes, change failure rate, and mean time to restore service — provide insights into a team’s speed, stability, and ability to recover from failures.
Pro Tip: Make DORA metrics actionable by tracing lead time for changes, change failure rate, and mean time to recover back to assignees, teams, product areas, etc. Adjust risk indicators for tests and source code areas based on the history of changes that contributed to an increased failure rate.
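Two of the four DORA metrics can be derived directly from deployment records. The field names and timestamps below are assumptions; real data would come from a CI/CD system:

```python
from datetime import datetime
from statistics import median

# Illustrative deployment records: commit time, deploy time, and
# whether the deployment caused a failure in production.
deployments = [
    {"committed": datetime(2024, 5, 1, 9),  "deployed": datetime(2024, 5, 1, 17), "failed": False},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 10), "failed": True},
    {"committed": datetime(2024, 5, 3, 8),  "deployed": datetime(2024, 5, 3, 20), "failed": False},
]

# Lead time for changes: median commit-to-deploy time, in hours.
lead_times_h = [(d["deployed"] - d["committed"]).total_seconds() / 3600
                for d in deployments]
median_lead_time_h = median(lead_times_h)   # 12.0

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
```

Keeping the raw records (rather than just the aggregates) is what allows the Pro Tip's drill-down by assignee, team, or product area.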
Change impact analysis examines how code alterations affect the rest of the system. It helps identify risks, manage dependencies, and ensure smooth deployments with minimal disruption. Developers and testers can plan tests objectively when they know how a change impacts existing functionalities.
Change impact analysis focuses on ensuring that changes to the codebase do not negatively affect existing functionalities. By measuring test coverage of the impacted functionalities, teams can prioritize potential regression areas and focus testing effort where the risk is highest.
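At its simplest, impact analysis is a traversal of a reverse-dependency graph: everything that (transitively) depends on a changed module is a candidate regression area. The module names and graph below are illustrative; such a graph could be derived from import or build analysis:

```python
from collections import deque

# Reverse-dependency graph: module -> modules that depend on it.
dependents = {
    "db":     ["orders", "auth"],
    "auth":   ["api"],
    "orders": ["api"],
    "api":    [],
}

def impacted_modules(changed: str) -> set:
    """All modules that may need regression testing after `changed` changes."""
    seen, queue = set(), deque([changed])
    while queue:
        mod = queue.popleft()
        for dep in dependents.get(mod, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

impact = impacted_modules("db")   # {'orders', 'auth', 'api'}
```

Intersecting this impact set with test coverage data is what turns the analysis into a concrete, risk-ranked regression plan.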
Scope churn refers to the instability or frequent changes in project requirements or features during a release cycle. High scope churn can negatively impact project timelines, budgets, and quality.
Code override occurs when multiple developers modify the same code section. This can increase the complexity of code changes and the potential for introducing defects.
This metric measures source code areas that are changed frequently, have long development and test cycles, are touched by multiple developers, or have a history of high product defects. Using it ensures you are investing in the most painful technical debt and prioritizing resources effectively.
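A toy hotspot score combining these signals might look like the sketch below. The file statistics and the weights are illustrative assumptions; real inputs would come from version-control and defect history:

```python
# Illustrative per-file statistics mined from version control.
files = {
    "billing.py": {"changes": 40, "authors": 6, "defects": 9},
    "utils.py":   {"changes": 12, "authors": 2, "defects": 1},
}

def hotspot_score(stats: dict) -> float:
    """Higher score = riskier technical-debt hotspot.
    Weights are arbitrary here; tune them against your own defect history."""
    return stats["changes"] * 0.5 + stats["authors"] * 2.0 + stats["defects"] * 3.0

# Rank files by score, riskiest first.
ranked = sorted(files, key=lambda f: hotspot_score(files[f]), reverse=True)
```

Ranking files this way gives refactoring efforts an evidence-based priority order instead of relying on developer intuition alone.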
To effectively harness the power of these metrics and transform software quality, organizations need a unified, intelligent platform. This is where Quality Gap Intelligence (QGI) solutions like OpsHub Insights come into play.
OpsHub Insights empowers teams to:
Optimize E2E Test Coverage
Insights provides granular visibility into code changes and their associated impact. By measuring test coverage for both the entire codebase and specific modifications, teams can proactively mitigate gaps in test coverage.
Minimize Defect Leakage
By analyzing defect trends and historical data, Insights pinpoints functionalities prone to defects. It correlates defects with specific code modules to help teams minimize defects slipping through to production.
Identify and Mitigate Risks
Reduce Scope Churn
Accelerate Test Execution for Faster Releases
Minimize Defects, Accelerate Releases
Muskaan works as a Content and SEO Strategist at OpsHub. Her interests include devising content marketing strategies for SaaS enterprises, brand strategy and the convergence of product-first thinking with emerging tech and communication.