Coverage Tracking
Code coverage is not a new concept. The classic challenge with using it as a product quality metric is that “80% coverage” is a technical benchmark — but what most businesses actually care about is functionality. Unless your customers are software developers, they probably aren’t worrying about the lines of code exercised by your tests. They care if the features work.
Dot·requirements coverage tells you if your features are being tested.
Instead of measuring lines of code, it tracks which requirements have tests. When requirements live in one place and tests live in another, it’s easy to lose track. A requirement gets identified, someone says “we should test that,” and then… maybe they did, maybe they didn’t. Coverage tracking closes this gap by connecting requirements to the tests that validate them.
Why Coverage Matters
Coverage isn’t just a metric for developers. It’s a trust signal between people who define what the system should do and people who build it.
For product managers and stakeholders, coverage answers:
- Are the requirements I wrote actually being tested?
- Which requirements are at risk (untested or stale)?
- Do I need to follow up, or can I trust the implementation?
For developers, coverage answers:
- Did I miss any requirements in my tests?
- Which requirements need attention before this ships?
- What’s the current state of test coverage for this feature?
The goal isn’t 100% coverage for its own sake. It’s visibility — knowing what’s tested and what isn’t, so you can make informed decisions.
What Coverage Shows
Coverage tracking tells you:
- Which requirements have tests — a test invoked requirement('LOGIN-1') somewhere (see the sketch after this list)
- When requirements were last tested — today, last week, or not since October
- Where the tests live — which file and line number reference each requirement
- Branch context — coverage on main vs. feature/auth may differ
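As a sketch of that first point, assuming a Vitest-style test and a placeholder import path for the harness (your actual module name may differ), a requirement counts as having a test when some test references it:

```ts
import { it, expect } from 'vitest';
import { requirement } from 'dotrequirements'; // placeholder import path for the harness

it('logs in a registered user', async () => {
  requirement('LOGIN-1');            // links this test to requirement LOGIN-1
  const result = { loggedIn: true }; // stand-in for real application behavior
  expect(result.loggedIn).toBe(true);
});
```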
What coverage doesn’t tell you by itself is whether the tests are any good. But that’s where test review comes in.
Local Coverage
After your tests run, you see a coverage report in the console:
```
=== Requirements Coverage Report ===
Total Requirements: 12
Tested Requirements: 8
Untested Requirements: 4
Coverage: 66.7%
Tested Requirements:
- LOGIN-1 (requirementHeader): A registered user can log in...
- LOGIN-1.0 (when): Valid credentials authenticate...
Untested Requirements:
- CHECKOUT-1 (requirementHeader): A customer can complete...
====================================
```

This works out of the box once you’ve set up the test harness. Each requirement() call in your tests is tracked, and the report shows what’s covered and what’s missing.
Setup: Add prepare() to your test framework’s global setup and finalize() to global teardown. This is a one-time configuration — see Getting Started for Developers for examples.
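One possible wiring, sketched with Vitest's globalSetup hook and a placeholder import path (other frameworks follow the same pattern; see Getting Started for Developers for the exact configuration):

```ts
// requirements.globalSetup.ts, referenced from vitest.config.ts via
//   test: { globalSetup: ['./requirements.globalSetup.ts'] }
import { prepare, finalize } from 'dotrequirements'; // placeholder import path

export async function setup() {
  await prepare(); // start requirement tracking before any test file runs
}

export async function teardown() {
  await finalize(); // aggregate requirement() calls and print the coverage report
}
```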
Cloud Coverage
Local reports are useful, but cloud coverage gives you a persistent, team-visible record.
What cloud coverage adds:
- Historical tracking — see when requirements were tested, not just if
- Branch awareness — coverage on main vs. feature/auth tracked separately
- Team visibility — everyone sees the same coverage data in the web app
- Staleness detection — requirements tested more than 7 days ago are flagged
Web dashboard: The Coverage page in the web app shows all requirements organized hierarchically. Each requirement shows its test status (tested, untested, or stale), and you can expand to see which test file covers it and when it last ran.
How it works: After tests complete, the finalize() function automatically reports coverage to the cloud if you have credentials configured (DOTREQUIREMENTS_PROJECT_ID and DOTREQUIREMENTS_PROJECT_SECRET in your .env.local).
Closing the Loop: Test Review
Coverage tells you a test exists. But does the test actually validate what the requirement specifies?
It’s easy to game coverage metrics — call requirement('LOGIN-1') in a test that checks something unrelated, and the requirement shows as “tested.” We’d hope developers wouldn’t do this on purpose, but AI assistants sometimes do when trying to satisfy coverage goals. And even well-intentioned tests can drift from what the requirement actually says.
Test review closes this gap. It’s an AI-powered check that validates whether your tests actually test what your requirements specify:
- Does the test setup match the requirement’s preconditions (“arrange”)?
- Do the test actions match the requirement’s triggers (“act”)?
- Do the assertions verify the requirement’s outcomes (“assert”)?
If a test calls requirement('LOGIN-1') but it actually just asserts that true === true, test review will flag it.
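To make that concrete, here is a hedged sketch in Vitest style, with a placeholder harness import and made-up createRegisteredUser/login helpers standing in for application code. It contrasts a hollow test with one that mirrors the requirement's arrange/act/assert shape:

```ts
import { describe, it, expect } from 'vitest';
import { requirement } from 'dotrequirements'; // placeholder import path for the harness

// Hypothetical stand-ins for real application code, just to keep the sketch runnable.
async function createRegisteredUser() {
  return { email: 'ada@example.com', password: 'correct-horse' };
}
async function login(email: string, password: string) {
  return { email, authenticated: password === 'correct-horse' };
}

describe('LOGIN-1: a registered user can log in', () => {
  // Games coverage: the requirement is referenced, so it shows as "tested",
  // but the assertion has nothing to do with logging in. Review should flag this.
  it('hollow test', () => {
    requirement('LOGIN-1');
    expect(true).toBe(true);
  });

  // Mirrors the requirement: arrange the precondition, act on the trigger,
  // assert the outcome. Review should accept this.
  it('valid credentials authenticate the user', async () => {
    requirement('LOGIN-1');
    const user = await createRegisteredUser();              // arrange
    const session = await login(user.email, user.password); // act
    expect(session.authenticated).toBe(true);               // assert
  });
});
```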
Using test review: In your AI assistant, use the review_test MCP tool after writing tests. It reads your test file, loads the referenced requirements, and returns feedback on gaps or mismatches.
This feedback loop prevents a situation where requirements coverage metrics look good but don’t reflect reality. Coverage plus test review gives you confidence that requirements are actually being validated.
How It Works
The coverage system has three parts:
1. Tracking during tests
When your test calls requirement('LOGIN-1'), the harness records:
- Which requirement was referenced
- Which file and line made the call
- When it happened
This data is written to a local cache (.requirements/.cache/) as tests run.
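As a mental model only (not the tool's actual cache format, which may differ), each tracked call can be pictured as a record carrying those three pieces of information:

```ts
// Hypothetical shape of one tracked requirement() call, based on the fields
// listed above. The real files under .requirements/.cache/ may look different.
interface RequirementCallRecord {
  requirementId: string; // e.g. 'LOGIN-1'
  file: string;          // test file that made the requirement() call
  line: number;          // line number of the call within that file
  testedAt: string;      // ISO timestamp of when the call happened
}
```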
2. Local reporting
After tests complete, finalize() aggregates the tracking data and prints the coverage report. This happens automatically in your test framework’s teardown.
3. Cloud sync
If credentials are configured, finalize() sends coverage to the cloud. The sync is smart about deduplication — it only reports changes, not every test run.
```
Test run
  ↓
requirement() calls tracked
  ↓
finalize() runs
  ↓
Local report printed
  ↓
Cloud sync (if configured)
```

Next Steps
- Set up the test harness — Getting Started for Developers walks through configuration
- Use AI for test review — Getting Started for AI-First Builders covers MCP tools including review_test
- Understand what to test — Requirements explains requirement structure and referencing