# Local MCP
The MCP (Model Context Protocol) server enables AI coding assistants to work with your requirements—searching, validating, composing, and tracking coverage without leaving your development environment.
## Setup

Run the interactive setup:

```bash
dotreq mcp-setup
```

This will:
- Prompt you to select your AI assistant
- Configure the MCP server for that assistant
- Add workflow guidance to your project’s context file
After setup, restart your AI assistant to load the new configuration.
## Supported Assistants
| Assistant | Configuration |
|---|---|
| Claude Code | Auto-configured via `claude mcp add-json` |
| Claude Desktop | Edits config at `~/Library/Application Support/Claude/` |
| Cursor | Edits `.cursor/mcp.json` in project |
| Google Antigravity | Edits `~/.gemini/antigravity/mcp_config.json` |
| OpenAI Codex | Auto-configured via `codex mcp add` |
| GitHub Copilot | Manual config needed; adds `AGENTS.md` guidance |
## Workflow Guidance
During setup, you can add workflow guidance to your project’s context file. This provides your AI assistant with always-on instructions for:
- The workflow — when to capture requirements, the 7-step process
- Requirements syntax — enough to write valid `.requirements.md` files
- Test usage — how to reference requirements with `requirement()`
- MCP tools — which tools to use for exploration, authoring, and testing
The guidance is added to your platform’s native context file:
| Platform | Context File |
|---|---|
| Claude Code | `CLAUDE.md` |
| Cursor, Codex, GitHub Copilot | `AGENTS.md` |
| Google Antigravity | `GEMINI.md` |
The section is wrapped in HTML comment markers (`<!-- dotrequirements:start/end -->`) for idempotent updates—running `mcp-setup` again will update the section without affecting the rest of your context file.
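A minimal sketch of how marker-based idempotent updates work, assuming the exact marker strings shown above (a hypothetical helper, not the actual implementation):

```typescript
// Hypothetical helper: replace the managed section if its markers already
// exist in the context file, otherwise append a fresh section at the end.
const START = "<!-- dotrequirements:start -->";
const END = "<!-- dotrequirements:end -->";

function upsertGuidance(contextFile: string, guidance: string): string {
  const section = `${START}\n${guidance}\n${END}`;
  const start = contextFile.indexOf(START);
  const end = contextFile.indexOf(END);
  if (start !== -1 && end !== -1 && end > start) {
    // Markers found: swap out only the managed section.
    return contextFile.slice(0, start) + section + contextFile.slice(end + END.length);
  }
  // No managed section yet: append one, leaving existing content untouched.
  return contextFile.trimEnd() + "\n\n" + section + "\n";
}
```

Because only the text between the markers is ever rewritten, running the update repeatedly leaves the rest of the context file byte-for-byte unchanged.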
## Available Tools
### Exploration
| Tool | Description |
|---|---|
| `list_all_requirements` | Get summary of all requirements in the project |
| `get_requirement` | Get a requirement tree with test coverage info |
| `search_requirements` | Search by text or regex across IDs, content, and labels |
| `get_requirements_by_test` | See which requirements a test file references |
| `get_tests_by_requirement` | Find tests that reference a specific requirement |
| `list_untested_requirements` | Find requirements without test coverage |
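A typical exploration pass might chain these tools. The call shapes below are illustrative, following the examples later on this page; the `id` parameter on `get_tests_by_requirement` is an assumption:

```
search_requirements({ query: "login" })           // find candidate requirements
get_requirement({ id: "AUTH-LOGIN-1" })           // inspect one, with coverage info
get_tests_by_requirement({ id: "AUTH-LOGIN-1" })  // which tests reference it
list_untested_requirements({})                    // anything still uncovered?
```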
### Authoring
| Tool | Description |
|---|---|
| `create_requirement_document` | Get a Markdown template with format examples |
| `validate_requirements` | Check file syntax offline (no network required) |
| `style_check` | AI-powered style feedback on requirements or tests |
| `push_requirements` | Push local changes to cloud (shows diff preview first) |
### Coverage
| Tool | Description |
|---|---|
| `get_requirement_coverage` | Get coverage data for a specific requirement |
| `get_project_coverage_summary` | Project-wide coverage stats with branch filtering |
### Test Review
| Tool | Description |
|---|---|
| `review_test` | Semantic validation that tests actually check what requirements specify |
Test review validates that:
- Setup matches GIVEN conditions
- Actions match WHEN triggers
- Assertions match THEN outcomes
This catches tests that reference requirements but don’t actually validate them—especially valuable when AI assistants write tests.
### Diagnostic
| Tool | Description |
|---|---|
| `debug_mcp_environment` | Debug MCP server configuration |
## Tool Details
### `search_requirements`
Search by text or regex pattern:
```
search_requirements({ query: "login" })
search_requirements({ query: "AUTH-.*", useRegex: true })
```

Searches requirement IDs, content, and labels.
### `get_requirement`
Get a requirement and all its children with coverage info:
```
get_requirement({ id: "AUTH-LOGIN-1" })
get_requirement({ id: "AUTH-LOGIN-1.0" })
```

Returns the full tree from the specified node, plus which tests reference each requirement.
### `validate_requirements`
Check file syntax without network access:
```
validate_requirements({ filePath: ".requirements/auth.requirements.md" })
```

Returns detailed error messages if validation fails. Use this before pushing to catch format issues early.
### `style_check`
AI-powered feedback on writing style and clarity:
```
style_check({ filePath: ".requirements/auth.requirements.md" })
style_check({ filePath: "src/auth.test.ts" })
```

Works on both requirements files (`*.requirements.md`) and test files (`*.test.*`, `*.spec.*`).
For requirements files, specific requirements can be checked instead of the entire file:
```
style_check({
  filePath: ".requirements/auth.requirements.md",
  requirementKeys: ["AUTH-LOGIN-1", "AUTH-LOGIN-2"]
})
```

When `requirementKeys` is provided, only those requirement blocks are extracted and checked. This is useful for focusing feedback on recently added or modified requirements.
### `push_requirements`
Two-step push with diff preview:
```
// First call: see what will change
push_requirements({})

// Second call: execute the push
push_requirements({ confirmed: true })
```

Can push all files or a specific file:

```
push_requirements({ filePath: ".requirements/auth.requirements.md", confirmed: true })
```

### `review_test`
Comprehensive test review checking both style and semantic correctness:
```
review_test({ testFilePath: "src/auth.test.ts" })
```

Loads referenced requirements and validates that the test implementation actually checks what the requirement specifies—not just that it references the requirement.
## Manual Configuration
If your AI assistant isn’t supported by `mcp-setup`, configure it manually:
### MCP Server Command
```bash
npx -y @popoverai/dotrequirements mcp
```

Or if globally installed:

```bash
dotreq mcp
```

### Example Configuration
Most assistants use a JSON configuration like:
```json
{
  "mcpServers": {
    "dotrequirements": {
      "command": "npx",
      "args": ["-y", "@popoverai/dotrequirements", "mcp"]
    }
  }
}
```

The exact location and format vary by assistant—consult your assistant’s MCP documentation.
## What Works Offline
These MCP tools work without cloud authentication:
- `list_all_requirements`
- `get_requirement`
- `search_requirements`
- `get_requirements_by_test`
- `get_tests_by_requirement`
- `list_untested_requirements`
- `create_requirement_document`
- `validate_requirements`
These tools require cloud authentication:
- `push_requirements`
- `style_check`
- `review_test`
- `get_requirement_coverage`
- `get_project_coverage_summary`