Spec-Driven Development
Spec-driven development (SDD) is a workflow where every iteration includes requirements, tests, and implementation—all three, connected. Requirements aren’t a phase you do once at the start of a project; they’re part of each change, however small.
The term has gained traction as AI coding assistants have made it possible to build software faster than ever—and easier than ever to lose track of what you’re building. SDD is one answer to that problem: add just enough structure to stay oriented, without reverting to heavyweight processes.
The Problem
Code tells you how the system works. Tests tell you that it works. But neither tells you what it should do—or why.
This matters more than it used to. With AI assistants accelerating development, teams ship faster than ever. But speed amplifies the cost of ambiguity. A misunderstood requirement becomes a feature that needs rework. An undocumented edge case becomes a bug that’s hard to diagnose. The system grows, and nobody can confidently answer: “Is this behavior intentional, or accidental?”
Test coverage metrics don’t help as much as they should. 90% coverage sounds reassuring, but coverage measures execution, not correctness. You can have perfect coverage and still be testing the wrong things.
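To make that concrete, here is a deliberately contrived sketch (the function, the bug, and the weak assertion are all invented for illustration) of a test that executes every line—so coverage tools report 100%—while asserting nothing useful:

```javascript
function applyDiscount(price, percent) {
  // Bug: should be price * (1 - percent / 100)
  return price * (1 - percent);
}

// This "test" runs every line of applyDiscount, so line coverage is perfect,
// but it only checks that the result is a number, not that it's correct.
const result = applyDiscount(100, 10);
console.log(typeof result === 'number'); // true — yet the result is -900
```

Coverage says this code is tested; a requirement ("a 10% discount on $100 yields $90") would have exposed both the bug and the useless assertion.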
The Shape of Good Iterations
Every iteration that changes system behavior should include:
- Requirements — What should happen, stated clearly enough to test
- Tests — Proof that it happens
- Implementation — The code that makes it happen
The key is that these three are connected. Tests reference requirements directly, so coverage is automatic and meaningful. When a test fails, you know which expected behavior is broken. When requirements change, you know which tests need updating.
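As a minimal sketch of that connection (the `requirement` helper, the requirement ID, and the cart function below are invented stand-ins—see the Requirements doc for the actual format):

```javascript
// Hypothetical stand-in for a requirement-reference helper; the real
// dotrequirements API may differ.
function requirement(id) {
  return `[${id}]`;
}

// Requirement REQ-CART-1 (invented for this example):
// "Adding an item to an empty cart yields a count of 1."
function addItem(cart, item) {
  return { items: [...cart.items, item] };
}

// The test name references the requirement directly, so a failure points
// straight at the expected behavior that broke.
const testName = `${requirement('REQ-CART-1')} empty cart gains one item`;
const cart = addItem({ items: [] }, 'apple');
console.log(testName, '->', cart.items.length === 1 ? 'PASS' : 'FAIL');
```

When REQ-CART-1 changes, a search for its ID finds exactly the tests that need updating.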
Requirements Come First
Start with requirements. Before you write code or tests, articulate what the system should do.
This doesn’t mean waterfall-style “requirements gathering” that takes weeks. It means taking a few minutes to write down expected behavior before you build. A lightweight spec, not a heavy process.
Why requirements first?
- Clarity before code — Writing down what should happen forces you to think it through. Ambiguities surface early, when they’re cheap to resolve.
- Shared understanding — Requirements are readable by anyone. Code and tests aren’t. Product managers, designers, and future-you can all understand what the system is supposed to do.
- AI alignment — When an AI assistant implements a feature, requirements tell it what “done” looks like. Without them, the AI is guessing.
After Requirements: Two Valid Paths
Once you have requirements, there are two ways to proceed:
Test-driven (TDD): Write tests first, then implementation. Tests fail initially, then pass as you build. This works well when the interface is clear and you want tight feedback loops.
Implement-first: Build the feature, then write tests. This works well when you’re exploring or prototyping—when you need to see the shape of the solution before you can test it precisely.
Both paths are valid. The important thing is that you end up with all three: requirements that state what should happen, tests that prove it happens, and implementation that makes it happen.
Build-Specify-Test Is Also Valid
Sometimes you need to explore before you can specify. You have an idea, but you’re not sure exactly what the system should do until you’ve built a prototype.
In this case, flip the order:
- Build — Prototype the feature
- Specify — Write requirements based on what you learned
- Test — Write tests referencing those requirements
You still end up with the same triad. The difference is when clarity emerges. For well-understood problems, specify first. For exploratory work, build first—but don’t skip the specification step. Capture what you learned so it’s testable going forward.
What About Technical Design?
Technical design is a different concern than behavioral requirements—but it’s still worth writing down, especially if you’re working with AI coding agents.
Before implementation, have your agent write down:
- Which functions, classes, and/or components it will add or modify
- How they interact: data contracts, interfaces, API boundaries
- What’s out of scope: the boundaries it won’t cross
This isn’t the same as specifying expected behavior. “Users can reset their password via email” is a behavioral requirement. “Use a queue for email delivery” is a design decision. Both are useful to capture, but they serve different purposes.
Where does technical design live?
The .requirements.md format is just Markdown at the end of the day. You can use these files to document technical decisions alongside behavioral requirements—headings, prose, diagrams, whatever helps.
Whether to use an actual dotrequirements block for technical constraints depends on whether they’re testable. A performance requirement you plan to exercise in your test suite (“API responses complete in under 200ms”) fits naturally in a requirement block. A description of your service architecture is better as plain Markdown—it’s documentation, not something you’ll assert against.
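As an illustrative sketch (the file name, headings, and requirement IDs below are invented, and plain Markdown stands in for the actual requirement-block syntax, which is described in Requirements), one `.requirements.md` file can hold both kinds of content:

```markdown
# Password Reset

## Design notes (plain Markdown — documentation, not asserted against)
Email delivery goes through a queue so the request handler never blocks
on the mail provider. Out of scope: SMS-based reset.

## Requirements
REQ-RESET-1: Users can reset their password via a link emailed to them.
REQ-RESET-2: The reset endpoint responds in under 200ms.
```

The design notes explain decisions; the two requirements are the testable claims your suite will reference.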
What This Looks Like in Practice
A typical iteration:
- Write requirements — Create or update a `.requirements.md` file with the expected behavior
- Implement — Build the feature (or write tests first if you prefer TDD)
- Write tests — Reference requirements with `requirement('REQ-ID')` in test descriptions
- Verify — Run tests, confirm coverage, ensure requirements are validated
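The verify step can be as simple as checking that every requirement ID appears in at least one test name. A minimal sketch—the `[REQ-…]` naming convention and the IDs here are invented for illustration, not dotrequirements' actual mechanism:

```javascript
// Minimal coverage check: every declared requirement ID should appear
// in at least one test name.
const requirementIds = ['REQ-CART-1', 'REQ-CART-2'];
const testNames = [
  '[REQ-CART-1] empty cart gains one item',
  '[REQ-CART-2] duplicate items increment quantity',
];

// Collect any requirement that no test name references.
const uncovered = requirementIds.filter(
  (id) => !testNames.some((name) => name.includes(id))
);

console.log(uncovered.length === 0 ? 'all requirements covered' : uncovered);
```

Unlike line coverage, this check is meaningful by construction: it measures whether stated behaviors are validated, not whether code happened to execute.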
For teams using AI assistants, the assistant can help at each step: drafting requirements, checking style, implementing code, writing tests, and reviewing that tests actually validate what requirements specify.
When to Skip Requirements
Pure refactoring—same behavior, different code—doesn’t need new requirements. If you’re restructuring code without changing what the system does, existing requirements and tests should still pass. That’s the point.
But if you’re changing behavior at all—fixing a bug, adding a feature, modifying an edge case—capture the expected behavior first. Even a one-line requirement is better than none.
Starting Mid-Project
The best time to start spec-driven development is now.
You don’t need to stop and write requirements for everything that already exists. Instead, adopt the workflow going forward: the next feature you build, the next bug you fix, the next behavior you change. A codebase with one testable requirement is one requirement better than a codebase with zero.
Over time, coverage grows naturally. The parts of the system that change most often end up with the most requirements, which is exactly where you need them.
Teams with large existing codebases often wish they had more requirements describing how things already work, and dotrequirements can help here. AI coding agents are actually pretty good at going back and writing requirements for existing code, and dotrequirements files give those requirements an easily maintained home. Every codebase is different, but “please write behavioral requirements for the code in this directory, and invoke those requirements where appropriate in the tests” is worth spiking.
Next Steps
- Learn the format — See Requirements for how to write `.requirements.md` files
- Explore before specifying — See Discovery for collaborative exploration
- Set up your project — See Getting Started to configure requirements tracking