Course Overview

AI coding tools generate code fast, but they generate whatever code fits your input, however vague. Most developers using Claude Code, GitHub Copilot, Cursor, or Junie discover this the hard way: the AI happily produces plausible-looking code that doesn’t actually solve the problem. The gap between “getting code” and “getting correct code” comes down to one skill most developers never learned: writing specifications that constrain the AI toward the right solution.

This training teaches systematic specification for AI-assisted development. You’ll learn the spec-driven stack (requirements → plan → tasks → tests), practice transforming vague ideas into testable acceptance criteria, and master three-stage test development to maintain control when AI wants to run ahead.
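As a minimal sketch of the three stages (the `slugify` function and its rules are hypothetical examples, not taken from the course materials), the progression from ideas to outlines to implementations might look like:

```python
import re

# Stage 1 — test ideas: plain-language notes, no code yet.
#   - lowercases input
#   - collapses runs of whitespace into a single hyphen
#   - strips leading/trailing hyphens

# Stage 2 — test outlines: named, empty skeletons you approve
# before the AI is allowed to fill them in.
def test_lowercases_input(): ...
def test_collapses_whitespace_to_hyphen(): ...
def test_strips_edge_hyphens(): ...

# Stage 3 — implementations: outlines become real assertions,
# and only then is the AI asked to write code that passes them.
def slugify(text: str) -> str:
    text = text.lower().strip()
    text = re.sub(r"\s+", "-", text)
    return text.strip("-")

def test_lowercases_input_impl():
    assert slugify("Hello") == "hello"

def test_collapses_whitespace_to_hyphen_impl():
    assert slugify("hello   world") == "hello-world"
```

Pausing at each stage keeps you in control: the AI cannot race ahead to an implementation before the test names and assertions have been reviewed.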

The approach is iterative, not waterfall. You’ll learn when to update specs and when to just code, how to use AI itself for decomposition and critique, and how to live with change without chaos. The iceberg model helps calibrate how much detail each layer needs: detail what’s immediate, sketch what’s next, leave the rest fuzzy.

By the end, you’ll write requirements that serve as reliable inputs to tight TDD workflows with AI—transforming inconsistent results into predictable, correct implementations.

Learning Objectives

  • Transform vague feature requests into structured, testable specifications
  • Apply the spec-driven stack: requirements.md → plan.md → tasks.md → test specifications
  • Write acceptance criteria that constrain AI toward correct solutions
  • Use three-stage test development (ideas → outlines → implementations) to maintain control
  • Leverage AI tools to decompose and critique requirements before implementation
  • Decide when to update specifications versus proceeding directly to code
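To illustrate the difference a precise criterion makes (this password rule is a hypothetical example, not from the course), “reject passwords shorter than 8 characters” constrains the AI far more than “validate passwords,” because it maps directly onto an executable check:

```python
def is_valid_password(password: str) -> bool:
    # Hypothetical acceptance criterion: reject anything
    # shorter than 8 characters.
    return len(password) >= 8

# Given a 7-character password,
# when it is validated,
# then validation fails.
assert is_valid_password("short12") is False
assert is_valid_password("longenough") is True
```

A criterion written in this given/when/then shape leaves the AI little room to guess at boundaries, which is exactly the property the objectives above aim for.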

Topics Covered

  1. Conceptual Foundations — Shared vocabulary (user stories, acceptance criteria, DoR, NFRs) and elements of good specifications
  2. The Problem: Garbage In, Garbage Out — Experiencing firsthand that vague requirements produce vague results regardless of prompting skill
  3. The Spec-Driven Stack — From requirements through plans, tasks, and test specifications; levels of precision (concept vs. structure vs. tasks)
  4. From Specification to Development — Connecting tasks.md to test ideas and implementation; the bridge to TDD workflows
  5. From Linear to Iterative — The iceberg model, feedback loops, spec drift, and knowing when “good enough” is a skill
  6. Integrating External Inputs — Customer feedback, brownfield constraints, NFRs, and tool-assisted decomposition

What You Get

  • Hands-on practice with realistic specification scenarios using your AI tool of choice
  • A fully decomposed specification (requirements.md → plan.md → tasks.md) for a realistic feature
  • AI-critiqued version of your specifications with identified gaps
  • Practice writing acceptance criteria that constrain AI behaviour
  • Exposure to multiple tooling approaches (manual, spec-kit, BMAD)
  • Personal decision framework: when to specify more vs. iterate faster