Course Overview

You know how to use an AI coding tool. You know TDD. You know specification-driven development. But when you sit down on Monday morning, these three skills don’t magically compose into a workflow. Under time pressure, you revert to prompting and hoping. You over-specify trivial changes and under-specify complex ones. Your specs drift from reality and nobody notices.

This two-day workshop closes the integration gap. Through extended hands-on exercises – deliberately messy, non-linear, and realistic – you practise combining AI tool fluency, test-driven development, and specification-driven development into a coherent way of working. Not clean-room demos: requirements that change mid-flight, dependencies that behave unexpectedly, specification gaps that AI fills silently.

The workshop is built on hard-won experience from over a hundred iterations of a real specification-driven AI workflow. Every failure mode we encountered – specification gaps, horizontal decomposition traps, tested-but-unwired code, unnoticed drift – becomes a teaching moment you experience firsthand and learn to navigate.

You leave with more than knowledge: a customised tool environment (CLAUDE.md, skills, hooks) you built during the exercises, and the calibration to know when to write a full specification, when to write just tests, and when to just code.

Learning Objectives

  • Navigate non-linear development projects using the integrated TDD+SDD+AI cycle, including mid-flight changes and dead ends
  • Detect when AI has silently filled a specification gap and surface the decision for human judgment
  • Calibrate the level of specification needed for any given change (from “just code it” to “full spec stack”)
  • Apply feedback loops between specification layers: updating specs when tests reveal problems, revising plans when implementation surprises arise
  • Verify that AI-generated code is actually wired into production paths, not just tested in isolation (a sketch of this failure mode follows this list)
  • Build reusable tool configurations that encode your workflow into your development environment
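
As a taste of the "wired into production paths" objective above, here is a hypothetical sketch (all names invented for illustration) of the failure mode it targets: a unit test that passes while production never calls the new code.

```python
import re

# Hypothetical example (invented names): a fully tested helper...
def sanitise_comment(text: str) -> str:
    """Strip HTML tags from a user comment."""
    return re.sub(r"<[^>]+>", "", text)

def test_sanitise_comment():
    assert sanitise_comment("<b>hi</b>") == "hi"

# ...that the production handler never actually calls. The suite is
# green, but the sanitisation never ships.
def handle_comment(store: list, text: str) -> None:
    store.append(text)  # sanitise_comment() is missing here
```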

Topics Covered

  1. Scale Calibration - When to write a full specification, when to write just tests, when to just code. Judgment, not formula.
  2. Vertical Slices Over Horizontal Layers - Why “write all specs, then all tests, then implement” fails, and how AI tools actively reinforce this trap
  3. The Specification Gap Problem - How AI silently fills gaps with plausible defaults, and systematic detection techniques (illustrated in the first sketch after this list)
  4. Feedback Loops and Drift Detection - Navigating the bidirectional flow between specifications, tests, and implementation. Recognising when specs and reality diverge – including when AI initiates the drift.
  5. The Brownfield Reality - Working with existing code, partial specs, and undocumented behaviour. Reverse specification to establish baselines.
  6. Review Patterns and the Trust Gradient - What to trust, verify, or test exhaustively in AI-generated code, calibrated by consequence of failure
  7. Context Persistence - Why specifications and tests become more important with AI, not less: they’re your persistent interface to a stateless tool
  8. Tool Customisation as Workflow Encoding - Building CLAUDE.md configurations, skills, and hooks at the point of need, not speculatively (see the configuration sketch after this list)
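
For a flavour of topic 3, here is a hypothetical Python sketch of a silently filled specification gap – every name in it is invented for illustration, not taken from the workshop materials:

```python
# Hypothetical illustration (invented names): the spec says only
# "return the user's display name" and is silent about users
# without a name set.
def display_name(user: dict) -> str:
    if user.get("name"):
        return user["name"]
    # A plausible AI gap-fill: fall back to the email prefix.
    # Nobody specified this -- it is exactly the kind of silent
    # decision the workshop teaches you to detect and surface.
    return user["email"].split("@")[0]

print(display_name({"email": "ada@example.org"}))  # -> "ada"
```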
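And for topic 8, a minimal sketch of what workflow encoding can look like in a CLAUDE.md file – the rules and wording here are invented examples, not a prescribed configuration:

```markdown
<!-- Hypothetical CLAUDE.md excerpt; every rule below is an invented example. -->
# Working agreements
- Every behaviour change starts from a failing test; show it failing first.
- If the specification is silent on a decision, ask – never pick a default silently.
- After implementing, show the call path from a production entry point to the new code.
```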

What You Get

  • Intensive hands-on practice with extended exercises designed to surface real integration challenges
  • A customised tool environment (CLAUDE.md, skills, hooks) built during the workshop, ready for your own projects
  • Practical techniques for detecting specification gaps in AI-generated code
  • A personal calibration framework for deciding the right level of specification
  • Certificate of completion