AgileFlow

/story

Create a user story with acceptance criteria

Create a new user story with acceptance criteria, test stubs, and status tracking.

Quick Start

```
/agileflow:story EPIC=<EP-ID> STORY=<US-ID> TITLE=<text> OWNER=<id>
```

Parameters

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| EPIC | Yes | - | Parent epic ID (e.g., EP-0001) |
| STORY | Yes | - | Story identifier (e.g., US-0001) |
| TITLE | Yes | - | Story title describing the feature |
| OWNER | Yes | - | Owner name or agent ID (e.g., AG-UI, AG-API) |
| ESTIMATE | No | 0.5d | Time estimate (e.g., 0.5d, 1d, 2d) |
| AC | No | - | Acceptance criteria in Given/When/Then format |
| DEPENDENCIES | No | - | Comma-separated list of dependent story IDs (e.g., US-0001,US-0002) |
| TDD | No | Smart default | Enable TDD mode to generate test stubs from acceptance criteria |

Examples

Basic Story Creation

```
/agileflow:story EPIC=EP-0010 STORY=US-0001 TITLE="User registration form" OWNER=AG-UI
```

Creates a story file at docs/06-stories/EP-0010/US-0001-user-registration-form.md with default estimate (0.5d) and status (ready).

Story with Acceptance Criteria

```
/agileflow:story EPIC=EP-0010 STORY=US-0002 TITLE="Validate email address" OWNER=AG-API ESTIMATE=1d AC="Given a valid email, When form submitted, Then confirmation email sent"
```

Creates the story with specific acceptance criteria that guide development and testing.

Story with Dependencies

```
/agileflow:story EPIC=EP-0010 STORY=US-0003 TITLE="Password reset flow" OWNER=AG-API ESTIMATE=1d DEPENDENCIES=US-0001,US-0002
```

Creates the story and marks it as dependent on the earlier stories. These dependencies are visualized by the /deps command.

Story with TDD Mode

```
/agileflow:story EPIC=EP-0010 STORY=US-0004 TITLE="User login" OWNER=AG-API AC="Given user on login page, When valid credentials entered, Then dashboard shown"
```

For code owners like AG-API, TDD mode is automatically enabled. This generates a test file at __tests__/US-0004.test.ts with pending tests derived from the acceptance criteria.

TDD Mode

TDD (Test-Driven Development) mode generates framework-specific test stubs from your acceptance criteria. Tests are created before implementation, following the principle that tests should specify behavior, not just verify it.

Smart Defaults

TDD mode uses smart defaults based on the story owner:

| Owner Type | TDD Default | Rationale |
| --- | --- | --- |
| AG-API, AG-UI, AG-DATABASE | true | Code-focused, tests critical |
| AG-TESTING, AG-SECURITY, AG-PERFORMANCE | true | Quality-focused |
| AG-DOCUMENTATION, AG-RESEARCH, AG-PRODUCT | false | Non-code work |
| AG-DEVOPS, AG-CI | false | Infrastructure, config |

You can always override the default:

  • TDD=true - Force TDD mode on
  • TDD=false - Force TDD mode off

Generated Test Structure

For Jest/Vitest (TypeScript):

```typescript
describe('US-0004: User login', () => {
  // AC1: valid login shows dashboard
  describe('valid login shows dashboard', () => {
    it.skip('should dashboard shown', () => {
      // Given: user on login page
      // When: valid credentials entered
      // Then: dashboard shown
      expect(true).toBe(true); // TODO: Implement
    });
  });
});
```

TDD Workflow

  1. Create story with TDD - Tests are generated from acceptance criteria
  2. Clear context - Start a fresh Claude session
  3. Implement to pass tests - Tell agent: "Make all tests in __tests__/US-0004.test.ts pass"
  4. Agent implements blindly - Without knowing how tests were generated
  5. Tests validate behavior - Implementation genuinely satisfies requirements

This separation ensures tests truly specify behavior rather than just verifying what was written.
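To make step 3 concrete, here is a minimal sketch of the kind of implementation an agent might produce to satisfy the US-0004 acceptance criterion. The `login` function, its result shape, and the in-memory user store are all illustrative, not part of AgileFlow; a real implementation would check credentials against an actual user store.

```typescript
// Hypothetical implementation for US-0004 ("User login").
// All names here are illustrative only.
interface LoginResult {
  ok: boolean;
  redirect?: string; // set on success (Then: dashboard shown)
  error?: string;    // set on failure
}

// Stand-in for a real user store.
const users: Record<string, string> = { alice: "s3cret" };

function login(username: string, password: string): LoginResult {
  // Given: user on login page; When: valid credentials entered
  if (users[username] === password) {
    // Then: dashboard shown
    return { ok: true, redirect: "/dashboard" };
  }
  return { ok: false, error: "Invalid credentials" };
}
```

With something like this in place, the generated `it.skip` stubs can be un-skipped and pointed at `login`, and the tests validate behavior the agent never saw specified in code.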

Output

The command creates:

  1. Story file (docs/06-stories/EP-XXXX/US-XXXX-<slug>.md)

    • Story metadata (ID, epic, owner, estimate, status)
    • Description and acceptance criteria
    • Architecture context (populated during development)
    • Testing strategy
    • TDD badge (if TDD mode enabled)
  2. Test stub (docs/07-testing/test-cases/US-XXXX.md)

    • Test structure aligned with acceptance criteria
    • Placeholder for test implementation
    • Links to story for context
  3. TDD test file (if TDD mode enabled)

    • Jest/Vitest: __tests__/US-XXXX.test.ts
    • pytest: tests/test_US-XXXX.py
    • Go: <package>/US-XXXX_test.go
  4. Status tracking (docs/09-agents/status.json)

    • Story entry with owner, estimate, and status
    • Dependency links to other stories
    • Epic assignment
    • TDD mode flag and test file path (if TDD enabled)
  5. Agent communication (docs/09-agents/bus/log.jsonl)

    • Assignment message showing story created and assigned
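For orientation, a status.json entry for the TDD example above might look like the following. The exact schema is AgileFlow's own; the field names here are illustrative, based only on the items listed above (owner, estimate, status, dependencies, epic, TDD flag, test file path):

```json
{
  "US-0004": {
    "epic": "EP-0010",
    "owner": "AG-API",
    "estimate": "0.5d",
    "status": "ready",
    "dependencies": [],
    "tdd": true,
    "testFile": "__tests__/US-0004.test.ts"
  }
}
```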

Workflow

  1. Create the story - Run /story with epic, ID, title, and owner
  2. Add acceptance criteria - Use /story-validate to refine and test AC
  3. Validate completeness - Run /story-validate STORY=<US-ID> before development
  4. Develop the story - Owner implements according to acceptance criteria
  5. Run tests - Use /verify to run automated tests
  6. Request review - Update status to in-review when ready

Acceptance Criteria Format

Use the Given/When/Then format for clear, testable criteria:

```markdown
- **Given** a user on the registration page
  **When** they enter valid email and password
  **Then** an account is created and confirmation email sent

- **Given** a user on the registration page
  **When** they enter an invalid email and click submit
  **Then** the form shows an "Invalid email" error
```

Acceptance criteria should be:

  • Specific: Reference actual fields and values
  • Testable: Can be verified with automated or manual tests
  • Independent: Each criterion tests one thing
  • Complete: Cover happy path and major edge cases
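Because each criterion tests one thing, each maps naturally to one test against one small unit of behavior. As an illustrative sketch of the second criterion above (the `validateEmail` function and its result shape are hypothetical, not generated by AgileFlow):

```typescript
// Hypothetical validator showing how one Given/When/Then
// criterion maps to a single testable behavior.
// A deliberately minimal email check; real validation would be stricter.
function validateEmail(email: string): { valid: boolean; error?: string } {
  const ok = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
  // Then: form shows "Invalid email" error
  return ok ? { valid: true } : { valid: false, error: "Invalid email" };
}
```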

Story Estimates

Use these standard estimates:

| Estimate | When to Use |
| --- | --- |
| 0.5d | Simple CRUD, basic UI component, quick fix |
| 1d | Standard feature with validation and basic tests |
| 1.5d | Complex logic, multiple validations, or integration |
| 2d | Significant refactoring or major feature component |
| >2d | Break into smaller stories |

Integration with Other Commands

  • After story creation: Use /story-validate to ensure readiness
  • Before development: Use /babysit for interactive implementation help
  • During development: Use /verify to run tests
  • For code review: Use /review for AI-powered suggestions
  • For pull requests: Use /pr to auto-generate PR description

Best Practices

  • One story = one feature - Keep stories focused and independently deliverable
  • Write clear acceptance criteria - These guide development and testing
  • Use TDD for code stories - Let tests specify behavior before implementation
  • Clear context before implementing TDD tests - Start fresh so agent doesn't "know" tests
  • Estimate realistically - Consider complexity and team experience
  • Link dependencies explicitly - Helps with sprint planning and risk assessment
  • Validate before development - Use /story-validate to catch issues early

Related Commands

  • /epic - Create the parent epic
  • /story-validate - Validate story completeness before development
  • /sprint - Select stories for sprint planning
  • /auto - Auto-generate stories from PRDs or specs
  • /babysit - Get interactive help during implementation
  • /verify - Run tests and update test status
  • /tests - Set up test infrastructure for TDD workflow