# /feedback

Collect feedback from agents and humans on stories, epics, and sprints for continuous process improvement.

## Quick Start

```
/agileflow:feedback SCOPE=story STORY=US-0042
```

## Parameters
| Parameter | Required | Default | Description |
|---|---|---|---|
| SCOPE | No | story | Feedback scope: story, epic, or sprint |
| STORY | Conditionally | - | Story ID (required if SCOPE=story) |
| EPIC | Conditionally | - | Epic ID (required if SCOPE=epic) |
| ANONYMOUS | No | no | Allow anonymous feedback (yes/no) |
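The conditional rules above (STORY required only when SCOPE=story, EPIC only when SCOPE=epic) can be captured in a small validator. This is an illustrative sketch, not the tool's actual code; the function name and return shape are assumptions.

```python
def validate_params(scope="story", story=None, epic=None, anonymous="no"):
    """Hypothetical validator mirroring the parameter table above."""
    if scope not in ("story", "epic", "sprint"):
        raise ValueError("SCOPE must be story, epic, or sprint")
    if scope == "story" and not story:
        raise ValueError("STORY is required when SCOPE=story")
    if scope == "epic" and not epic:
        raise ValueError("EPIC is required when SCOPE=epic")
    if anonymous not in ("yes", "no"):
        raise ValueError("ANONYMOUS must be yes or no")
    return {"scope": scope, "story": story, "epic": epic,
            "anonymous": anonymous == "yes"}
```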
## Examples

### Story Completion Feedback

```
/agileflow:feedback SCOPE=story STORY=US-0042
```

After marking a story done, collect:
- AC clarity rating (1-5)
- Dependency resolution rating (1-5)
- Estimate accuracy rating (1-5)
- Implementation smoothness rating (1-5)
- What went well
- What could be improved
### Epic Retrospective Feedback

```
/agileflow:feedback SCOPE=epic EPIC=EP-0010
```

After epic completion, gather:
- Success metrics against epic goals
- What went well during execution
- What didn't go well
- Surprises and learnings
- Actions for next epic
### Sprint Retrospective Feedback

```
/agileflow:feedback SCOPE=sprint
```

At sprint end, collect team feedback:
- Continue (keep doing)
- Stop (no longer useful)
- Start (new practices)
- Experiments to try
- Blockers removed this sprint
### Anonymous Feedback

```
/agileflow:feedback SCOPE=story STORY=US-0045 ANONYMOUS=yes
```

Collects feedback without attribution for sensitive topics.
## Output

### Feedback Files

Saves feedback to:

```
docs/08-project/feedback/<YYYYMMDD>-<story-or-epic-id>.md
```

### Example Story Feedback
```markdown
## Story Feedback: US-0042

**Completed by**: AG-API
**Date**: 2025-12-22

### Ratings (1-5)
- AC clarity: 5 (crystal clear)
- Dependencies resolved: 4 (one minor blocker)
- Estimate accuracy: 5 (spot on)
- Implementation smoothness: 4 (smooth)

### What Went Well
- Clear acceptance criteria with examples
- All tests passed on first run
- Good documentation

### What Could Be Improved
- Database schema migration took longer than expected

### Blockers Encountered
- None significant

### Learnings/Insights
- JSON schema validation saved hours of debugging
```

## Feedback Types
### 1. Story Completion Feedback

Ratings for:
- AC Clarity - Were acceptance criteria clear? (1-5)
- Dependencies - Were blockers resolved? (1-5)
- Estimate Accuracy - Was estimation accurate? (1-5)
- Implementation - How smooth was the process? (1-5)
- Testing - Were tests adequate? (1-5)
- Documentation - Was documentation sufficient? (1-5)
Plus open-ended:
- What went well? (2-3 bullets)
- What could be improved? (2-3 bullets)
- Blockers encountered?
- Learnings/insights?
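The ratings and open-ended prompts above can be modeled as a simple record. This is a sketch of one plausible shape; the field names are illustrative and not the tool's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StoryFeedback:
    """Hypothetical record mirroring the story completion feedback fields."""
    story_id: str
    ac_clarity: int          # 1-5
    dependencies: int        # 1-5
    estimate_accuracy: int   # 1-5
    implementation: int      # 1-5
    testing: int             # 1-5
    documentation: int       # 1-5
    went_well: List[str] = field(default_factory=list)
    improve: List[str] = field(default_factory=list)
    blockers: List[str] = field(default_factory=list)
    learnings: List[str] = field(default_factory=list)

    def average_rating(self) -> float:
        # Mean of the six 1-5 ratings
        ratings = [self.ac_clarity, self.dependencies, self.estimate_accuracy,
                   self.implementation, self.testing, self.documentation]
        return sum(ratings) / len(ratings)
```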
### 2. Agent Performance Feedback

Tracks effectiveness:
- Stories completed
- Stories blocked
- Average completion time
- Strengths observed
- Areas for improvement
- Recommendations
### 3. Epic Retrospective

After epic completion:
- Success metrics vs goals
- What went well?
- What didn't go well?
- Surprises and learnings
- Actions for next epic
### 4. Sprint Retrospective

At sprint end:
- Continue (keep doing)
- Stop (no longer useful)
- Start (new practices)
- Experiments to try
- Blockers removed
- Recurring issues
## Metrics Tracked

The feedback system tracks:
| Metric | Target | Purpose |
|---|---|---|
| Story clarity score | >4.0 | AC quality |
| Estimate accuracy | within 50% | Planning improvement |
| Blocker frequency | under 20% of stories | Dependency management |
| Test coverage average | >85% | Code quality |
| Completion velocity | Trending up | Team throughput |
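The threshold-style targets in the table can be checked mechanically (the trend-based velocity target needs historical data and is left out). Metric names and the checker itself are assumptions, shown only to make the min/max semantics concrete.

```python
# Illustrative thresholds from the metrics table; names are assumptions.
TARGETS = {
    "story_clarity":      ("min", 4.0),   # average clarity above 4.0
    "estimate_error_pct": ("max", 50.0),  # estimate deviation within 50%
    "blocker_rate_pct":   ("max", 20.0),  # under 20% of stories blocked
    "test_coverage_pct":  ("min", 85.0),  # average coverage above 85%
}

def check_targets(values: dict) -> dict:
    """Return {metric: True/False} for each metric present in values."""
    results = {}
    for name, value in values.items():
        kind, threshold = TARGETS[name]
        results[name] = value > threshold if kind == "min" else value < threshold
    return results
```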
## Analysis & Insights

### Pattern Detection

Auto-detects patterns:
- Unclear AC - Stories with clarity scores under 3 → Improve template
- Poor estimates - Large variance → Revise estimation guide
- Frequent blockers - Stories often blocked → Improve dependency tracking
- Low test coverage - Tests inadequate → Enforce standards earlier
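Two of these rules (clarity scores under 3, frequent blockers) can be sketched over a batch of feedback records. The input shape and function name are illustrative assumptions, not the tool's implementation.

```python
def detect_patterns(feedback, clarity_floor=3, blocker_rate=0.20):
    """feedback: list of dicts with 'id', 'clarity' (1-5), 'blocked' (bool).
    Shape and thresholds are inferred from the rules above, not the tool's code."""
    findings = []
    # Unclear AC: any story rated below the clarity floor
    unclear = [f["id"] for f in feedback if f["clarity"] < clarity_floor]
    if unclear:
        findings.append(("unclear_ac", unclear))
    # Frequent blockers: share of blocked stories exceeds the rate threshold
    if feedback:
        blocked = sum(1 for f in feedback if f["blocked"]) / len(feedback)
        if blocked > blocker_rate:
            findings.append(("frequent_blockers", round(blocked, 2)))
    return findings
```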
### Actionable Improvements

Auto-generates:
- Improvement stories for recurring issues
- Process recommendations for detected problems
- Recognition for wins and high performers
- ADRs for architectural learnings
### Retrospective Reports

Generates comprehensive summaries:
- Overall sentiment (improving/declining/stable)
- Top wins with celebration
- Top challenges
- Recommended actions
- Team insights and learnings
## Workflow

1. Auto-prompt at trigger points:
   - Story status changes to "done"
   - Epic reaches 100% completion
   - Sprint end date reached
2. Present feedback form with pre-filled context
3. Ask: "Provide feedback now? (YES/NO/LATER)"
4. If YES: collect ratings and comments interactively
5. Save to feedback directory
6. Analyze patterns across all feedback
7. Suggest improvement stories for issues
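The save step uses the `<YYYYMMDD>-<story-or-epic-id>.md` naming convention from the Output section. A minimal sketch of that convention, assuming the helper name and signature (the tool's own code may differ):

```python
from datetime import date

def feedback_path(item_id, day=None, base="docs/08-project/feedback"):
    """Build the feedback file path: <base>/<YYYYMMDD>-<story-or-epic-id>.md.
    Defaults to today's date when no date is given."""
    d = day or date.today()
    return f"{base}/{d:%Y%m%d}-{item_id}.md"
```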
## Use When
- Story completion - Immediately after marking done
- Epic retrospective - After epic completion
- Sprint end - Gather team learnings
- Process improvement - Monthly review of feedback patterns
- Performance review - Quarterly agent evaluations
- Team retrospectives - Formal sprint ceremonies
## Auto-Triggers

Feedback prompts appear:
- When story status changes to "completed"
- When epic reaches 100% completion
- At configured sprint end dates
- Can be manually requested any time
## Integration

Feedback flows into:
- Retrospectives - Insights included in sprint retros
- Metrics - Ratings tracked over time
- Story creation - Auto-generate improvement stories
- ADRs - Document learnings as architectural decisions
- Agent profiles - Performance data updates rosters
## Related Commands