/compress
Reduce status.json file size by removing verbose story fields while preserving all critical tracking metadata and keeping full story details in markdown files.
Quick Start
/agileflow:compress
Purpose
This command solves a common problem: when status.json grows large (especially on big projects with 100+ stories), it can exceed token limits and slow down agent processing. The compress command:
- Removes verbose fields (descriptions, AC, architecture context, etc.)
- Keeps only essential tracking metadata (story ID, status, owner, timestamps)
- Reduces file size by 80-90% typically
- Preserves full story content in markdown files
- Maintains complete project history
- Ensures agents can read status.json efficiently
Parameters
This command has no parameters - it operates directly on docs/09-agents/status.json.
Usage
Simply run:
/agileflow:compress
The command will:
- Validate status.json exists and is valid JSON
- Show before stats (size, lines, story count)
- Create automatic backup
- Remove verbose fields
- Write compressed version
- Show after stats and savings
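The six steps above can be sketched in Python. This is a minimal illustrative sketch, not the actual implementation (the real command uses jq, as shown under Implementation below), and it assumes status.json is a JSON array of story objects:

```python
import json
import shutil
from pathlib import Path

# Tracking fields retained in the compressed index (see "What Gets Kept")
KEEP = {"story_id", "epic", "title", "owner", "status", "estimate",
        "created", "updated", "completed_at", "dependencies", "branch",
        "summary", "last_update", "assigned_at"}

def compress_status(path="docs/09-agents/status.json"):
    p = Path(path)
    stories = json.loads(p.read_text())             # validates JSON, raises if broken
    shutil.copy(p, p.with_name(p.name + ".backup"))  # automatic backup
    before = p.stat().st_size
    # Strip every field not in the KEEP allowlist
    compressed = [{k: v for k, v in s.items() if k in KEEP} for s in stories]
    p.write_text(json.dumps(compressed, indent=2))
    after = p.stat().st_size
    print(f"Size: {before}B -> {after}B")
```

On a real project the savings come almost entirely from dropping the long free-text fields listed in the next section.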
What Gets Removed
These verbose fields are stripped from status.json:
- description # Story description and context
- acceptanceCriteria # AC bullet points
- architectureContext # Architecture details
- technicalNotes # Implementation hints
- testingStrategy # Test approach
- devAgentRecord # Implementation notes
- previousStoryInsights # Lessons from previous story
- (all other large text fields)
These fields are safely preserved in docs/06-stories/ markdown files instead.
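For illustration, a compressed story entry contains only the tracking fields; all values below are hypothetical:

```json
{
  "story_id": "US-042",
  "epic": "EP-007",
  "title": "Add login rate limiting",
  "owner": "agent-backend",
  "status": "in-progress",
  "estimate": "3h",
  "created": "2025-01-10T09:00:00Z",
  "updated": "2025-01-12T14:30:00Z",
  "completed_at": null,
  "dependencies": ["US-038"],
  "branch": "feature/US-042-rate-limiting",
  "summary": "Rate-limit login attempts",
  "last_update": "Implementation in progress",
  "assigned_at": "2025-01-10T10:00:00Z"
}
```

The verbose fields (description, acceptanceCriteria, and so on) no longer appear in the index; their full text remains in the story's markdown file under docs/06-stories/.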
What Gets Kept
Only essential tracking metadata remains in status.json:
- story_id # Story identifier
- epic # Parent epic
- title # Story title
- owner # Assigned agent
- status # Current status (ready/in-progress/blocked/done)
- estimate # Time estimate
- created # Creation timestamp
- updated # Last update timestamp
- completed_at # Completion timestamp
- dependencies # Dependent story IDs
- branch # Git branch name
- summary # Short summary
- last_update # Last modification message
- assigned_at # Assignment timestamp
Example Output
🗜️ AgileFlow Status Compression
Purpose: Strip verbose fields from status.json
Target: Keep only essential tracking metadata
💾 Backup created: docs/09-agents/status.json.backup
📊 Before Compression:
  Stories: 145
  Size: 384KB
  Lines: 12,847
✅ Compression complete!
📊 After Compression:
  Stories: 145 (unchanged)
  Size: 384KB → 42KB
  Lines: 12,847 → 1,203
  Saved: 89% (342KB)
✅ Estimated tokens: ~10,500 (safely under 25,000 limit)
📋 Status Summary:
  ready: 23 stories
  in-progress: 8 stories
  blocked: 2 stories
  done: 112 stories
💾 To restore original: cp docs/09-agents/status.json.backup docs/09-agents/status.json
When to Use
Use compression when:
- status.json exceeds 25,000 tokens
  - Agents fail with "file content exceeds maximum allowed tokens"
- File is slow to process
  - Too large for context windows
  - Causes timeout issues
- After major epic completion
  - Many completed stories with verbose records
  - Project has grown significantly
- Quarterly maintenance
  - Regular cleanup to keep tracking lean
Alternative: Combine with Archival
For even better results, combine compression with archival:
# Step 1: Archive completed stories older than 3 days
bash .agileflow/scripts/archive-completed-stories.sh 3
# Step 2: Compress remaining stories
/agileflow:compress
This two-step process achieves maximum reduction while maintaining history.
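The archive script itself is bash, but its core partitioning step might be sketched as follows. This is an illustration only, assuming each story object carries a `status` field and an ISO-8601 `completed_at` timestamp:

```python
from datetime import datetime, timedelta, timezone

def split_archivable(stories, days=3, now=None):
    """Partition stories into (keep, archive): a story is archivable
    when it is done and was completed more than `days` ago."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    keep, archive = [], []
    for s in stories:
        done = s.get("status") == "done" and s.get("completed_at")
        if done and datetime.fromisoformat(
                s["completed_at"].replace("Z", "+00:00")) < cutoff:
            archive.append(s)
        else:
            keep.append(s)
    return keep, archive
```

Anything in the `archive` partition would be appended to the monthly file under docs/09-agents/archive/ before compression runs on the remainder.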
Safety & Backups
Automatic Backup
Before compression, the command creates a backup:
docs/09-agents/status.json.backup
No Data Loss
- Full story content remains in docs/06-stories/ markdown files
- Story markdown files are NOT modified
- Only status.json index is compressed
- Can always restore from backup
Restore if Needed
cp docs/09-agents/status.json.backup docs/09-agents/status.json
The backup contains the complete original file.
Why This Architecture?
Separation of Concerns
status.json = lightweight tracking index
- "What stories exist?"
- "Who owns each story?"
- "What's the current status?"
- "When was it updated?"
docs/06-stories/ = full story content
- Complete descriptions
- Acceptance criteria
- Architecture decisions
- Implementation notes
- Test strategies
docs/09-agents/archive/ = historical data
- Completed stories older than threshold
- Monthly organization
- Full retrieval capability
Benefits
- Fast Indexing - Agents read status.json quickly
- Complete Content - Full details in story markdown
- Searchable - Can grep story files for content
- Versionable - All files are git-tracked
- Archivable - Old stories don't bloat tracking index
Token Estimation
The command estimates token usage:
Token estimation formula: tokens ≈ bytes / 4
Before: 384KB × 1024 bytes/KB ÷ 4 = 98,304 tokens
After: 42KB × 1024 bytes/KB ÷ 4 = 10,752 tokens
Saved: 87,552 tokens
Target: Keep status.json under 25,000 tokens (~100KB compressed).
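The same bytes/4 heuristic can be computed directly. A sketch (the real command may round differently):

```python
import os

TOKEN_LIMIT = 25_000  # target ceiling for status.json

def estimate_tokens(path):
    """Rough estimate: ~1 token per 4 bytes of file content."""
    return os.path.getsize(path) // 4

def needs_compression(path, limit=TOKEN_LIMIT):
    return estimate_tokens(path) > limit
```

A 384KB file estimates to 98,304 tokens (over the limit); a 42KB file to 10,752 (comfortably under).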
Workflow Integration
With Agent Bus
The status.json compression doesn't affect:
- Agent message bus (docs/09-agents/bus/log.jsonl)
- Agent communication and coordination
- Story status tracking
With CI/CD
Compression can be added to CI automation:
```yaml
name: Maintenance
on:
  schedule:
    - cron: '0 2 * * 0'  # Sunday 2am UTC
jobs:
  compress:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Archive and compress
        run: |
          bash .agileflow/scripts/archive-completed-stories.sh 3
          /agileflow:compress
      - name: Commit if changed
        run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add docs/09-agents/
          git diff --quiet && git diff --staged --quiet || \
            git commit -m "chore(maintenance): archive and compress status"
          git push
```
Common Scenarios
Scenario 1: Project Growing Large
Problem: status.json is 250KB with 150 stories
Solution: /agileflow:compress
Result: Reduces to 30KB, easily under token limits
Scenario 2: Many Completed Stories
Problem: 120 completed, 30 active stories; status.json bloated
Solution:
# Archive stories completed >7 days ago
bash .agileflow/scripts/archive-completed-stories.sh 7
# Then compress
/agileflow:compress
Result: Moves old stories to archive, compresses the remainder
Scenario 3: Regular Maintenance
Problem: Want to keep status.json lean automatically
Solution: Add to CI schedule (monthly compression)
Result: Always under token limits, fast agent processing
Troubleshooting
"status.json still too large after compression"
Try more aggressive archival:
# Archive stories >3 days old instead of the default 7
bash .agileflow/scripts/archive-completed-stories.sh 3
/agileflow:compress
"Lost critical information after compression"
Restore from backup:
cp docs/09-agents/status.json.backup docs/09-agents/status.json
Then check that the story markdown files in docs/06-stories/ contain the details.
"Can't find story details"
Story content is in markdown files:
find docs/06-stories/ -name "US-*.md" | xargs grep "your search term"
Best Practices
- Regular Compression - Run monthly or quarterly
- Archive Aggressively - Move old completed stories first
- Back Up Before - Compression creates backup automatically
- Verify After - Check token estimate in output
- Commit Changes - Add compressed status to git
- Document Policy - Record when/why compression runs
- Test Restoration - Verify backup works if needed
Related Commands
- /agileflow:status - View current story statuses
- /agileflow:validate - Check AgileFlow system health
- bash .agileflow/scripts/archive-completed-stories.sh - Move completed stories to archive
Technical Details
Files Involved
| File | Purpose |
|---|---|
| docs/09-agents/status.json | Lightweight tracking index (compressed) |
| docs/09-agents/status.json.backup | Automatic backup before compression |
| docs/06-stories/ | Full story markdown files (unchanged) |
| docs/09-agents/archive/YYYY-MM.json | Historical completed stories |
Implementation
The compression script uses jq for JSON processing:
```shell
# Extract only the essential fields for each story
jq '[.[] | {story_id, epic, title, owner, status, estimate, created, updated, completed_at, dependencies, branch, summary, last_update, assigned_at}]' \
  status.json > status-compressed.json
```
Token Calculation
Rough estimation used in output:
- 1 token ≈ 4 characters/bytes (approximate)
- 100KB file โ 25,000 tokens
- Target: < 100KB to stay well under limits