diff --git a/AMBIENT_JSON_SCHEMA.md b/AMBIENT_JSON_SCHEMA.md index b4f3538..32b5b2b 100644 --- a/AMBIENT_JSON_SCHEMA.md +++ b/AMBIENT_JSON_SCHEMA.md @@ -80,63 +80,19 @@ interface AmbientConfig { **Should Include**: 1. **Role definition**: "You are a [role]..." -2. **Available slash commands**: `/command` with descriptions -3. **Workflow phases**: Step-by-step methodology -4. **Output locations**: Where to write artifacts (e.g., `artifacts/specsmith/`) -5. **Agent orchestration**: Which specialized agents to invoke and when -6. **API integrations**: Instructions for Jira/GitHub/etc. -7. **Best practices**: Conventions and quality standards -8. **Error handling**: How to handle failures +2. **Workspace navigation**: Standard file locations and tool selection rules +3. **Workflow entry point**: Point to the skill(s) that contain the methodology (e.g., "Read and execute `.claude/skills/my-skill/SKILL.md`") +4. **Output locations**: Where to write artifacts (e.g., `artifacts/my-workflow/`) +5. **Error handling**: How to handle failures -**Example Structure**: +**Note**: Keep the systemPrompt focused on role, navigation, and entry points. Move detailed methodology into `.claude/skills/` files. This keeps the ambient.json readable and makes the methodology easier to maintain. The agent already knows how to use tools, read files, and follow instructions — the systemPrompt just needs to tell it *what* to do and *where* things are. -```json -"systemPrompt": "You are Specsmith, a spec-driven development assistant. - -## Available Commands - -- `/spec.interview` - Start interactive feature interview -- `/spec.speedrun` - Quick planning mode -- `/validate` - Validate implementation plan - -## Workflow Phases - -### Phase 1: Interview -Conduct structured interview with user... - -### Phase 2: Planning -Generate implementation plan... 
- -## Specialized Agents - -Invoke these agents as needed: -- **Quinn (Architect)**: System design and architecture -- **Maya (Engineer)**: Implementation details -- **Alex (QA)**: Testing strategy +**Example**: -## Output Structure - -All artifacts go in `artifacts/specsmith/`: -- `interview-notes.md` - Interview Q&A -- `PLAN.md` - Implementation plan -- `validation-report.md` - Validation results - -## Best Practices - -1. Always validate user requirements -2. Consider edge cases early -3. Generate testable acceptance criteria -..." +```json +"systemPrompt": "You are a Sprint Health Analyst.\n\nStandard file locations:\n- Skill: .claude/skills/sprint-report/SKILL.md\n- Template: templates/report.html\n- Outputs: artifacts/sprint-report/\n\nOnce the user provides context, read and execute the sprint-report skill." ``` -**Real-World Example** (from Specsmith): - -- 5 workflow phases defined -- 5 specialized agent personas (Quinn, Maya, Alex, Casey, Dana) -- Multiple slash commands -- Detailed artifact structure -- ~3000+ characters - --- ### `startupPrompt` (string, required) @@ -146,18 +102,18 @@ All artifacts go in `artifacts/specsmith/`: **Guidelines**: - Write as an instruction to the agent (e.g., "Greet the user and introduce yourself as...") -- Tell the agent what information to include in its greeting (available commands, purpose, etc.) +- Tell the agent what information to include in its greeting - Keep it concise -- 1-3 sentences directing the agent's behavior - Do NOT write it as a greeting the user would see directly **Examples**: ```json -"startupPrompt": "Greet the user and introduce yourself as a spec-driven development assistant. Mention the available commands (/spec.interview, /spec.speedrun, /validate) and suggest starting with /spec.interview." +"startupPrompt": "Greet the user as a Sprint Health Analyst. Ask for their data source, team name, sprint details, audience, and preferred output format." 
-"startupPrompt": "Introduce yourself as a bug fix assistant. Briefly explain that you help triage, analyze, and fix bugs systematically. Mention the /fix command and ask the user to describe their bug." +"startupPrompt": "Introduce yourself as a bug fix assistant. Ask the user to describe the bug or provide an issue URL." -"startupPrompt": "Greet the user and explain that you help collect user feedback through structured interviews. Mention Jira and GitHub integration and suggest using /interview to start." +"startupPrompt": "Greet the user and explain that you help collect user feedback through structured interviews. Ask what product area they want to cover." ``` --- @@ -220,29 +176,11 @@ workflow-repository/ └── [other workflow files] ``` -### Loading Code - -The platform loads ambient.json at startup: - -**File**: `platform/components/runners/ambient-runner/ambient_runner/platform/config.py` - -```python -def load_ambient_config(cwd_path: str) -> dict: - """Load ambient.json configuration from workflow directory.""" - config_path = Path(cwd_path) / ".ambient" / "ambient.json" - if not config_path.exists(): - return {} - with open(config_path, 'r') as f: - config = json.load(f) - logger.info(f"Loaded ambient.json: name={config.get('name')}") - return config -``` - -### Usage +### How the Platform Uses ambient.json -1. **System prompt injection** (`prompts.py`): `systemPrompt` is appended to the workspace context prompt -2. **Startup directive** (`app.py`): `startupPrompt` sent to agent as hidden user message at session start -3. **Workflow metadata API** (`content.py`): `name`, `description`, and other fields returned via `/content/workflow-metadata` endpoint +1. **System prompt injection**: `systemPrompt` is appended to the workspace context prompt +2. **Startup directive**: `startupPrompt` is sent to the agent as a hidden user message at session start +3. 
**Workflow metadata API**: `name`, `description`, and other fields are returned via the `/content/workflow-metadata` endpoint --- @@ -285,31 +223,13 @@ def load_ambient_config(cwd_path: str) -> dict: ```json { - "name": "Feature Planning Workflow", - "description": "Plan features through structured interviews and generate implementation specs", - "systemPrompt": "You are a feature planning assistant.\n\n## Commands\n- /interview - Start interview\n- /plan - Generate plan\n\n## Output\nWrite all artifacts to artifacts/planning/", - "startupPrompt": "Greet the user and introduce yourself as a feature planning assistant. Mention the /interview and /plan commands and suggest starting with /interview.", - "results": { - "Interview Notes": "artifacts/planning/interview.md", - "Implementation Plan": "artifacts/planning/plan.md" - } -} -``` - -### Comprehensive Example (Specsmith-style) - -```json -{ - "name": "Specsmith Workflow", - "description": "Transform feature ideas into implementation-ready plans through structured interviews with multi-agent collaboration", - "systemPrompt": "You are Specsmith, a spec-driven development assistant...\n\n[Extensive system prompt with phases, agents, commands, output structure]\n\n## Phase 1: Interview\n...\n\n## Specialized Agents\n- Quinn (Architect)\n- Maya (Engineer)\n- Alex (QA)\n...", - "startupPrompt": "Greet the user as Specsmith. Explain that you transform feature ideas into implementation-ready plans. List the commands: /spec.interview, /spec.speedrun, /validate. 
Suggest starting with /spec.interview.", + "name": "Sprint Health Report", + "description": "Generates sprint health reports from Jira data with risk ratings, anti-pattern detection, and coaching recommendations.", + "systemPrompt": "You are a Sprint Health Analyst...\n\nWORKSPACE NAVIGATION:\n- Skill: .claude/skills/sprint-report/SKILL.md\n- Template: templates/report.html\n- Outputs: artifacts/sprint-report/\n\nWORKFLOW:\nOnce the user answers the startup questions, read and execute the sprint-report skill.", + "startupPrompt": "Greet the user as a Sprint Health Analyst. Ask the intake questions: data source, team/sprint name, audience, output format, and whether they have historical data for comparison. List the default assumptions and ask the user to confirm or correct them.", "results": { - "Interview Notes": "artifacts/specsmith/interview-notes.md", - "Implementation Plan": "artifacts/specsmith/PLAN.md", - "Validation Report": "artifacts/specsmith/validation-report.md", - "Speedrun Summary": "artifacts/specsmith/speedrun-summary.md", - "All Artifacts": "artifacts/specsmith/**/*" + "Health Reports (Markdown)": "artifacts/sprint-report/**/*.md", + "Health Reports (HTML)": "artifacts/sprint-report/**/*.html" } } ``` @@ -321,13 +241,12 @@ def load_ambient_config(cwd_path: str) -> dict: ### System Prompt Design 1. **Be specific about role**: Define exact persona and expertise -2. **Document all commands**: List every `/command` with purpose -3. **Define workflow phases**: Clear step-by-step methodology -4. **Specify output locations**: Absolute paths for artifacts -5. **Include agent orchestration**: When to invoke specialized agents -6. **Add error handling**: How to recover from failures -7. **Use markdown formatting**: Headers, lists, code blocks for readability -8. **Add workspace navigation guidance**: Help Claude find files efficiently (see [WORKSPACE_NAVIGATION_GUIDELINES.md](WORKSPACE_NAVIGATION_GUIDELINES.md)) +2. 
**Add workspace navigation guidance**: Standard file locations and tool selection rules (see [WORKSPACE_NAVIGATION_GUIDELINES.md](WORKSPACE_NAVIGATION_GUIDELINES.md)) +3. **Point to skills**: For complex workflows, reference the skill file(s) that contain the methodology +4. **Specify output locations**: Where artifacts are written (e.g., `artifacts/my-workflow/`) +5. **Add error handling**: How to recover from failures +6. **Use markdown formatting**: Headers, lists, code blocks for readability +7. **Keep it focused**: Delegate detailed methodology to skills rather than cramming everything into the systemPrompt ### Startup Prompt Design @@ -348,14 +267,24 @@ def load_ambient_config(cwd_path: str) -> dict: workflow-repo/ ├── .ambient/ │ └── ambient.json ← Configuration here -├── artifacts/ ← Output location (in systemPrompt) -│ └── workflow-name/ -│ ├── interview.md -│ └── plan.md +├── .claude/ +│ └── skills/ ← Skill definitions (preferred) +│ └── my-skill/ +│ └── SKILL.md +├── templates/ ← Optional templates ├── README.md └── scripts/ ← Optional helper scripts ``` +At runtime, artifacts are written relative to the workspace root, not inside +the workflow directory: + +```text +/workspace/sessions/{session}/ +├── workflows/my-workflow/ ← Workflow files loaded here +└── artifacts/my-workflow/ ← Output goes here (sibling, not nested) +``` + --- ## Common Mistakes @@ -383,7 +312,7 @@ workflow-repo/ ```json { "systemPrompt": "You help with development" - // Too generic - needs phases, commands, outputs + // Too generic - needs role, file locations, entry point } ``` @@ -393,8 +322,8 @@ workflow-repo/ { "name": "My Workflow", "description": "Detailed description of purpose", - "systemPrompt": "You are [role].\n\n## Commands\n- /cmd\n\n## Phases\n1. 
Step one\n\n## Output\nartifacts/my-workflow/", - "startupPrompt": "Greet the user, briefly describe your purpose, and suggest using /cmd to start.", + "systemPrompt": "You are [role].\n\nFile locations:\n- Skill: .claude/skills/my-skill/SKILL.md\n- Outputs: artifacts/my-workflow/\n\nRead and execute the skill when the user provides context.", + "startupPrompt": "Greet the user, briefly describe your purpose, and ask what they need help with.", "results": { "Output": "artifacts/my-workflow/**/*.md" } @@ -412,24 +341,23 @@ workflow-repo/ - Startup prompt execution: `platform/components/runners/ambient-runner/ambient_runner/app.py` - Workflow metadata API: `platform/components/runners/ambient-runner/ambient_runner/endpoints/content.py` -**Example Workflows**: +**Example Workflows** (in this repository): -- `/Users/jeder/repos/workflows/workflows/specsmith-workflow/.ambient/ambient.json` -- `/Users/jeder/repos/workflows/workflows/amber-interview/.ambient/ambient.json` -- `/Users/jeder/repos/workflows/workflows/template-workflow/.ambient/ambient.json` -- `/Users/jeder/repos/workflows/workflows/bugfix/.ambient/ambient.json` -- `/Users/jeder/repos/workflows/workflows/triage/.ambient/ambient.json` +- `workflows/bugfix/.ambient/ambient.json` — skill-based, multi-phase workflow +- `workflows/sprint-report/.ambient/ambient.json` — skill-based, 1–2 turn workflow +- `workflows/triage/.ambient/ambient.json` — command-based triage workflow +- `workflows/template-workflow/.ambient/ambient.json` — minimal starter template **Documentation**: -- Field reference: `workflows/template-workflow/FIELD_REFERENCE.md` -- Platform docs: `github.com/ambient-code/platform` +- Workflow development guide: [WORKFLOW_DEVELOPMENT_GUIDE.md](WORKFLOW_DEVELOPMENT_GUIDE.md) +- Agent guidelines: [AGENTS.md](AGENTS.md) --- ## Summary -The `ambient.json` schema has 4 required fields and 1 optional field, keeping the format lightweight and portable. 
The `systemPrompt` field is where workflows become powerful -- a well-crafted systemPrompt can define complex multi-phase workflows with specialized agents, API integrations, and sophisticated output structures. +The `ambient.json` schema has 4 required fields and 1 optional field, keeping the format lightweight and portable. For simple workflows, the `systemPrompt` can contain the full methodology inline. For complex workflows, keep the `systemPrompt` focused on role and navigation, and move detailed methodology into `.claude/skills/` files that the agent reads on demand. **Minimum viable ambient.json**: 4 required string fields (`name`, `description`, `systemPrompt`, `startupPrompt`) **Optional**: `results` for documenting artifact locations (informational only) diff --git a/workflows/sprint-report/.ambient/ambient.json b/workflows/sprint-report/.ambient/ambient.json new file mode 100644 index 0000000..7fdb689 --- /dev/null +++ b/workflows/sprint-report/.ambient/ambient.json @@ -0,0 +1,10 @@ +{ + "name": "Sprint Health Report", + "description": "Generates comprehensive sprint health reports from Jira data. Analyzes delivery metrics, detects anti-patterns, and produces actionable coaching recommendations in a 1-2 turn experience.", + "systemPrompt": "You are a Sprint Health Analyst.\n\nFile locations:\n- Skill: .claude/skills/sprint-report/SKILL.md\n- Template: templates/report.html\n- Outputs: artifacts/sprint-report/\n\nRead and execute the sprint-report skill. The skill begins with Step 0 (Discovery & Proposal) which handles all user interaction before analysis begins.", + "startupPrompt": "Greet the user as a Sprint Health Analyst. Briefly explain that you generate comprehensive health reports from sprint data (risk ratings, anti-patterns, coaching recommendations, KPI dashboards). Then read the sprint-report skill and begin Step 0: use whatever the user provided (project, component, team name, etc.) 
to discover the sprint automatically via Jira, and propose a plan for the user to approve before proceeding.", + "results": { + "Health Reports (Markdown)": "artifacts/sprint-report/**/*.md", + "Health Reports (HTML)": "artifacts/sprint-report/**/*.html" + } +} diff --git a/workflows/sprint-report/.claude/skills/sprint-report/SKILL.md b/workflows/sprint-report/.claude/skills/sprint-report/SKILL.md new file mode 100644 index 0000000..7de474c --- /dev/null +++ b/workflows/sprint-report/.claude/skills/sprint-report/SKILL.md @@ -0,0 +1,502 @@ +--- +name: sprint-report +description: Generate a comprehensive sprint health report from Jira data (CSV or MCP). Analyzes 8 metric dimensions, detects anti-patterns, and produces styled HTML/Markdown reports. +--- + +# Sprint Health Report + +You are generating a sprint health report. Follow the full pipeline below. + +**Reference files** (read these in Step 1a before querying Jira): + +- `.claude/skills/sprint-report/references/jira-fields.md` — custom field IDs and discovery +- `.claude/skills/sprint-report/references/jira-query-patterns.md` — JQL patterns and data volume guidance + +## Step 0: Discovery & Proposal + +Before asking the user any questions, discover what you can automatically. + +### 0a. Interpret the User Request + +The user may provide any combination of: project key, component name, board +name, team name, sprint name/number, or just a vague reference like "my +sprint." Extract whatever identifiers are present. + +**Search directly for what the user gave you.** If they said "AgentOps", +call `jira_get_agile_boards(board_name="AgentOps")`. Do NOT call +`jira_get_all_projects` or enumerate projects — go straight to the +identifier the user provided. + +### 0b. Discover the Sprint (Jira MCP) + +If Jira MCP is available: + +1. 
**Find the board:** + - `jira_get_agile_boards(project_key=X)` or + `jira_get_agile_boards(board_name=X)` + - If the user provided a component name, search boards by the project + containing that component +2. **Find the active sprint:** + - `jira_get_sprints_from_board(board_id=BOARD_ID, state="active")` + - If multiple active sprints: include all in proposal and ask which one + - If none: offer the most recently closed sprint +3. **Get a rough item count** (for the proposal only): + - `jira_get_sprint_issues(sprint_id=SPRINT_ID, fields="summary,status,assignee,components,customfield_10001")` + - Count total items and unique assignees from the response + - Do NOT fetch story points, description, or other heavy fields yet + +This is a lightweight preview. Full field discovery and data ingestion happen +in Step 1 after the user approves the plan. + +### 0c. Detect Mixed-Team Sprints + +Before proposing the plan, check if the sprint contains work from multiple +teams. Shared boards in large organizations often include release-wide items. + +**Indicators of a mixed-team sprint:** + +- Items span >2 Jira projects +- >15 unique assignees +- Multiple distinct values in Team field (`customfield_10001`) +- High component diversity (>5 components) + +**If detected, include filtering options in the proposal:** + +``` +I found [Sprint Name] with [N] items, but they span multiple teams/projects. + +Filtering options: +1. Team = "[Team Name]" → [X] items +2. Component contains "[Component]" → [Y] items +3. All items (no filter) → [N] items + +Which scope should I analyze? (Recommend: option 1 for team health) +``` + +Apply the chosen filter in Step 1b. + +### 0d. Propose a Plan + +Present a short proposal: + +``` +I found [Sprint Name] ([state], [start] – [end]). 
+ +Proposed plan: +- Analyze [N] items across [M] team members +- [Include/Skip] historical comparison ([K] closed sprints available) +- Generate HTML report for all audiences +- Output to artifacts/sprint-report/ + +Approve to proceed, or tell me what to change. +``` + +Set these defaults (user can override any of them): + +| Setting | Default | Override Example | +| --- | --- | --- | +| Output format | HTML (template exists) | "Make it Markdown" | +| Audience | All | "Just for scrum master" | +| Historical trends | Include if 3+ closed sprints available | "Skip trends" | +| Sprint | Current active sprint | "Use Sprint 2 instead" | + +**Wait for user approval, then immediately continue to Step 1.** Any +affirmative response ("yes", "approve", "looks good", "proceed", "continue") +means go. If the user requests changes, adjust the plan and re-propose. Do +not stall or ask for further confirmation after receiving approval. + +### 0e. CSV or Other Source + +If Jira MCP is not available, or the user provides a CSV: + +- Ask only what you cannot derive: data source path, team name, sprint dates +- Still propose defaults for everything else + +## Step 1: Ingest Data + +The user has approved the plan. Now fetch the full data set. + +### 1a. Discover Custom Fields + +Read `.claude/skills/sprint-report/references/jira-fields.md` for known field +IDs. If the Jira instance is `redhat.atlassian.net`, use the confirmed IDs in +the "Known Fields" section — no discovery needed. + +For other instances, confirm the correct IDs: + +- Use `jira_search_fields` to search for "story point" and "sprint" +- Or fetch a single issue with all fields and inspect the keys: + `jira_search("project = X ORDER BY created DESC", maxResults=1, fields="*all")` +- Record the story points field ID and sprint field ID for subsequent queries + +### 1b. Query Sprint Issues (Full Fetch) + +Now re-fetch the sprint data with the **complete field list**. 
The lightweight +query from Step 0b was just for the proposal — this is the real data pull. + +See `.claude/skills/sprint-report/references/jira-query-patterns.md` for +details. + +``` +jira_get_sprint_issues( + sprint_id=SPRINT_ID, + fields="summary,status,issuetype,priority,assignee,created,updated,resolutiondate,components,description,customfield_XXXXX,customfield_YYYYY" +) +``` + +Replace `customfield_XXXXX` and `customfield_YYYYY` with the IDs from 1a. + +If a team filter was chosen in Step 0c, apply it after fetching: keep only +items where the Team field, component, or assignee matches the filter. + +**DO NOT use `fields=*all`** — this returns 100+ custom fields per issue and +can produce 500k+ characters, exceeding tool output limits. + +### Handling Large Responses + +For sprints with >20 items, the response will likely exceed tool output limits +(~25k tokens) and be **auto-saved to a file**. The error message contains the +file path. + +**Parse the saved file with bash + jq:** + +```bash +jq '.result | fromjson | .issues | map({ + key, summary, + status: .status.name, + issuetype: (.issuetype.name // "Unknown"), + priority: (.priority.name // "Undefined"), + assignee: (.assignee.display_name // "Unassigned"), + story_points: (.customfield_10028.value // 0), + sprint: [.customfield_10020.value[]? | {name, state}], + created, updated, resolutiondate, + components: [.components[]?.name], + description: (.description // "") +})' /path/to/tool-result.txt > /tmp/sprint_data.json +``` + +Then read `/tmp/sprint_data.json` with the Read tool. Adjust the +`customfield_10028` key to match the story points field ID from Step 1a. + +### 1c. Handle Mixed Sprints + +If the query returns items from multiple sprints (carryover): + +- Filter to current sprint ID in post-processing +- Record carryover items separately (count them as a metric) +- Include carryover items in the appendix, clearly marked + +### 1d. 
CSV Ingestion + +Parse rows and map columns to the standard fields: key, type, status, +priority, assignee, story points, created date, resolved date, sprint, AC. + +### 1e. Assess Data Quality + +After ingesting data, compute coverage: + +| Check | How to Measure | +| --- | --- | +| Story points | % of items with a non-null, non-zero points field | +| Issue type | % of items with non-null `issuetype` | +| Resolution dates | % of items with `resolutiondate` set | +| Acceptance criteria | % of items with AC patterns in `description` | +| Priority | % of items with priority set (not "Undefined") | +| Assignee | % of items with an assignee | + +**Minimum requirements:** + +- Sprint has >0 items (if 0: stop and tell the user) +- Sprint has valid start/end dates (if missing: warn but continue) + +**Set fallback metrics when data is sparse:** + +| Metric | Ideal | Fallback | Caveat to Display | +| --- | --- | --- | --- | +| Delivery Rate | Points completed / committed | Items completed / committed | "Item-based (no story points)" | +| Velocity | Avg points per sprint | Avg items per sprint | "Item-based velocity" | +| Cycle Time | Created → resolved (days) | Days in current status | "Estimated from status duration" | +| Story Sizing | Point distribution | Item type distribution | "Cannot analyze sizing without estimates" | +| Priority Analysis | Priority distribution | Skip dimension | "Priority data unavailable (X% Undefined)" | +| Issue Type Analysis | Type breakdown | Treat all as "Item" | "Issue type data unavailable" | + +If 2+ critical gaps exist (no points + no priorities + no AC + <3 days +elapsed), warn the user: + +> Data quality is low — the report will have limited insights. Proceed +> anyway, or wait until items are estimated? + +If proceeding, add a prominent data quality callout at the top of the +Executive Summary. + +### 1f. Historical Data (If Approved in Step 0) + +Query the last 3–5 closed sprints from the same board. 
For each, retrieve +issues the same way as the active sprint and calculate: velocity, completion +rate, carryover count. See +`.claude/skills/sprint-report/references/jira-query-patterns.md` for queries. + +If fewer than 3 closed sprints exist, skip trends and note it in the report. + +### 1g. Identify Team + +1. Extract unique assignees from sprint items +2. Sort alphabetically by last name +3. Format: "F. LastName" (e.g., "C. Zaccaria") +4. If >10 members: show first 8 + "and N more" + +## Step 2: Compute Metrics (All 8 Dimensions) + +Calculate every dimension — do not skip any even if data is sparse. Note when +data is insufficient rather than omitting the dimension. + +| Dimension | Key Metrics | +| --- | --- | +| Commitment Reliability | delivery rate (points or items completed / committed), item completion rate | +| Scope Stability | items added/removed mid-sprint, scope change %, sprint goal alignment | +| Flow Efficiency | cycle time, WIP count, status distribution | +| Story Sizing | point distribution, oversized items (>8 pts), unestimated items | +| Work Distribution | load per assignee, concentration risk (>30% = flag), unassigned items | +| Blocker Analysis | flagged items, blocking/blocked relationships, impediment duration | +| Backlog Health | acceptance criteria coverage, priority distribution, definition of ready | +| Delivery Predictability | carryover count, zombie items (>60 days old), aging analysis | + +### Cycle Time Calculation + +Use `created` → `resolutiondate` from the sprint issue data. This is +sufficient for sprint health reports. + +For deeper analysis on small sprints (<15 items), optionally call +`jira_get_issue_dates` for the top 5–10 resolved items to get precise +time-in-status breakdowns. Do NOT call it for every item — the API overhead +is not worth it for sprint-level reporting. + +### Progress Bar Point Calculation + +The template requires point breakdowns by status. Calculate: + +1. 
**Done points:** Sum story points where status is Done/Closed/Resolved +2. **Review/In Progress/New points:** Sum story points per status group +3. If items lack individual point estimates, distribute remaining points + proportionally by item count per status +4. Compute percentages: `status_pct = (status_points / total_points) * 100` + +### Sprint Goal Alignment + +If the sprint goal is defined (non-empty): + +1. Classify each item as aligned/not-aligned based on whether its summary + relates to a keyword or theme in the goal +2. Calculate alignment percentage +3. If <50%: add observation — "Only X% of items align with stated sprint goal" +4. If >80%: add positive signal + +### Positive Signal Detection + +Actively look for what is going well. Find at least 3 positive signals from: + +- High AC coverage (>70%) +- Low never-started rate (<15%) +- Even work distribution (no one >30% load) +- No critical blockers +- Sprint goal clearly defined +- Items completed on time +- Low carryover rate (<30%) +- Good priority coverage (>70%) +- WIP within limits (80%) + +If <3 found, list whatever exists and note "Limited positive signals this +sprint — opportunity for improvement." + +## Step 3: Detect Anti-Patterns + +Check for each pattern. Only report patterns with supporting data — do not +speculate. + +| Anti-Pattern | Trigger | +| --- | --- | +| Overcommitment | committed > 2× historical velocity | +| Perpetual carryover | items spanning 3+ sprints | +| Missing Definition of Ready | 0% acceptance criteria coverage | +| Work concentration | one person assigned >30% of items | +| Mid-sprint scope injection | items added after sprint start without descoping | +| Zombie items | any open item >60 days old | +| Item repurposing | summary/description changed mid-sprint (requires changelog) | +| Hidden work | items with no status transitions since added (requires changelog) | + +### Systematic Zombie Detection + +Do not just find the oldest item. Check **every** open item: + +1. 
Calculate `age = today − created_date` for all open items +2. Filter where `age > 60 days` +3. If count > 0: list ALL zombie items with key and age +4. If count == 0: record as positive signal + +### Review Bottleneck Check + +If >40% of items are in a "Review"-like status: + +- Flag as flow bottleneck +- Note that "Review" can mean code review, QA, or stakeholder approval +- Recommend the team investigate which type dominates and consider splitting + the status for visibility + +## Step 4: Generate Health Rating + +Compute a risk score on a 0–10 scale: + +| Factor | +3 | +2 | +1 | 0 | +| --- | --- | --- | --- | --- | +| Delivery rate | <50% | 50–69% | 70–84% | 85%+ | +| AC coverage | — | <30% | 30–69% | 70%+ | +| Zombie items | — | 3+ | 1–2 | none | +| Never started | — | >30% | 15–30% | <15% | +| Priority gaps | — | — | <30% prioritized | 30%+ | + +**Rating bands:** 0–3 = HEALTHY, 4–6 = MODERATE RISK, 7–10 = HIGH RISK + +If using fallback metrics (item-based instead of points-based), note the +reduced confidence in the rating. + +## Step 5: Produce the Report + +Generate artifacts in `artifacts/sprint-report/`: + +- `{SprintName}_Health_Report.md` — full Markdown report +- `{SprintName}_Health_Report.html` — styled HTML with KPI cards, progress bars, coaching notes + +Use whichever format(s) the user approved in Step 0. If they said "both," +produce both. + +### Report Structure + +Every report follows this structure regardless of format: + +1. **Executive Summary** — health rating, top 5 numbers, positive signals (minimum 3), data quality note (if applicable) +2. **KPI Dashboard** — delivery rate, WIP count, AC coverage, never-started items, cycle time, carryover +3. **Dimension Analysis** — 8 cards with observations, risks, root causes +4. **Anti-Pattern Detection** — evidence-based pattern cards +5. **Top 5 Actions for Next Sprint** — numbered, actionable +6. **Coaching Notes** — retrospective facilitation, sprint planning, backlog refinement +7. 
**Appendix** — per-item detail table with status, points, assignee, sprint history + +### HTML Template (MANDATORY) + +**Do NOT create HTML from scratch.** You MUST use the template. + +Read the template at `templates/report.html`. The file is ~1285 lines — read +it in chunks if needed (e.g., offset=1/limit=500, then offset=501/limit=500, +then offset=1001/limit=285). + +- Use the exact CSS, HTML structure, and JavaScript from the template +- Replace all `{{PLACEHOLDER}}` markers with computed values +- HTML-escape all Jira-sourced text (issue summaries, descriptions, assignee + names, comments) before interpolation — escape `&`, `<`, `>`, `"`, `'` +- For repeating components (dimension cards, KPI cards, anti-pattern cards, + action cards, coaching cards, observation blocks, appendix rows), replicate + the example pattern for each data item +- The template includes inline HTML comments describing how to repeat patterns + and which CSS classes to use +- Do NOT modify the CSS or JavaScript sections +- Do NOT add features not present in the template (charts, trend graphs, etc.) +- Preserve the sidebar table of contents and all section IDs for scroll-spy + +**Why this matters:** The template contains 753 lines of production CSS, +interactive JavaScript (KPI details, scroll-spy), dark mode, print/PDF +export, and responsive layout. Creating HTML from scratch loses all of this. + +Use a Python script to handle placeholder replacement and repeating section +generation — see "Scripting Policy" below. + +### Placeholder Derivation + +Key placeholders and how to derive them: + +- `{{NEXT_SPRINT_NAME}}` — increment the sprint number if numeric (Sprint 3 → Sprint 4), else "Next Sprint" +- `{{TEAM_MEMBERS}}` — "F. Last" format, comma-separated, truncate with "..." if >100 chars +- `{{POSITIVE_SIGNAL}}` — repeat `
  • ` for each signal (minimum 3) +- `{{DELIVERY_RATE_VALUE}}` — "X.X%" (item-based if no story points; label accordingly) +- `{{DELIVERY_RATE_SUB}}` — "X of Y items completed" or "X of Y points completed" +- Progress bar widths — use point totals if available, otherwise item counts; + show label only if segment width >10% +- `{{DONE_PCT}}`, `{{REVIEW_PCT}}`, `{{INPROG_PCT}}`, `{{NEW_PCT}}` — percentage widths from progress bar calculation + +After rendering, verify no unreplaced placeholders remain: +`grep "{{" output.html` should return nothing. + +## Step 6: Changelog Analysis (Optional) + +Changelog analysis is **optional enrichment**. The core report (Steps 1–5) is +complete without it. + +### When to Include + +- Sprint has <20 items (low data volume) +- Team specifically requested deep analysis +- Investigating known process issues (thrashing, reassignment) + +### When to Skip + +- Sprint has >30 items (data volume too high) +- Basic anti-patterns (zombie items, carryover) already detected +- Time-constrained analysis +- First-time runs (get baseline report first) + +### If Including + +**Preferred tool:** `jira_batch_get_changelogs` — fetches changelogs for +multiple issue keys in one call. + +``` +jira_batch_get_changelogs(issue_keys=["KEY-1", "KEY-2", "KEY-3", ...]) +``` + +Limit to the top 10–15 highest-risk items (oldest, blocked, carryover). + +**Fallback** (if batch tool is unavailable): `jira_search` with +`key in (KEY-1, KEY-2, ...)` and `expand=changelog`. 
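Whichever fetch path is used, the returned changelogs can be scanned with a short script. A minimal sketch, assuming the standard Jira REST changelog shape (`items` entries with `field`, `fromString`, `toString`) and a simplified status order; verify both against your instance:

```python
# Sketch: flag anti-patterns in one issue's changelog histories.
# Assumptions: standard Jira REST changelog shape; STATUS_ORDER matches
# the project's workflow (adjust as needed).

STATUS_ORDER = ["New", "To Do", "In Progress", "Review", "Done"]

def scan_changelog(histories):
    """Return the set of anti-pattern flags found in one issue's changelog."""
    flags = set()
    saw_status_change = False
    for history in histories:
        for item in history.get("items", []):
            field = item.get("field", "").lower()
            if field in ("summary", "description"):
                flags.add("repurposing")        # item repurposed mid-sprint
            elif field == "assignee":
                flags.add("reassignment")       # signals unclear ownership
            elif field == "sprint":
                flags.add("sprint_hopping")     # added/removed mid-sprint
            elif field == "status":
                saw_status_change = True
                src, dst = item.get("fromString"), item.get("toString")
                if (src in STATUS_ORDER and dst in STATUS_ORDER
                        and STATUS_ORDER.index(dst) < STATUS_ORDER.index(src)):
                    flags.add("status_churn")   # moved backward
    if not saw_status_change:
        flags.add("hidden_work")                # no transitions since added
    return flags

# Example: one reassignment plus a backward status move
histories = [
    {"items": [{"field": "assignee", "fromString": "A", "toString": "B"}]},
    {"items": [{"field": "status", "fromString": "Review", "toString": "In Progress"}]},
]
print(sorted(scan_changelog(histories)))  # ['reassignment', 'status_churn']
```

Run it once per fetched issue and attach the resulting flags to the per-item appendix rows.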
+ +### What to Extract + +| Pattern | What to Look For | +| --- | --- | +| Item repurposing | `summary` or `description` field changed mid-sprint | +| Reassignment | `assignee` changed (signals unclear ownership) | +| Status churn | item moved backward (e.g., Review → In Progress) | +| Sprint hopping | `Sprint` field changed (added/removed mid-sprint) | +| Hidden work | no status transitions since item was added | + +### If Skipping + +Add a note in the report: + +> Changelog analysis was not performed. Anti-patterns requiring change history +> (item repurposing, reassignment churn, status regression) could not be +> assessed. + +Integrate findings into the report on the first write — do not produce the +report and then rewrite it. + +## Scripting Policy + +- **Use Python scripts** for data processing that exceeds what tool calls and + inline reasoning can handle: metric computation, template placeholder + replacement, large JSON transformations +- **Use bash + jq** for simple transforms: extracting fields from tool result + files, filtering, counting +- **Do NOT** create reusable frameworks, CLI tools, or generalized analyzers + meant for distribution — scripts should be sprint-specific and disposable +- Do NOT implement features the user didn't ask for (dark mode, PDF export, trend charts, etc.) 
+- Batch tool calls wherever possible (parallel `jira_search` calls, not serial) +- Stick to the requested output format(s) — don't produce both unless asked +- After Step 0 approval, execute the full pipeline without stopping between steps + +## Output + +- Report artifacts in `artifacts/sprint-report/` +- Present a brief summary of the health rating and top findings inline after + generating the report files diff --git a/workflows/sprint-report/.claude/skills/sprint-report/references/jira-fields.md b/workflows/sprint-report/.claude/skills/sprint-report/references/jira-fields.md new file mode 100644 index 0000000..18aea02 --- /dev/null +++ b/workflows/sprint-report/.claude/skills/sprint-report/references/jira-fields.md @@ -0,0 +1,181 @@ +# Jira Fields for Sprint Reports + +Custom field IDs vary by Jira instance. This document describes the fields +the sprint report needs, how to discover their IDs, and provides known +examples from Red Hat's Jira (redhat.atlassian.net). + +## Discovery Process + +Custom fields have opaque IDs like `customfield_10028` that differ across +instances. You must discover the correct IDs at runtime. + +### Option A: Search by Name + +``` +jira_search_fields("story point") +jira_search_fields("sprint") +jira_search_fields("epic") +``` + +Look at the `name` and `description` in results to match the right field. + +### Option B: Inspect a Real Issue + +Fetch one issue with all fields and look for recognizable values: + +``` +jira_search("project = X AND sprint in openSprints() ORDER BY created DESC", maxResults=1, fields="*all") +``` + +Scan the response for: + +- A **float** value (e.g., `3.0`, `5.0`) — that's likely story points +- A **JSON object** with `name`, `state`, `startDate` — that's the sprint field +- A **key reference** like `PROJ-123` — that's likely an epic link + +Record the `customfield_XXXXX` key for each and use it in all subsequent queries. 
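The Option B scan lends itself to a script. A rough sketch; the classification rules mirror the hints above and can false-positive (a date string looks like a key reference), so confirm each guess manually before using it in queries:

```python
# Sketch: heuristically classify customfield_* entries by value shape.
# `issue_fields` is the "fields" dict of one issue fetched with fields="*all".
# Rules are rough guesses; confirm each match manually.

def guess_custom_fields(issue_fields):
    """Return a mapping of role -> customfield ID, based on value shape."""
    guesses = {}
    for key, value in issue_fields.items():
        if not key.startswith("customfield_") or value is None:
            continue
        if isinstance(value, float):
            guesses.setdefault("story_points", key)      # e.g. 3.0, 5.0
        elif (isinstance(value, list) and value
              and isinstance(value[0], dict) and "startDate" in value[0]):
            guesses.setdefault("sprint", key)            # sprint JSON objects
        elif (isinstance(value, str) and "-" in value
              and value.rsplit("-", 1)[1].isdigit()):
            guesses.setdefault("epic_link", key)         # e.g. PROJ-123; may
                                                         # false-positive on dates
    return guesses

sample = {
    "summary": "Fix login bug",
    "customfield_10028": 5.0,
    "customfield_10020": [{"name": "Sprint 3", "state": "active",
                           "startDate": "2025-06-01T00:00:00Z"}],
    "customfield_10014": "PROJ-123",
}
print(guess_custom_fields(sample))
```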
+ +## Custom Fields Needed + +### Story Points + +- **What to search for:** "story point", "story points", "points" +- **Type:** Float (e.g., `1.0`, `3.0`, `8.0`). Found on Stories, Tasks, + Bugs, sometimes Epics. +- **Variants:** Some organizations split estimates by role (DEV points, QE + points, DOC points). If the primary story points field is empty, search for + role-specific variants. + +### Sprint + +- **What to search for:** "sprint" +- **Type:** Array of JSON objects with `name`, `state` (active/closed/future), + `startDate`, `endDate`, `completeDate` + +### Epic Link + +- **What to search for:** "epic link" +- **Type:** Key reference (e.g., `PROJ-123`) + +### Epic Name + +- **What to search for:** "epic name", "epic label" +- **Type:** String. Short display name shown on boards. + +### Team + +- **What to search for:** "team" +- **Type:** Team object with `name`, `id`, `isShared`. Identifies the + Atlassian team assigned to the issue. + +### Target Version + +- **What to search for:** "target version" +- **Type:** Multi-version array (e.g., `["rhoai-3.4"]`). Tracks which + product release the work targets. + +## Known Fields: Red Hat Jira (redhat.atlassian.net) + +These are the confirmed custom field IDs on the Red Hat Jira instance. Other +instances will have different IDs — always verify via discovery. 
+ +| Field | ID | Type | Notes | +| --- | --- | --- | --- | +| Story Points | `customfield_10028` | Float | Primary story points field | +| Story point estimate | `customfield_10016` | Float | GreenHopper/JSW native estimate field | +| DEV Story Points | `customfield_10506` | Float | Developer-specific estimate | +| QE Story Points | `customfield_10572` | Float | QA-specific estimate | +| DOC Story Points | `customfield_10510` | Float | Documentation-specific estimate | +| Original story points | `customfield_10977` | Float | Snapshot of initial estimate | +| Sprint | `customfield_10020` | Sprint JSON array | GreenHopper sprint field | +| sprint_count | `customfield_10975` | Float | Number of sprints an item has been in | +| Epic Link | `customfield_10014` | Key reference | Links issue to parent epic | +| Epic Name | `customfield_10011` | String | Short epic label for boards | +| Epic Status | `customfield_10012` | String | "To Do", "In Progress", "Done" | +| Team | `customfield_10001` | Team object | Atlassian team (e.g., `"AgentOps [RAG + Vector DB]"`) | +| Target Version | `customfield_10855` | Multi-version | Product release (e.g., `["rhoai-3.4"]`) | +| Target end | `customfield_10024` | Date | Roadmap target end date | +| Rank | `customfield_10019` | Lexo-rank | Board ordering (internal use) | +| Epic Type | `customfield_10573` | Select | Classification of epic | +| Cross Team Epic | `customfield_10549` | Radio | Whether epic spans teams | + +### Story Points Strategy for Red Hat Jira + +Check fields in this order: + +1. `customfield_10028` ("Story Points") — most commonly used +2. `customfield_10016` ("Story point estimate") — GreenHopper native +3. If both are null, check role-specific fields: + `customfield_10506` (DEV), `customfield_10572` (QE), `customfield_10510` (DOC) +4. 
If all are null, the item is unestimated — use item-count fallback metrics + +### Issue Hierarchy on Red Hat Jira + +``` +Feature (hierarchy level 2) — RHAISTRAT project, via parent field + └── Epic (hierarchy level 1) — via parent field or Epic Link + └── Story / Task / Bug (hierarchy level 0) +``` + +Items reference their parent via the `parent` field (preferred) or +`customfield_10014` (Epic Link, legacy). The `parent` response includes +the parent's summary, status, priority, and issue type. + +## Standard Fields (Always Available) + +These don't require custom field discovery: + +| Field | Jira Key | Type | +| --- | --- | --- | +| Summary | `summary` | String | +| Status | `status` | Workflow status object | +| Assignee | `assignee` | User object | +| Priority | `priority` | Select (Blocker, Critical, Major, Normal, Minor) | +| Issue Type | `issuetype` | Select (Epic, Story, Task, Bug, etc.) | +| Created | `created` | Datetime | +| Updated | `updated` | Datetime | +| Resolution Date | `resolutiondate` | Datetime (null if unresolved) | +| Components | `components` | Multi-select | +| Fix Version | `fixVersions` | Multi-version | +| Description | `description` | Text (check here for acceptance criteria) | +| Comments | `comment` | Comment list | + +## Workflow Statuses + +Status names and classifications vary by project. 
Common patterns:
+
+### Engineering Projects (Stories/Tasks/Bugs)
+
+```
+New → Backlog → To Do → In Progress → Review → Done / Closed
+```
+
+| Classification | Typical Statuses |
+| --- | --- |
+| Not Started | New, Backlog, To Do, Open |
+| In Progress | In Progress, In Development, Coding |
+| In Review | Review, Code Review, In Review, QA |
+| Done | Done, Closed, Resolved, Release Pending |
+
+### What "Review" Means
+
+"Review" can mean different things depending on the team:
+
+- **Code review** — PR is open, waiting for reviewer
+- **QA review** — testing in progress
+- **Stakeholder review** — waiting for approval
+
+If >40% of items are in Review, flag it as a flow bottleneck and recommend
+the team investigate which type of review dominates the queue.
+
+## Acceptance Criteria Detection
+
+There is no standard Jira field for acceptance criteria. Teams typically put
+them in the `description` field. Look for patterns:
+
+- Heading: `## Acceptance Criteria`, `### AC`, `**Acceptance Criteria**`
+- Checkbox lists: `- [ ] ...` or `* [ ] ...`
+- Numbered criteria: `AC1:`, `AC2:`, etc.
+
+If none of these patterns are found in the description, count the item as
+having no acceptance criteria.
diff --git a/workflows/sprint-report/.claude/skills/sprint-report/references/jira-query-patterns.md b/workflows/sprint-report/.claude/skills/sprint-report/references/jira-query-patterns.md
new file mode 100644
index 0000000..7f6a34c
--- /dev/null
+++ b/workflows/sprint-report/.claude/skills/sprint-report/references/jira-query-patterns.md
@@ -0,0 +1,187 @@
+# Jira Query Patterns for Sprint Reports
+
+## Sprint Discovery
+
+### Step 1: Find the Board
+
+```
+jira_get_agile_boards(project_key="PROJ")
+jira_get_agile_boards(board_name="Team Name")
+```
+
+If the user provides a component name instead of a project, search boards by
+the project that contains that component.
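If several boards come back, a small helper can pick the closest name match. Sketch only: the board dict shape (`id`, `name`) follows the Agile API response, and the fallback to the first board is an assumption you may want to replace with asking the user:

```python
# Sketch: choose a board by fuzzy name match against a team/component hint.
# Assumes board dicts shaped like the Agile API response ({"id": ..., "name": ...}).

def pick_board(boards, team_hint):
    """Pick the board whose name best matches a team or component hint."""
    hint = team_hint.lower()
    exact = [b for b in boards if b["name"].lower() == hint]
    partial = [b for b in boards if hint in b["name"].lower()]
    if exact:
        return exact[0]
    if partial:
        return partial[0]
    return boards[0] if boards else None  # assumption: default to first board

boards = [
    {"id": 11, "name": "Platform Kanban"},
    {"id": 42, "name": "AgentOps Sprint Board"},
]
print(pick_board(boards, "agentops")["id"])  # prints 42
```

If nothing matches, ask the user rather than guessing.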
+ +### Step 2: Find the Active Sprint + +``` +jira_get_sprints_from_board(board_id=BOARD_ID, state="active") +``` + +This returns sprint objects with `id`, `name`, `startDate`, `endDate`, `goal`. + +- If multiple active sprints: ask the user which one +- If no active sprints: offer the most recently closed sprint instead + +### Step 3: Get Sprint Issues + +**Use `jira_get_sprint_issues` with an explicit field list:** + +``` +jira_get_sprint_issues( + sprint_id=SPRINT_ID, + fields="summary,status,issuetype,priority,assignee,created,updated,resolutiondate,components,description,customfield_XXXXX,customfield_YYYYY" +) +``` + +Replace `customfield_XXXXX` and `customfield_YYYYY` with the story points and +sprint field IDs discovered from `references/jira-fields.md`. + +**DO NOT use `fields=*all`.** This returns 100+ custom fields per issue and +can produce 500k+ characters for a typical sprint, exceeding tool output +limits. Explicit field lists keep responses under 50k characters. + +### Alternative: JQL-Based Query + +If sprint-specific APIs aren't available, use JQL: + +``` +jira_search( + jql='sprint = SPRINT_ID ORDER BY status ASC', + fields="summary,status,issuetype,priority,assignee,created,updated,resolutiondate,description,customfield_XXXXX", + maxResults=100 +) +``` + +Or query by component for teams that don't use sprint boards: + +``` +jira_search( + jql='component = "Team Component" AND sprint in openSprints() ORDER BY status ASC', + fields="...", + maxResults=100 +) +``` + +## Cycle Time & Status Transitions + +Use specialized tools instead of computing cycle time from raw dates: + +### jira_get_issue_sla + +Returns pre-computed cycle time, lead time, and time-in-status breakdowns. + +``` +jira_get_issue_sla(issue_key="KEY-1") +``` + +Best for resolved items where you need accurate cycle time. + +### jira_get_issue_dates + +Returns status transition history (timestamps for each status change). 
+ +``` +jira_get_issue_dates(issue_key="KEY-1") +``` + +Use this for WIP aging (how long an item has been in its current status) +and for computing cycle time manually when SLA data isn't available. + +Batch calls for the top 10–15 items rather than every sprint item. + +## Changelog Data + +Changelogs add significant payload. Do NOT include `expand=changelog` on +the main sprint query — fetch changelogs separately for targeted items. + +### Preferred: jira_batch_get_changelogs + +Fetches changelogs for multiple issues in one call (Cloud only): + +``` +jira_batch_get_changelogs(issue_keys=["KEY-1", "KEY-2", "KEY-3"]) +``` + +Use this for the top 10–15 highest-risk items (oldest, blocked, carryover). + +### Fallback: jira_search with expand + +If the batch tool is unavailable: + +``` +jira_search( + jql='key in (KEY-1, KEY-2, KEY-3, ...)', + fields="summary,status", + expand="changelog" +) +``` + +### What to Extract from Changelogs + +| Pattern | What to Look For | +| --- | --- | +| Item repurposing | `summary` or `description` field changed mid-sprint | +| Reassignment | `assignee` field changed | +| Status churn | Item moved backward (e.g., Review → In Progress) | +| Sprint hopping | `Sprint` field changed (item added/removed mid-sprint) | +| Hidden work | No status transitions since item was added to sprint | + +## Historical Sprint Data + +For trend analysis across multiple sprints: + +``` +jira_get_sprints_from_board(board_id=BOARD_ID, state="closed") +``` + +This returns recent closed sprints. For each, query issues the same way as +the active sprint. Calculate per-sprint metrics to build trend data: + +- Velocity (points or items completed) +- Completion rate +- Carryover count +- Scope change percentage + +Limit to 3-5 previous sprints to keep the analysis manageable. 
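Once each sprint's issues are fetched, the per-sprint metrics reduce to a small helper. A sketch under stated assumptions: issues are plain dicts of already-extracted fields, `status_category` is your own normalization of Jira's `statusCategory`, and the story points field ID must match what discovery found:

```python
# Sketch: per-sprint trend metrics from a list of extracted issue dicts.
# `status_category` and `sprint_count` are assumed pre-normalized keys;
# the default points field ID is illustrative and varies by instance.

def sprint_metrics(issues, points_field="customfield_10028"):
    """Velocity, completion rate, and carryover count for one sprint."""
    done = [i for i in issues if i["status_category"] == "Done"]
    velocity = sum(i.get(points_field) or 0 for i in done)   # points completed
    completion_rate = len(done) / len(issues) if issues else 0.0
    carryover = sum(1 for i in issues if i.get("sprint_count", 1) > 1)
    return {"velocity": velocity,
            "completion_rate": completion_rate,
            "carryover": carryover}

issues = [
    {"status_category": "Done", "customfield_10028": 5.0, "sprint_count": 1},
    {"status_category": "Done", "customfield_10028": 3.0, "sprint_count": 2},
    {"status_category": "In Progress", "customfield_10028": 8.0, "sprint_count": 3},
]
print(sprint_metrics(issues))
```

Collect one result per closed sprint to build the trend rows.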
+ +## Common JQL Patterns + +```sql +-- Active sprint items for a project +project = PROJ AND sprint in openSprints() + +-- Active sprint items for a specific component +component = "Component Name" AND sprint in openSprints() + +-- Items carried over from previous sprints (still open, assigned to closed sprints) +project = PROJ AND sprint in closedSprints() AND statusCategory != Done + +-- Unestimated items in current sprint +sprint = SPRINT_ID AND cf[STORY_POINTS_ID] is EMPTY + +-- Items without acceptance criteria (approximate — checks description length) +sprint = SPRINT_ID AND description is EMPTY +``` + +## Tools to Avoid + +| Tool | Why | +| --- | --- | +| `jira_get_all_projects` | Returns 1.9M+ chars. Never needed — use `jira_get_agile_boards` with the board/project/component name the user gave you. | +| `fields=*all` on sprint queries | Returns 100+ custom fields per issue. Use explicit field lists. | +| `expand=changelog` on sprint queries | Bloats responses. Use `jira_batch_get_changelogs` separately for targeted items. | + +## Data Volume Guidelines + +| Query Type | Typical Size | Notes | +| --- | --- | --- | +| 20 issues, explicit fields | 10-30k chars | Ideal | +| 20 issues, `fields=*all` | 300-600k chars | Avoid | +| `jira_batch_get_changelogs` (10 items) | 20-50k chars | Targeted | +| `jira_get_issue_sla` (10 items) | 5-15k chars | Lightweight | +| 20 issues, `fields=*all` + changelog | 500k-1M chars | Never do this | +| `jira_get_all_projects` | 1.9M chars | Never do this | + +If the response exceeds tool output limits (~25k tokens), save to a file and +parse with `jq` or read in chunks. diff --git a/workflows/sprint-report/README.md b/workflows/sprint-report/README.md new file mode 100644 index 0000000..f708367 --- /dev/null +++ b/workflows/sprint-report/README.md @@ -0,0 +1,71 @@ +# Sprint Health Report Workflow + +Generates comprehensive sprint health reports from Jira data. 
Analyzes delivery +metrics across 8 dimensions, detects anti-patterns, computes a health rating, +and produces actionable coaching recommendations — all in a 1–2 turn experience. + +## How It Works + +1. The startup prompt collects context: data source, team, audience, format +2. The agent reads the sprint-report skill and executes the full analysis pipeline +3. Artifacts are written to `artifacts/sprint-report/` + +## Directory Structure + +```text +sprint-report/ +├── .ambient/ +│ └── ambient.json # Workflow config +├── .claude/ +│ └── skills/ +│ └── sprint-report/ +│ └── SKILL.md # Analysis methodology +├── templates/ +│ └── report.html # HTML report template +└── README.md +``` + +## Data Sources + +- **Jira CSV export** — upload a CSV exported from a Jira sprint board +- **Jira MCP** — query Jira directly via `jira_search` with a sprint or board ID +- **Other formats** — the agent adapts to whatever tabular data the user provides + +## Output Formats + +| Format | Description | +| --- | --- | +| Markdown | `{SprintName}_Health_Report.md` — full report with tables | +| HTML | `{SprintName}_Health_Report.html` — styled report with KPI cards, progress bars, and coaching notes using the included template | + +## Metrics Analyzed + +The report covers 8 dimensions: + +1. **Commitment Reliability** — delivery rate, item completion rate +2. **Scope Stability** — mid-sprint additions/removals, scope change % +3. **Flow Efficiency** — cycle time, WIP count, status distribution +4. **Story Sizing** — point distribution, oversized/unestimated items +5. **Work Distribution** — load per assignee, concentration risk +6. **Blocker Analysis** — flagged items, impediment duration +7. **Backlog Health** — acceptance criteria coverage, priority distribution +8. **Delivery Predictability** — carryover count, zombie items, aging + +## Health Rating + +A 0–10 risk score derived from delivery rate, acceptance criteria coverage, +zombie items, never-started items, and priority gaps. 
+ +- **0–3** = HEALTHY +- **4–6** = MODERATE RISK +- **7–10** = HIGH RISK + +## Testing with Custom Workflow + +To test changes before merging: + +| Field | Value | +| --- | --- | +| **URL** | `https://github.com/ambient-code/workflows.git` (or your fork) | +| **Branch** | your branch name | +| **Path** | `workflows/sprint-report` | diff --git a/workflows/sprint-report/templates/report.html b/workflows/sprint-report/templates/report.html new file mode 100644 index 0000000..e7ceaf4 --- /dev/null +++ b/workflows/sprint-report/templates/report.html @@ -0,0 +1,1285 @@ + + + + + +{{REPORT_TITLE}} - {{TEAM_NAME}} + + + +
    + + + + +
    + + +
    +

    {{REPORT_TITLE}}

    +
    {{TEAM_NAME}}
    +
    +
    Report Date
    {{REPORT_DATE}}
    +
    Sprint
    {{SPRINT_LABEL}}
    +
    Team Size
    {{TEAM_SIZE}} members
    +
    Team
    {{TEAM_MEMBERS}}
    +
    +
    + +
    + + +
    +
    Section 1
    +

    Executive Summary

    + + +
    +
    + + {{HEALTH_RATING}} +
    + Score: {{HEALTH_SCORE}}/10+ + +
    +
    +

    How the Health Rating Works

    +
    + The rating is calculated from a risk score — the sum of points across multiple dimensions of sprint health. A higher score means more areas of concern. +
    +
    + Rating thresholds:
+ 🟢 HEALTHY (0–3): Delivering predictably with good process discipline.
+ 🟡 MODERATE RISK (4–6): Mostly on track but process gaps could compound over time.
+ 🔴 HIGH RISK (7–10): Significant delivery problems requiring focused attention. +
    +
    + What contributes to the score: + + + + + + + + + + + +
    Delivery rate < 50%+3
    Delivery rate 50–69%+2
    Delivery rate 70–84%+1
    AC coverage < 30%+2
    AC coverage 30–69%+1
    3+ zombie items+2
    1–2 zombie items+1
    > 30% items never started+2
    15–30% items never started+1
    Priority coverage < 30%+1
    +
    +
    + Use this as a retro conversation starter: “We scored {{HEALTH_SCORE}} this sprint — what are the 1–2 biggest contributors we can address next sprint?” The goal is steady improvement toward HEALTHY, not perfection in one sprint. +
    +
    + + +
    + + +
    + +
    {{DELIVERY_RATE_VALUE}}
    +
    Delivery Rate
    +
    {{DELIVERY_RATE_SUB}}
    +
    + + +
    + +
    {{NEVER_STARTED_VALUE}}
    +
    Items Never Started
    +
    {{NEVER_STARTED_SUB}}
    +
    + + +
    + +
    {{AC_COVERAGE_VALUE}}
    +
    Acceptance Criteria
    +
    AC coverage
    +
    + + +
    + +
    {{OLDEST_ITEM_VALUE}}
    +
    Oldest Open Item
    +
    {{OLDEST_ITEM_KEY}}
    +
    + + +
    + +
    {{CYCLE_TIME_VALUE}}
    +
    Avg Cycle Time
    +
    {{CYCLE_TIME_SUB}}
    +
    + + +
    + +
    {{CARRYOVER_VALUE}}
    +
    Max Sprint Carryover
    +
    Sprints for longest-carried item
    +
    + +
    + + +
    +

    Delivery Rate

    +
    Percentage of committed story points completed by sprint end. The core measure of sprint commitment reliability.
    +
    A consistently low rate means the team is over-committing, getting pulled into unplanned work, or hitting unanticipated blockers. The fix is “commit to less and finish it.”
    +
    Thresholds: 🟢 85%+   🟡 50–84%   🔴 <50%
    +
    Risk score: +1 (70–84%)   +2 (50–69%)   +3 (<50%)
    +
    +
    +

    Items Never Started

    +
    Percentage of items that remained in “New” status for the entire sprint — committed but never picked up.
    +
    These reveal a disconnect between planning and capacity. The team is treating the sprint backlog like a wish list rather than a commitment. Coach the team to only pull in what they genuinely intend to start.
    +
    Thresholds: 🟢 <15%   🟡 15–30%   🔴 >30%
    +
    Risk score: +1 (15–30%)   +2 (>30%)
    +
    +
    +

    Acceptance Criteria

    +
    Percentage of items with acceptance criteria written in their description. Measures definition-of-ready discipline.
    +
    Without AC, “done” is subjective. Low coverage leads to rework, mid-item scope creep, and review delays. This is a leading indicator — fix it and downstream metrics (cycle time, delivery rate) tend to improve.
    +
    Thresholds: 🟢 70%+   🟡 30–69%   🔴 <30%
    +
    Risk score: +1 (30–69%)   +2 (<30%)
    +
    +
    +

    Oldest Open Item

    +
    Age in days of the oldest unfinished item in the sprint. A high number flags stale work that should be descoped or re-evaluated.
    +
    Old items create cognitive drag — they clutter the board, distort metrics, and signal that it’s acceptable to leave things unfinished. Action: close it, descope it, or break it into something achievable this sprint.
    +
    Thresholds: 🟢 <30d   🟡 30–90d   🔴 90d+
    +
    +
    +

    Avg Cycle Time

    +
    Average days from when an item entered the sprint (or was created, if newer) to resolution. Measures how fast work flows through the sprint.
    +
    High cycle time + high delivery rate = finishing things but slowly (large items). High cycle time + low delivery rate = work getting stuck. Look for WIP overload, blocked queues, or handoff delays.
    +
    Thresholds: 🟢 <14d   🟡 14–30d   🔴 30d+
    +
    +
    +

    Max Sprint Carryover

    +
    The highest number of sprints any single item has been carried through. Identifies the worst “zombie” — work that keeps rolling forward without completion.
    +
    An item carried 4+ sprints usually points to unclear ownership, missing prerequisites, or work that should have been descoped. Find it, ask “what’s blocking this from being done or removed?” — the root cause often reveals a systemic issue.
    +
    Thresholds: 🟢 1   🟡 2–3   🔴 4+
    +
    Risk score: +1 (1–2 zombies)   +2 (3+ zombies)
    +
    + + +
    + Story Points by Status +
    +
    {{DONE_PTS}} pts
    +
    {{REVIEW_PTS_LABEL}}
    +
    {{TESTING_PTS_LABEL}}
    +
    {{INPROG_PTS_LABEL}}
    +
    {{NEW_PTS_LABEL}}
    +
    +
    + Resolved ({{DONE_PTS}} pts, {{DONE_ITEMS}} items) + Review ({{REVIEW_PTS}} pts, {{REVIEW_ITEMS}} items) + Testing ({{TESTING_PTS}} pts, {{TESTING_ITEMS}} items) + In Progress ({{INPROG_PTS}} pts, {{INPROG_ITEMS}} items) + New / Not Started ({{NEW_PTS}} pts, {{NEW_ITEMS}} items) +
    +
    + + +
    + #1 Recommended Action +

    {{TOP_RECOMMENDATION}}

    +
    +
    + + +
    + Positive Signals +
      + +
    • {{POSITIVE_SIGNAL}}
    • +
    +
    + + +
    + {{CALLOUT_TITLE}} + {{CALLOUT_BODY}} +
    + + + +
    +
    Section 2
    +

    Key Sprint Observations

    + + + + + + + + + + + + + +
    ObservationDetailImpact
    {{OBSERVATION_TITLE}}{{OBSERVATION_DETAIL}}{{OBSERVATION_IMPACT}}
    +
    + + + +
    +
    Section 3
    +

    Dimension Analysis

    + + +
    +
    + {{DIM_NUMBER}} +

    {{DIM_TITLE}}

    +
    +
    +

    Observations

    +
      +
    • {{DIM_OBSERVATION}}
    • +
    +

    Potential Risks

    +
      +
    • {{DIM_RISK}}
    • +
    +

    Coaching Recommendations

    +
      +
    • {{DIM_RECOMMENDATION}}
    • +
    +
    +
    + +
    + + + +
    +
    Section 4
    +

    Agile Anti-Patterns Detected

    + +
    + +
    +
    {{ANTIPATTERN_NAME}}
    +
    {{ANTIPATTERN_EVIDENCE}}
    +
    {{ANTIPATTERN_IMPACT}}
    +
    +
    +
    + + + +
    +
    Section 5
    +

    Flow Improvement Opportunities

    + + +

    {{FLOW_SUBSECTION_TITLE}}

    +
      +
    • {{FLOW_ACTION_TITLE}} {{FLOW_ACTION_DETAIL}}
    • +
    +
    + + + +
    +
    Section 6
    +

    Backlog Improvement Opportunities

    + +

    Structural Issues

    +
      +
    1. {{BACKLOG_ISSUE}} — {{BACKLOG_ISSUE_DETAIL}}
    2. +
    + +

    Recommendations

    +
      +
    • {{BACKLOG_REC_TITLE}} {{BACKLOG_REC_DETAIL}}
    • +
    +
    + + + +
    +
    Section 7
    +

    Top 5 Actions for {{NEXT_SPRINT_NAME}}

    + +
    + +
    +
    {{ACTION_NUMBER}}
    +
    +
    {{ACTION_TITLE}}
    +
    {{ACTION_IMPACT}}
    +
    {{ACTION_EVIDENCE}}
    +
    +
    +
    +
    + + + +
    +
    Section 8
    +

    Agile Coaching Notes

    + + +
    +
    For the Sprint Retrospective
    +
    +

    Suggested focus areas:

    +
      +
    • {{RETRO_FOCUS_AREA}}
    • +
    +

    Facilitation tips:

    +
      +
    • {{RETRO_TIP}}
    • +
    +
    +
    + + +
    +
    For Sprint Planning
    +
    +

    Key principles:

    +
      +
    • {{PLANNING_PRINCIPLE}}
    • +
    +
    +
    + + +
    +
    For Backlog Refinement
    +
    +

    Session structure (60 min):

    +
      +
    1. {{REFINEMENT_STEP}}
    2. +
    +

    Definition of Ready checklist:

    +
      +
    • {{DOR_ITEM}}
    • +
    +
    +
    +
    + + + +
    +
    Section 9
    +

    Additional Observations

    +

    The following patterns were detected from enrichment data (changelogs, comments, sprint history) and may warrant discussion.

    + + +
    +
    + {{OBS_SEVERITY}} + {{OBS_TITLE}} +
    +

    {{OBS_BODY}}

    + +

    Affected: {{OBS_AFFECTED_KEYS}}

    +
    +
    + + + +
    +
    Appendix
    +

    Sprint Item Tracker

    + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Issue KeyTypeStatusPtsAssigneeAgeSprint HistoryNotes
    {{ITEM_KEY}}{{ITEM_TYPE}}{{ITEM_STATUS}}{{ITEM_POINTS}}{{ITEM_ASSIGNEE}}{{ITEM_AGE}}
    {{SPRINT_LABEL_SHORT}}
    {{ITEM_NOTES}}
    +
    +
    + +
    + +
    + This report is intended to support the team's continuous improvement journey. The observations and recommendations are systemic in nature and should be discussed collaboratively. The goal is not to assign blame but to identify process improvements that enable the team to deliver more predictably, with higher quality, and with less stress. +
    + +
    +
    + + + + + + +