diff --git a/workflows/document-review/.ambient/ambient.json b/workflows/document-review/.ambient/ambient.json new file mode 100644 index 0000000..41252dd --- /dev/null +++ b/workflows/document-review/.ambient/ambient.json @@ -0,0 +1,12 @@ +{ + "name": "Document Review", + "description": "Systematic workflow for reviewing a project's documentation — assessing quality, completeness, accuracy, and consistency, then generating actionable findings.", + "systemPrompt": "You are a documentation quality specialist for the Ambient Code Platform.\n\nYou are controlled by a workflow controller at:\n .claude/skills/controller/SKILL.md\n\nRead it at the start of the session. It defines how to execute phases, recommend next steps, and handle transitions.\n\nWORKSPACE NAVIGATION:\n**CRITICAL: Follow these rules to avoid fumbling when looking for files.**\n\nStandard file locations (from workflow root):\n- Controller: .claude/skills/controller/SKILL.md\n- Phase skills: .claude/skills/{name}/SKILL.md\n- Commands: .claude/commands/*.md\n- Output files: artifacts/*.md\n\nTool selection rules:\n- Use Read for: Known paths, standard files, files you just created\n- Use Glob for: Discovery (finding multiple files by pattern)\n- Use Grep for: Content search\n\nNever glob for standard files:\n✅ DO: Read .ambient/ambient.json\n❌ DON'T: Glob **/ambient.json", + "startupPrompt": "Welcome! I'm your documentation review specialist. 
I systematically review project documentation to identify quality issues and classify them by severity.\n\n## Available Commands\n\n- `/scan` — Discover and catalog all documentation in the project\n- `/quality-review` — Deep quality review of the entire documentation corpus\n- `/code-check` — Cross-reference documentation against source code\n- `/report` — Consolidate all findings into a deduplicated report\n- `/full-review` — Run scan → quality-review + code-check → report in one shot\n- `/jira` — Create a Jira epic with child bugs/tasks from the report\n\n## Getting Started\n\nPoint me at a project repository or documentation path and I'll get started. You can either:\n1. Provide a repository URL or local path\n2. Run `/scan` to auto-discover documentation in the current workspace\n3. Run `/full-review` for a complete review in one pass\n", + "results": { + "Inventory": "artifacts/inventory.md", + "Quality Review Findings": "artifacts/findings-quality-review.md", + "Code Check Findings": "artifacts/findings-code-check.md", + "Report": "artifacts/report.md" + } +} diff --git a/workflows/document-review/.claude/commands/code-check.md b/workflows/document-review/.claude/commands/code-check.md new file mode 100644 index 0000000..317f9bf --- /dev/null +++ b/workflows/document-review/.claude/commands/code-check.md @@ -0,0 +1,13 @@ +# /code-check — Cross-reference docs against source code + +Verifies documentation claims against actual source code using parallel +discovery agents. Finds mismatches, undocumented features, and stale references. +Writes findings to `artifacts/findings-code-check.md`. + +**Requires:** `/scan` must have been run first. + +Read `.claude/skills/controller/SKILL.md` and follow it. + +Dispatch the **code-check** phase. 
Context: + +$ARGUMENTS diff --git a/workflows/document-review/.claude/commands/full-review.md b/workflows/document-review/.claude/commands/full-review.md new file mode 100644 index 0000000..9ea45fb --- /dev/null +++ b/workflows/document-review/.claude/commands/full-review.md @@ -0,0 +1,10 @@ +# /full-review — Run the complete review pipeline + +Runs scan → quality-review + code-check (parallel) → report in one shot, +pausing only for critical decisions. Results are written to `artifacts/`. + +Read `.claude/skills/controller/SKILL.md` and follow it. + +Dispatch the **full-review** phase. Context: + +$ARGUMENTS diff --git a/workflows/document-review/.claude/commands/jira.md b/workflows/document-review/.claude/commands/jira.md new file mode 100644 index 0000000..3101e3d --- /dev/null +++ b/workflows/document-review/.claude/commands/jira.md @@ -0,0 +1,53 @@ +# /jira - Create Jira Issues From Report + +## Purpose + +Creates a Jira epic from the documentation review report, with a child bug or +task for each finding. Bugs are for findings that impact external users or +customers. Tasks are for findings that impact developers or are maintenance +items. + +## Prerequisites + +- `artifacts/report.md` must exist (run `/report` first) +- `JIRA_URL`, `JIRA_EMAIL`, and `JIRA_API_TOKEN` environment variables must be set + +## Usage + +```text +/jira [project] [component=...] [labels=...] [team=...] [status=...] +``` + +Arguments override environment variables. If no project key is given, the +`JIRA_PROJECT` environment variable is used. Any field not provided via +arguments or environment variables will be prompted for — the user must +explicitly set a value or confirm it should be left blank. 
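The precedence rule for `/jira` fields (explicit argument, then environment variable, then interactive prompt) can be sketched in Python. The function name, the `ENV_MAP` structure, and the `prompt_user` hook below are illustrative assumptions, not the command's actual implementation.

```python
import os

# Sketch of the documented resolution order for one Jira field:
# 1. explicit command argument, 2. environment variable, 3. prompt the user.
# ENV_MAP and prompt_user are assumptions made for illustration only.

ENV_MAP = {
    "project": "JIRA_PROJECT",
    "component": "JIRA_COMPONENT",
    "labels": "JIRA_LABELS",
    "team": "JIRA_TEAM",
    "status": "JIRA_INITIAL_STATUS",
}

def resolve_field(name, args, prompt_user=input):
    """Return the value for one Jira field, following the documented precedence."""
    if name in args:                      # 1. explicit argument wins
        return args[name]
    env_value = os.environ.get(ENV_MAP[name])
    if env_value:                         # 2. fall back to the environment
        return env_value
    # 3. otherwise the user must set a value or confirm leaving it blank
    return prompt_user(f"Value for {name} (empty to leave blank): ")
```

A value of `none`, as in the second example below, would still count as an explicit argument under this scheme.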
+ +### Environment Variables + +| Variable | Purpose | +|----------|---------| +| `JIRA_PROJECT` | Default Jira project key | +| `JIRA_COMPONENT` | Default component name | +| `JIRA_LABELS` | Default comma-separated labels | +| `JIRA_TEAM` | Default team name | +| `JIRA_INITIAL_STATUS` | Workflow transition after creation (e.g., `Backlog`) | + +### Examples + +```text +/jira DOCS component=documentation labels=docs,review team=docs-team status=Backlog +/jira DOCS component=none labels=none team=none status=New +``` + +## Process + +1. Read the skill at `.claude/skills/jira/SKILL.md` +2. Execute the skill's steps + +## Output + +Jira issues created via the Jira REST API: + +- One **Epic** summarizing the full review +- One **Bug** or **Task** per finding, as children of the epic diff --git a/workflows/document-review/.claude/commands/quality-review.md b/workflows/document-review/.claude/commands/quality-review.md new file mode 100644 index 0000000..176cd8e --- /dev/null +++ b/workflows/document-review/.claude/commands/quality-review.md @@ -0,0 +1,13 @@ +# /quality-review — Deep quality analysis of documentation + +Evaluates every document in the inventory against 7 quality dimensions +(accuracy, completeness, consistency, clarity, currency, structure, examples) +and writes findings to `artifacts/findings-quality-review.md`. + +**Requires:** `/scan` must have been run first. + +Read `.claude/skills/controller/SKILL.md` and follow it. + +Dispatch the **quality-review** phase. Context: + +$ARGUMENTS diff --git a/workflows/document-review/.claude/commands/report.md b/workflows/document-review/.claude/commands/report.md new file mode 100644 index 0000000..4df907e --- /dev/null +++ b/workflows/document-review/.claude/commands/report.md @@ -0,0 +1,14 @@ +# /report — Consolidate findings into a deduplicated report + +Merges all findings from `/quality-review` and `/code-check` into a single +report grouped by severity, with a dimension × severity summary table. 
+Writes to `artifacts/report.md`. + +**Requires:** At least one of `/quality-review` or `/code-check` must have +been run first. + +Read `.claude/skills/controller/SKILL.md` and follow it. + +Dispatch the **report** phase. Context: + +$ARGUMENTS diff --git a/workflows/document-review/.claude/commands/scan.md b/workflows/document-review/.claude/commands/scan.md new file mode 100644 index 0000000..aa05834 --- /dev/null +++ b/workflows/document-review/.claude/commands/scan.md @@ -0,0 +1,13 @@ +# /scan — Discover and catalog all documentation + +Discovers all documentation files in the project, classifies them by format, +topic, and audience, and writes a structured inventory to `artifacts/inventory.md`. + +**Arguments:** Optional path or glob to limit the scan to specific files or +directories. If omitted, scans the entire project. + +Read `.claude/skills/controller/SKILL.md` and follow it. + +Dispatch the **scan** phase. Context: + +$ARGUMENTS diff --git a/workflows/document-review/.claude/skills/code-check/SKILL.md b/workflows/document-review/.claude/skills/code-check/SKILL.md new file mode 100644 index 0000000..4ab8486 --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/SKILL.md @@ -0,0 +1,225 @@ +--- +name: code-check +description: Cross-reference documentation claims against actual source code. +--- + +# Code Check Documentation Skill + +You are cross-referencing a project's documentation against its actual source +code to verify accuracy. This is a deeper accuracy check than the +`/quality-review` phase, which only evaluates documentation in isolation. + +You work in three stages: + +1. **Reconnaissance** — Detect languages, frameworks, and components +2. **Discovery** — Dispatch parallel agents to build a code inventory +3. **Verification** — Cross-reference the code inventory against documentation + +## Critical Rules + +- **Read the inventory first.** This phase requires `/scan` to have been run. 
+ If `artifacts/inventory.md` does not exist, inform the user and recommend + running `/scan` first. +- **Read, don't run.** This is static analysis — read source code, don't + execute it. +- **Be precise.** Every finding must include a direct quote from the + documentation AND an actual code snippet. Use fenced code blocks with + language tags. +- **Flag undocumented features.** Code functionality with no documentation is + a High finding. When checking a code area (e.g., a struct or route group), + scan all fields/routes — not just what docs mention. +- **Separate uncertain findings.** Low-confidence findings go in a dedicated + section, not in the main findings. Fuzzy name matches (e.g., `MAAS_DB_HOST` + vs `DB_HOST`) belong in Low-Confidence. + +## Stage 1: Reconnaissance + +Perform this yourself — no agents needed. This determines which discovery +agents to spawn in Stage 2. + +### Step 1.1: Language & Framework Detection + +Use Glob to check for these signature files in the project root and +subdirectories: + +| File | Language/Framework | +|------|-------------------| +| `go.mod` | Go | +| `Cargo.toml` | Rust | +| `package.json` | Node.js/TypeScript | +| `pyproject.toml`, `setup.py`, `requirements.txt` | Python | +| `pom.xml`, `build.gradle` | Java | +| `Gemfile` | Ruby | + +For detected languages, check framework markers: + +- Go: `go.mod`/`go.sum` for `controller-runtime`, `cobra`, `gin`, `echo`, + `chi` +- Python: `pyproject.toml`/`requirements.txt` for `fastapi`, `flask`, + `django`, `click` +- Node.js: `package.json` for `express`, `next`, `nestjs`, `commander` +- Java: `pom.xml`/`build.gradle` for `spring-boot`, `spring-web`, `quarkus`, + `micronaut`, `picocli`, `javax.ws.rs` (JAX-RS) +- Ruby: `Gemfile` for `rails`, `sinatra`, `grape`, `thor` + +### Step 1.2: Component Detection + +Use Glob to find multiple `go.mod`, `package.json`, `Cargo.toml`, +`pyproject.toml` files. 
Each distinct directory containing one (that is NOT +the project root) is a component. Also check for directories with their own +`README.md` + `Makefile` or `Dockerfile`. + +### Step 1.3: Category Applicability + +Determine which discovery agents to spawn: + +| Category | Reference file | Spawn condition | +|----------|---------------|----------------| +| Env vars | `discovery-env-vars.md` | Always | +| CLI args | `discovery-cli-args.md` | Entry points found (`main.go`, `main.py`, `bin/`, CLI framework) | +| Config schema | `discovery-config-schema.md` | Config files or config library imports | +| API schema | `discovery-api-schema.md` | OpenAPI specs, protobuf, HTTP framework imports | +| Data models | `discovery-data-models.md` | CRD directories, migration files, ORM imports | +| File I/O | `discovery-file-io.md` | File write operations or output file references in docs | +| External deps | `discovery-external-deps.md` | Database drivers, HTTP clients, message queues | +| Build/deploy | `discovery-build-deploy.md` | Makefiles, Dockerfiles, CI configs | + +### Step 1.4: Compile Project Profile + +Build a concise profile block to pass to discovery agents: + +```text +PROJECT PROFILE +Languages: Go, Python +Frameworks: controller-runtime, cobra +Components: + - maas-api (Go) — maas-api/ + - maas-controller (Go) — maas-controller/ +Discovery agents to spawn: env-vars, cli-args, api-schema, data-models, build-deploy +``` + +## Stage 2: Discovery + +Dispatch discovery agents in PARALLEL using the Agent tool with +`subagent_type: Explore`. + +For each agent to spawn: + +1. Read the agent's prompt template from + `.claude/skills/code-check/references/discovery-{category}.md` +2. Read the inventory format spec from + `.claude/skills/code-check/references/inventory-format.md` +3. Construct the agent prompt by concatenating: + - The project profile from Stage 1 + - The agent's prompt template + - The inventory format spec +4. 
Dispatch via Agent tool with `subagent_type: Explore` + +Issue ALL Agent tool calls in a SINGLE response to maximize parallelism. + +**Per-component splitting:** For projects with 4+ components, consider +spawning per-component agents for categories like API Schema or Data Models +(e.g., one API Schema agent for `maas-api/`, another for +`maas-controller/`). Include the component path scope in the agent prompt. + +### Merge Discovery Results + +After all agents return: + +1. Collect all inventory fragments +2. Empty fragments (zero items) are valid — record as "no items found" +3. For failed agents: log the failure, continue with others +4. Deduplicate: + - Primary match: item name (exact) + type + - Secondary match: same source file and line across agents + - Merge source locations into single entry + - For conflicting values: preserve conflict explicitly (e.g., + `Default: "30" (per config-schema agent) / "90" (per env-vars agent)`) + - Use entry with most non-null fields as base + - Use the env var name as the canonical item name when the item is an + environment variable +5. Organize by workflow (installation, then usage, then both), then by + category within each section + +Hold the merged code inventory in context for Stage 3. + +## Stage 3: Verification + +Cross-reference the code inventory against the documentation files cataloged +in `artifacts/inventory.md`. 
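Conceptually, this cross-referencing reduces to set comparisons between the item names mentioned in the docs and those in the merged code inventory. A minimal sketch follows; the data shapes are assumptions for illustration, and real verification also compares values (defaults, types), not just names:

```python
# Minimal sketch of the cross-reference step: compare documented item
# names against the merged code inventory. Value-level checks (defaults,
# types, paths) still have to happen for the overlapping items.

def cross_reference(doc_items, code_items):
    """Return candidate findings; both arguments are sets of item names."""
    return {
        # in code but never mentioned in docs -> undocumented (typically High)
        "undocumented": code_items - doc_items,
        # in docs but absent from the inventory -> stale candidate;
        # confirm absence with Grep/Glob before reporting
        "stale_candidates": doc_items - code_items,
        # present in both -> still needs a value-level accuracy check
        "needs_value_check": doc_items & code_items,
    }
```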
+ +### What to Check + +For each documentation file, identify claims about code behavior and compare +against the code inventory: + +**Accuracy — do docs match code?** + +- API endpoints: routes, methods, paths, auth requirements +- CLI flags: names, types, defaults, help text +- Config options: field names, types, defaults +- Default values: what the code actually sets +- Behavior descriptions: auth flows, error handling, rate limiting + +**Completeness — are code features documented?** + +- Code inventory items with NO mention in any documentation file +- CRD fields, API endpoints, CLI flags, env vars missing from docs +- Runtime behaviors not explained + +**Staleness — do docs reference things that no longer exist?** + +- Env vars, CLI flags, API endpoints, config fields in docs but NOT in the + code inventory — confirm absence with Grep/Glob before reporting +- File paths that don't exist in the repo +- Components or modules that were removed +- Dead internal links + +### Verification Process + +1. Read each documentation file from the inventory +2. For each verifiable claim, compare against the code inventory +3. For items in the code inventory with no doc coverage, flag as undocumented +4. For items in docs with no code inventory match, verify absence with + Grep/Glob before flagging as stale +5. Record each result: + - **Match**: Doc accurately reflects code — no finding needed + - **Mismatch**: Doc contradicts code — typically Critical, but use judgment + - **Partial**: Doc is incomplete or imprecise — typically Low, but Medium + or higher if the gap affects a user-facing API or procedure + - **Undocumented**: Code feature not in docs — typically High, but Medium + for internal-only features or Low for minor config options + - **Stale**: Doc references removed functionality — typically High, but + Critical if users would follow broken instructions + +### Record Findings + +Follow the template at `templates/findings-code-check.md`. 
Write to +`artifacts/findings-code-check.md`. + +Each finding must include: + +- **Severity**: Critical, High, Medium, or Low (use the guidance in the + verification process above, but assess each finding individually) +- **Dimension**: Accuracy or Completeness +- **File**: Doc file path and line (e.g., `README.md:85`) +- **Code location**: Source file and line +- **Documented claim**: What the docs say (direct quote) +- **Actual behavior**: What the code does +- **Evidence**: Code snippet in a fenced code block with language tag +- **Fix**: Correction, if known with high confidence (omit if unsure) + +## Output + +- `artifacts/findings-code-check.md` + +## When This Phase Is Done + +Report to the user: + +- Total findings by type (mismatch, partial, undocumented, stale) +- Inventory coverage (which agents ran, items found per category) +- Top 3 most critical findings + +Then **re-read the controller** (`.claude/skills/controller/SKILL.md`) for +next-step guidance. diff --git a/workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md b/workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md new file mode 100644 index 0000000..5885a5a --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md @@ -0,0 +1,72 @@ +# API Schema Discovery Agent + +You are a discovery agent. Your job is to find ALL API endpoints, their +request/response schemas, and authentication requirements in this project. 
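As an illustration of what this agent produces, here is a minimal Python sketch that extracts FastAPI/Flask-style route decorators and normalizes each hit to the `METHOD /path` item-name convention. It is an assumed simplification covering only two of the many patterns in the search strategy:

```python
import re

# Illustrative sketch: pull route decorators like @app.get("/path") out of
# source text and normalize them to "METHOD /path" item names. A real scan
# would cover many more frameworks and registration styles.

ROUTE_RE = re.compile(
    r'@(?:app|router)\.(get|post|put|delete|patch)\(\s*["\']([^"\']+)["\']'
)

def find_endpoints(source_text):
    """Return a list of 'METHOD /path' strings found in one source file."""
    return [
        f"{method.upper()} {path}"
        for method, path in ROUTE_RE.findall(source_text)
    ]
```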
+ +## Search Strategy + +**OpenAPI/Swagger specs:** + +- Files: `openapi.yaml`, `openapi.json`, `openapi3.yaml`, `swagger.yaml`, + `swagger.json` +- These are the richest source — extract all paths, methods, parameters, + request bodies, and response schemas + +**Go:** + +- HTTP handler registrations: `http.HandleFunc(`, `mux.HandleFunc(`, + `router.Handle(`, `r.GET(`, `r.POST(` +- Gin/Echo/Chi/Gorilla route definitions +- gRPC service definitions in `.proto` files +- `// @Summary`, `// @Router` — Swagger annotations + +**Python:** + +- FastAPI route decorators: `@app.get(`, `@app.post(`, `@router.` +- Flask routes: `@app.route(` +- Django URL patterns: `urlpatterns`, `path(` +- gRPC `.proto` files + +**Node.js/TypeScript:** + +- Express routes: `app.get(`, `app.post(`, `router.` +- NestJS decorators: `@Get(`, `@Post(`, `@Controller(` +- tRPC router definitions + +**Java:** + +- Spring MVC/WebFlux: `@GetMapping(`, `@PostMapping(`, `@PutMapping(`, + `@DeleteMapping(`, `@RequestMapping(` +- `@RestController`, `@Controller` — controller class annotations +- JAX-RS: `@GET`, `@POST`, `@PUT`, `@DELETE`, `@Path(` +- Quarkus RESTEasy: same JAX-RS annotations + +**Ruby:** + +- Rails routes: `get '`, `post '`, `resources :`, `namespace :` in + `config/routes.rb` +- Sinatra: `get '/'`, `post '/'` route definitions +- Grape API: `resource :`, `get`, `post` in API classes + +**Protobuf/gRPC:** + +- `.proto` files — service definitions, message types, RPC methods +- Generated code markers + +## Instructions + +1. First check for OpenAPI/Swagger spec files — if found, these are + authoritative +2. Search for route registrations to find all endpoints +3. For each endpoint, extract: path, HTTP method, request parameters/body + schema, response schema, auth requirements +4. Note API versioning patterns (path prefix like `/v1/`, header-based, etc.) +5. Check for middleware that applies auth, rate limiting, or other + cross-cutting concerns +6. 
Workflow is always `usage` + +## Output + +Produce your output following the inventory fragment format spec appended +below. For API endpoints, use the format `METHOD /path` as the ITEM_NAME +(e.g., `GET /api/v1/models`). diff --git a/workflows/document-review/.claude/skills/code-check/references/discovery-build-deploy.md b/workflows/document-review/.claude/skills/code-check/references/discovery-build-deploy.md new file mode 100644 index 0000000..88eea12 --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/discovery-build-deploy.md @@ -0,0 +1,65 @@ +# Build & Deployment Discovery Agent + +You are a discovery agent. Your job is to find ALL build targets, deployment +configurations, CI/CD pipelines, and infrastructure definitions in this +project. + +## Search Strategy + +**Makefiles:** + +- Read all `Makefile` files — extract target names, descriptions (from + comments), prerequisites +- Look for `.PHONY` declarations +- Note which targets are documented in README vs. 
available + +**Dockerfiles:** + +- `Dockerfile`, `Dockerfile.*`, `*.dockerfile` +- Extract: base images, build stages, exposed ports, entrypoint/CMD +- `ARG` and `ENV` directives + +**CI/CD:** + +- `.github/workflows/*.yml` — GitHub Actions +- `.gitlab-ci.yml` — GitLab CI +- `Jenkinsfile` — Jenkins +- `.circleci/config.yml` — CircleCI +- Extract: workflow names, trigger conditions, job names, key steps + +**Kubernetes/Kustomize/Helm:** + +- `kustomization.yaml` files — list all overlays and components +- Helm `Chart.yaml`, `values.yaml` — chart metadata and configurable values +- Deployment manifests: `Deployment`, `Service`, `ConfigMap`, `Secret` + definitions +- Note configurable parameters (image tags, replicas, resource limits) + +**Terraform:** + +- `*.tf` files — resources, variables, outputs +- `variables.tf` — input variables +- `outputs.tf` — output values + +**Scripts:** + +- `scripts/`, `bin/`, `hack/` directories +- Deployment scripts, setup scripts, utility scripts +- Extract: script purpose (from comments or name), arguments, prerequisites + +## Instructions + +1. Start with Makefiles — they often provide the entry point to understanding + build/deploy +2. Map out all Dockerfiles and their build stages +3. Catalog CI/CD workflows and their triggers +4. List all Kustomize overlays, Helm values, or Terraform variables +5. Find deployment/setup scripts and their arguments +6. Workflow is almost always `installation` +7. Note prerequisites (tools that must be installed, access requirements) + +## Output + +Produce your output following the inventory fragment format spec appended +below. Use the target/script/workflow name as the ITEM_NAME (e.g., +`make deploy`, `scripts/deploy.sh`, `build-test.yml`). 
diff --git a/workflows/document-review/.claude/skills/code-check/references/discovery-cli-args.md b/workflows/document-review/.claude/skills/code-check/references/discovery-cli-args.md new file mode 100644 index 0000000..aa42a97 --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/discovery-cli-args.md @@ -0,0 +1,70 @@ +# CLI Arguments Discovery Agent + +You are a discovery agent. Your job is to find ALL command-line arguments, +flags, and subcommands that this project defines. + +## Search Strategy + +**Go:** + +- `cobra.Command{` — Cobra command definitions (check `Use`, `Short`, `Long`, + `RunE`) +- `.Flags().String(`, `.Flags().Bool(`, `.Flags().Int(` — flag definitions +- `.PersistentFlags()` — persistent flags +- `flag.String(`, `flag.Bool(`, `flag.Int(` — stdlib flag package +- `pflag.` — spf13/pflag + +**Python:** + +- `argparse.ArgumentParser` — parser creation +- `parser.add_argument(` — argument definitions +- `@click.command`, `@click.option`, `@click.argument` — Click framework +- `typer.Option(`, `typer.Argument(` — Typer framework + +**Node.js/TypeScript:** + +- `yargs` — option definitions +- `commander` — command/option definitions +- `meow` — CLI helper +- `process.argv` — raw argument access + +**Rust:** + +- `clap::Command`, `clap::Arg` — Clap definitions +- `#[derive(Parser)]` — derive-based Clap +- `structopt` — StructOpt definitions + +**Java:** + +- `@Command`, `@Option`, `@Parameters` — Picocli annotations +- `Options`, `Option.builder(` — Apache Commons CLI +- Spring Boot `ApplicationRunner`, `CommandLineRunner` — check `run(` args +- `args` parameter in `public static void main(String[] args)` — raw access + +**Ruby:** + +- `OptionParser.new` — stdlib option parsing +- `Thor` subclass definitions — Thor CLI framework +- `ARGV` — raw argument access + +**Shell scripts:** + +- `getopts` — option parsing +- `case` statements processing `$1`, `$2`, etc. 
+- Usage/help text in functions or heredocs + +## Instructions + +1. First, find entry points: files with `func main()`, `if __name__`, `bin/` + scripts, etc. +2. Search for CLI framework imports to determine which patterns to prioritize +3. For each flag/argument found, extract: name, short form, type, default, + help text +4. Map out subcommand trees if applicable (parent -> child commands) +5. Look for hidden flags (e.g., `flag.Hidden = true` in Cobra) +6. Exclude test-only CLI definitions +7. Workflow is almost always `usage` unless it's a build/deploy script + +## Output + +Produce your output following the inventory fragment format spec appended below. diff --git a/workflows/document-review/.claude/skills/code-check/references/discovery-config-schema.md b/workflows/document-review/.claude/skills/code-check/references/discovery-config-schema.md new file mode 100644 index 0000000..c1aeaa5 --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/discovery-config-schema.md @@ -0,0 +1,68 @@ +# Config Schema Discovery Agent + +You are a discovery agent. Your job is to find ALL configuration file schemas, +config fields, and config-loading mechanisms in this project. 
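One concrete check this agent performs, comparing an example config file against the declared schema, can be sketched as a set comparison over field names. The field names in the test are invented for illustration:

```python
# Minimal sketch of example-vs-schema drift detection: fields declared in
# the config struct but absent from config.example.yaml, and keys present
# in the example that no schema field declares.

def config_drift(schema_fields, example_keys):
    """Return fields missing from the example and unknown keys in it."""
    schema, example = set(schema_fields), set(example_keys)
    return {
        "missing_from_example": schema - example,
        "unknown_in_example": example - schema,
    }
```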
+ +## Search Strategy + +**Go:** + +- Struct definitions with `yaml:`, `json:`, `toml:`, `mapstructure:` tags — + these define config file schemas +- `viper.SetDefault(`, `viper.GetString(`, `viper.Get(` — Viper config access +- `koanf` usage — alternative config library +- Config file loading: `viper.SetConfigName(`, `viper.AddConfigPath(` +- `envconfig.Process(` — kelseyhightower/envconfig + +**Python:** + +- Pydantic `BaseSettings` or `BaseModel` classes used for config +- `configparser` usage +- `yaml.safe_load(` / `json.load(` of config files +- Django `settings.py` patterns +- `dynaconf` or `python-decouple` usage + +**Node.js/TypeScript:** + +- `convict` schema definitions +- `config` package usage +- `dotenv` + manual parsing +- `zod` or `joi` schemas for config validation + +**Java:** + +- `@ConfigurationProperties` — Spring Boot config binding classes +- `application.properties`, `application.yml` — Spring config files +- `@Value("${` — individual property injection +- Quarkus `application.properties` with `quarkus.` prefixes +- MicroProfile Config `@ConfigProperty` + +**Ruby:** + +- `config/` directory files in Rails (`database.yml`, `application.rb`) +- `Rails.application.config.` — Rails config access +- `YAML.load_file(` / `YAML.safe_load(` — YAML config loading +- `Figaro`, `dotenv-rails` — config management gems + +**General:** + +- Files named `config.yaml`, `config.json`, `config.toml`, `*.config.js`, + `settings.*` +- Example/template config files: `config.example.yaml`, `config.sample.*` +- `.env.example` files listing expected variables + +## Instructions + +1. Search for config struct definitions and config-loading code +2. For each config field, extract: field name (as it appears in the config + file), type, default value, validation rules +3. Cross-reference with example config files — do the examples match the + struct definitions? +4. Note which config file format(s) are supported (YAML, JSON, TOML, etc.) +5. 
Note the expected config file path(s) +6. Workflow: config consumed at startup is `usage`; config for deployment + tooling is `installation` + +## Output + +Produce your output following the inventory fragment format spec appended below. diff --git a/workflows/document-review/.claude/skills/code-check/references/discovery-data-models.md b/workflows/document-review/.claude/skills/code-check/references/discovery-data-models.md new file mode 100644 index 0000000..2ad04f9 --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/discovery-data-models.md @@ -0,0 +1,59 @@ +# Data Models Discovery Agent + +You are a discovery agent. Your job is to find ALL data model definitions: +CRDs, database schemas, ORM models, GraphQL schemas, and similar structured +data definitions. + +## Search Strategy + +**Kubernetes CRDs:** + +- Go type definitions with `+kubebuilder:` markers +- CRD YAML files in directories like `config/crd/`, `deploy/crds/`, + `crd/bases/` +- `controller-gen` markers: `+kubebuilder:validation:`, + `+kubebuilder:default:` +- `SchemeBuilder.Register(` — type registration + +**Database migrations:** + +- SQL migration files: `migrations/`, `db/migrate/` +- `CREATE TABLE`, `ALTER TABLE` statements +- Migration tools: goose, migrate, alembic, knex, prisma + +**ORM models:** + +- Go: GORM model structs with `gorm:` tags +- Python: SQLAlchemy models, Django models (`models.Model`) +- Node.js: Sequelize, TypeORM, Prisma schema (`schema.prisma`) +- Java: JPA entities (`@Entity`, `@Table`), Hibernate mappings, + Spring Data repositories (`extends JpaRepository`) +- Ruby: ActiveRecord models (`< ApplicationRecord` or `< ActiveRecord::Base`), + associations (`belongs_to`, `has_many`, `has_one`), validations + +**GraphQL:** + +- `.graphql` or `.gql` schema files +- `type Query`, `type Mutation` definitions + +**Protobuf messages:** + +- `message` definitions in `.proto` files (data structures, not service RPCs) + +## Instructions + +1. 
Search for CRD definitions first (check both Go types and generated YAML + manifests) +2. Search for database migration files and ORM model definitions +3. For each model/CRD, extract: type name, fields (name, type, validation, + defaults), relationships +4. Note which fields are required vs. optional +5. Note code generation markers (these indicate the source of truth is the Go + types, not the generated YAML) +6. Workflow: CRDs and models are typically `usage`; migration tooling may be + `installation` + +## Output + +Produce your output following the inventory fragment format spec appended +below. Use the model/CRD type name as the ITEM_NAME. diff --git a/workflows/document-review/.claude/skills/code-check/references/discovery-env-vars.md b/workflows/document-review/.claude/skills/code-check/references/discovery-env-vars.md new file mode 100644 index 0000000..e69f129 --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/discovery-env-vars.md @@ -0,0 +1,83 @@ +# Environment Variables Discovery Agent + +You are a discovery agent. Your job is to find ALL environment variables that +this project reads, sets, or references. Search the codebase thoroughly using +Grep and Read tools. 
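A minimal sketch of the extraction this agent performs: the regexes below cover only a handful of the grep patterns in the search strategy and are assumptions for illustration, not an exhaustive matcher:

```python
import re

# Illustrative sketch: one regex per language family for a few of the
# env-var access patterns. A real scan covers many more patterns and
# reads the surrounding code for defaults and descriptions.

ENV_PATTERNS = [
    re.compile(r'os\.Getenv\("([A-Z0-9_]+)"\)'),       # Go
    re.compile(r'os\.getenv\(["\']([A-Z0-9_]+)'),      # Python
    re.compile(r'process\.env\.([A-Z0-9_]+)'),         # Node.js
    re.compile(r'ENV\[["\']([A-Z0-9_]+)'),             # Ruby
]

def extract_env_vars(source_text):
    """Return the sorted set of env var names referenced in source_text."""
    names = set()
    for pattern in ENV_PATTERNS:
        names.update(pattern.findall(source_text))
    return sorted(names)
```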
+ +## Search Strategy + +Search for these patterns (prioritize based on detected languages in the +project profile): + +**Go:** + +- `os.Getenv(` — direct env var reads +- `os.LookupEnv(` — env var reads with existence check +- `viper.BindEnv(` — Viper env bindings +- `viper.AutomaticEnv` — automatic env binding (check struct tags) +- `envconfig` or `env:` struct tags + +**Python:** + +- `os.environ` — dict-style access +- `os.getenv(` — with default +- `os.environ.get(` — with default +- Settings classes with `Field(env=` (Pydantic) + +**Node.js/TypeScript:** + +- `process.env.` — direct access +- `process.env[` — bracket access +- `dotenv` config loading + +**Java:** + +- `System.getenv("` — direct env var reads +- `System.getenv().get("` — map-style access +- `@Value("${` — Spring property/env injection +- `environment.getProperty("` — Spring Environment access + +**Ruby:** + +- `ENV['` or `ENV["` — direct access +- `ENV.fetch('` — access with required/default +- `ENV.key?('` — existence check + +**Dockerfiles:** + +- `ENV` directives +- `ARG` directives (build-time) + +**Kubernetes/Kustomize/Helm:** + +- `env:` blocks in deployment manifests +- `envFrom:` references to ConfigMaps/Secrets +- `valueFrom:` references + +**Shell scripts:** + +- Variable references `${VAR}` or `$VAR` +- `export` statements +- Default patterns `${VAR:-default}` + +**CI/CD (GitHub Actions, etc.):** + +- `env:` blocks +- `${{ env.VAR }}` or `${{ secrets.VAR }}` + +## Instructions + +1. Use Grep to search for the patterns above across the codebase +2. For each match, Read the surrounding code to extract: variable name, + default value, whether it's required, and a description +3. Exclude matches in test files (paths containing `test/`, `_test.go`, + `test_`, `.test.`, `__tests__`) UNLESS the variable also appears in + non-test code +4. Exclude matches in vendored code (`vendor/`, `node_modules/`) +5. 
Classify each variable's workflow (installation/usage/both) based on where + it's consumed +6. Produce output in the inventory fragment format provided below + +## Output + +Produce your output following the inventory fragment format spec appended below. diff --git a/workflows/document-review/.claude/skills/code-check/references/discovery-external-deps.md b/workflows/document-review/.claude/skills/code-check/references/discovery-external-deps.md new file mode 100644 index 0000000..21dff08 --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/discovery-external-deps.md @@ -0,0 +1,68 @@ +# External Dependencies Discovery Agent + +You are a discovery agent. Your job is to find ALL external services and +systems that this project connects to at runtime. + +## Search Strategy + +**Databases:** + +- Connection string patterns: `postgres://`, `mysql://`, `mongodb://`, + `redis://` +- Database driver imports: `database/sql`, `pgx`, `sqlalchemy`, `mongoose`, + `prisma` +- Connection setup code: `sql.Open(`, `pgxpool.Connect(`, `create_engine(` +- Java: `DriverManager.getConnection(`, `DataSource`, Spring + `spring.datasource.` properties +- Ruby: `database.yml` config, `ActiveRecord::Base.establish_connection` + +**HTTP clients:** + +- `http.Client`, `http.Get(`, `http.Post(` (Go) +- `requests.get(`, `httpx.` (Python) +- `fetch(`, `axios.` (Node.js) +- `OkHttpClient`, `HttpClient.newHttpClient(`, `RestTemplate`, + `WebClient.create(` (Java) +- `Faraday.new(`, `HTTParty.`, `Net::HTTP.` (Ruby) +- Look at what URLs/hosts they connect to + +**Message queues:** + +- Kafka: `kafka.NewReader`, `kafka.NewWriter`, `KafkaConsumer`, + `KafkaProducer` +- RabbitMQ: `amqp.Dial`, `pika.BlockingConnection` +- NATS: `nats.Connect` +- Redis Pub/Sub: `redis.Subscribe` + +**gRPC clients:** + +- `grpc.Dial(`, `grpc.NewClient(` — outbound gRPC connections +- Service client constructors + +**Cloud services:** + +- AWS SDK clients, GCP clients, Azure SDK usage +- S3, 
SQS, SNS, Pub/Sub, Blob Storage, etc. + +**Service discovery:** + +- Kubernetes service references (DNS names like `service.namespace.svc`) +- Consul, etcd, Eureka references + +## Instructions + +1. Search for database connection setup and client library imports +2. Search for HTTP client construction — trace what services they call +3. Search for message queue and cache connections +4. For each dependency, extract: service type, how it's configured (which env + vars or config fields), required vs. optional +5. Cross-reference with env vars and config fields — do NOT duplicate them. + Instead, reference them: "Configured via env var `DATABASE_URL`" +6. Workflow: dependencies needed at deploy time are `installation`; runtime + dependencies are `usage` + +## Output + +Produce your output following the inventory fragment format spec appended +below. Use the service type or name as the ITEM_NAME (e.g., `PostgreSQL`, +`Redis`, `KServe inference endpoint`). diff --git a/workflows/document-review/.claude/skills/code-check/references/discovery-file-io.md b/workflows/document-review/.claude/skills/code-check/references/discovery-file-io.md new file mode 100644 index 0000000..f3a8d74 --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/discovery-file-io.md @@ -0,0 +1,69 @@ +# File I/O Discovery Agent + +You are a discovery agent. Your job is to find files that this project reads +from or writes to that are relevant to users (output artifacts, logs, caches, +data files). 
+ +## Search Strategy + +**Go:** + +- `os.Create(`, `os.OpenFile(`, `os.WriteFile(` — file creation/writing +- `os.Open(`, `os.ReadFile(` — file reading +- `io.Copy(` to file destinations +- Log file configuration (e.g., `lumberjack`, `zap` file output) + +**Python:** + +- `open(` with write modes (`'w'`, `'a'`, `'wb'`) +- `pathlib.Path` write methods +- `shutil.copy`, `shutil.move` +- Logging `FileHandler` configuration + +**Node.js/TypeScript:** + +- `fs.writeFile`, `fs.createWriteStream` +- `fs.readFile`, `fs.createReadStream` + +**Java:** + +- `Files.write(`, `Files.newBufferedWriter(` — NIO file writing +- `FileOutputStream`, `BufferedWriter` — classic I/O +- `Files.readAllLines(`, `Files.newBufferedReader(` — NIO file reading + +**Ruby:** + +- `File.write(`, `File.open(` with write modes +- `IO.write(`, `IO.read(` +- `FileUtils.cp`, `FileUtils.mv` + +**General:** + +- Output directory configuration (CLI flags or env vars pointing to output + paths) +- Cache directory patterns (`~/.cache/`, `.cache/`, `tmp/`) +- Log file paths + +## Instructions + +1. Search for file write operations in application code (not tests, not build + scripts) +2. Focus on files that users would care about: output artifacts, reports, + logs, cache files, generated configs +3. EXCLUDE: internal temp files, test fixtures, build artifacts created by + Makefiles +4. For each file, extract: path or path pattern, read vs. write, file format, + purpose +5. Workflow: output files are typically `usage` + +## Scope + +Be selective. This category is inherently noisy. Only list items where: + +- The file path is user-configurable, OR +- The file is a meaningful output artifact, OR +- The file is documented (or should be documented) + +## Output + +Produce your output following the inventory fragment format spec appended below. 
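As a concrete illustration (the paths, line number, and description are hypothetical), a single file-I/O entry in that fragment format might look like:

```markdown
## File I/O

### Items

- **`artifacts/report.md`**
  - Type: file-path
  - Source: `internal/report/writer.go:88`
  - Required: no
  - Description: Consolidated findings report written at the end of a review run
  - Workflow: usage

### Confidence Notes

- "Found writes to `tmp/scratch.json` but could not confirm it is user-facing"
```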
diff --git a/workflows/document-review/.claude/skills/code-check/references/inventory-format.md b/workflows/document-review/.claude/skills/code-check/references/inventory-format.md new file mode 100644 index 0000000..8ec0e2b --- /dev/null +++ b/workflows/document-review/.claude/skills/code-check/references/inventory-format.md @@ -0,0 +1,45 @@ +# Inventory Fragment Format + +You MUST produce your output in exactly this format. The orchestrator parses +this to merge fragments from all discovery agents. + +## [Category Name] + +### Items + +For each discovered item, produce one entry: + +- **`ITEM_NAME`** + - Type: env-var | cli-flag | config-field | endpoint | data-model | + file-path | external-dep | build-target + - Source: `path/to/file.go:42` (exact file and line where the item is + defined or consumed) + - Default: `"value"` (if discoverable, otherwise omit this line) + - Required: yes | no | unknown + - Description: (extracted from code comments, help text, or inferred from + context) + - Workflow: installation | usage | both + +### Confidence Notes + +List any ambiguous or uncertain findings here. Do NOT include them as items +above. + +- "Found reference to X but could not confirm it is user-facing" +- "Y appears in test code only — excluded" + +## Rules + +- Only list items you have HIGH confidence are real. Precision over recall. +- Exclude test-only items unless they appear in documentation. +- Exclude vendored/generated code (`vendor/`, `node_modules/`, generated + files). +- Use the env var name as the canonical ITEM_NAME when the item is an + environment variable. 
+- Workflow tagging heuristics: + - `installation` — item appears in deployment manifests, Dockerfiles, CI + configs, setup scripts, Makefiles, Kustomize/Helm configs; consumed only + at build/deploy time + - `usage` — item is read at runtime in application code + - `both` — item appears in both installation and runtime contexts + - When unclear, default to `both` diff --git a/workflows/document-review/.claude/skills/controller/SKILL.md b/workflows/document-review/.claude/skills/controller/SKILL.md new file mode 100644 index 0000000..4aaff49 --- /dev/null +++ b/workflows/document-review/.claude/skills/controller/SKILL.md @@ -0,0 +1,215 @@ +--- +name: controller +description: Top-level workflow controller that manages phase transitions. +--- + +# Document Review Workflow Controller + +You are the workflow controller. Your job is to manage the document review +workflow by executing phases and handling transitions between them. + +## Phases + +1. **Scan** (`/scan`) — `.claude/skills/scan/SKILL.md` + Discover and catalog all documentation files in the target project. Produce + an inventory of what exists, its format, and its apparent audience. + +2. **Quality Review** (`/quality-review`) — `.claude/skills/quality-review/SKILL.md` + Deep-read each document, evaluating 7 quality dimensions. Classify findings + by severity. Identify target audience per document. + +3. **Code Check** (`/code-check`) — `.claude/skills/code-check/SKILL.md` + Cross-reference documentation against actual source code. Check that + documented APIs, CLI flags, config options, and behavior descriptions match + the implementation. + +4. **Report** (`/report`) — `.claude/skills/report/SKILL.md` + Consolidate all findings into a single deduplicated report grouped by + severity. + +5. **Jira** (`/jira`) — `.claude/skills/jira/SKILL.md` + Create a Jira epic from the report with child bugs and tasks for each + finding. Uses the Jira REST API via `curl`. + +6. 
**Full Review** (`/full-review`) + Run scan → quality-review + code-check (parallel) → report automatically, pausing + only for critical decisions. + +Phases can be skipped or reordered at the user's discretion. + +## Dependency Graph + +```text +scan ──┬──> quality-review (sub-agent) ──┬──> report + └──> code-check (sub-agent) ──┘ └──> jira +``` + +- **Scan** must run first — all other phases depend on the inventory. +- **Quality review** and **code check** are independent of each other. Both + read the inventory and write to separate findings files. They can run in + parallel as sub-agents. +- **Jira** reads from `artifacts/report.md` and requires a completed report. + +### Findings Files + +| Phase | Output | +|-------|--------| +| Quality Review | `artifacts/findings-quality-review.md` | +| Code Check | `artifacts/findings-code-check.md` | + +Report reads from all findings files (whichever exist). + +## How to Execute a Phase + +1. **Announce** the phase to the user before doing anything else, e.g., + "Starting the /scan phase." This is very important so the user knows the + workflow is progressing and learns about the commands. +2. **Read** the skill file from the list above +3. **Execute** the skill's steps directly — the user should see your progress +4. When the skill is done, use "Recommending Next Steps" below to offer options. +5. Present the skill's results and your recommendations to the user +6. **Stop and wait** for the user to tell you what to do next + +## Handling Multiple Commands + +When the user provides multiple commands in a single prompt (e.g., +`/scan /quality-review /report` or "run scan, quality-review, and report"), execute **all** +listed commands in order. This is equivalent to the user invoking each command +one after another — do not stop between them to ask what to do next. + +### How to process multiple commands + +1. **Parse** the full prompt and identify all commands mentioned +2. 
**Announce** the plan: "Running /scan → /quality-review → /report." +3. **Execute each command in sequence**, following the dependency graph: + - If a later command depends on an earlier one (e.g., `/quality-review` + needs `/scan`), execute them in order + - If commands are independent (e.g., `/quality-review` and `/code-check`), + run them in + parallel as sub-agents — same as during full-review +4. **Report combined results** at the end, after all commands have completed +5. **Then stop and wait** — recommend next steps as usual + +### Examples + +- `/scan /quality-review` → run scan, then quality-review, then present results +- `/scan /quality-review /code-check` → run scan, then quality-review + + code-check in parallel, then present results +- `/scan /quality-review /report` → run scan, then quality-review, then report, + then present results +- `/quality-review /report` → run quality-review (scan first if no inventory), + then report, then present results + +## Running Analysis Sub-Agents in Parallel + +When multiple analysis phases should run (e.g., during full-review, or when the +user requests several), use the Agent tool to launch them as parallel +sub-agents: + +1. **Announce** which sub-agents you're launching in parallel +2. **Spawn Agent calls simultaneously:** + - Agent (quality-review): Read `.claude/skills/quality-review/SKILL.md` and + execute it. Write output to `artifacts/findings-quality-review.md`. + - Agent (code-check): Read `.claude/skills/code-check/SKILL.md` and execute + it. Write output to `artifacts/findings-code-check.md`. +3. **Wait** for all agents to complete +4. **Summarize** the combined results to the user + +When running a single phase (e.g., user invokes only `/quality-review`), execute it +directly — no sub-agent needed. + +## Recommending Next Steps + +After each phase completes, present the user with **options** — not just one +next step. Use the typical flow as a baseline, but adapt to what actually +happened. 
+ +### Typical Flow + +```text +scan → quality-review + code-check (parallel) → report +``` + +### What to Recommend + +After presenting results, consider what just happened, then offer options that +make sense: + +**After scan:** + +- Recommend `/quality-review` — the natural next step +- Offer `/code-check` if documentation references lots of code (APIs, CLI flags) +- Mention that quality-review and code-check can run in parallel +- Offer `/full-review` if the user wants to run the entire pipeline at once + +**After quality-review:** + +- Recommend `/report` to consolidate all findings +- Offer `/code-check` for deeper accuracy checking against code + +**After code-check:** + +- Recommend `/report` to consolidate all findings + +**After report:** + +- Offer `/jira` to create Jira issues for tracking remediation +- The workflow may be complete if the report is the desired output + +**After jira:** + +- The workflow is typically complete + +**Going back** — sometimes earlier work needs revision: + +- New documents discovered → offer `/scan` again +- Need deeper accuracy checking → offer `/code-check` + +### How to Present Options + +Lead with your top recommendation, then list alternatives briefly: + +```text +Recommended next step: /quality-review — deep quality analysis of the 42 documents found. + +Other options: +- /code-check — cross-reference docs against source code (can run in parallel with quality-review) +- /full-review — run scan → quality-review + code-check → report automatically +``` + +## Executing a Full Review + +When the user invokes `/full-review`: + +1. Execute the **scan** phase — announce it, read the skill, run it +2. Launch **quality-review** and **code-check** as parallel sub-agents +3. Once both complete, execute the **report** phase +4. Present the final report to the user +5. 
Offer `/jira` as a follow-up option + +During full-review, only pause if: + +- The project repository cannot be found or accessed +- No documentation files are discovered +- A critical error prevents the review from continuing + +## Starting the Workflow + +When the user first provides a project path, repository URL, or description: + +1. Execute the **scan** phase +2. After scanning, present results and wait + +If the user invokes a specific command (e.g., `/quality-review`), execute that phase +directly — don't force them through earlier phases. However, if a phase is +invoked without an existing inventory, run `/scan` first and inform the user. + +## Rules + +- **Never auto-advance.** Always wait for the user between phases (except + during full-review or when the user provides multiple commands in a single + prompt). +- **Recommendations come from this file, not from skills.** Skills report + findings; this controller decides what to recommend next. +- **Respect the target project.** This workflow reviews external project + documentation. Do not modify the target project's files. diff --git a/workflows/document-review/.claude/skills/jira/SKILL.md b/workflows/document-review/.claude/skills/jira/SKILL.md new file mode 100644 index 0000000..8818c0c --- /dev/null +++ b/workflows/document-review/.claude/skills/jira/SKILL.md @@ -0,0 +1,252 @@ +--- +name: jira +description: Create a Jira epic from the documentation review report with child bugs and tasks for each finding. +--- + +# Jira Skill + +You are creating Jira issues from the documentation review report. Each finding +becomes a child issue under a parent epic so the team can track remediation. + +## Prerequisites + +- `artifacts/report.md` must exist (run `/report` first) +- `pandoc` must be installed (used to convert Markdown to Jira wiki markup). + Install via `pip install pypandoc_binary` (bundles the pandoc binary). 
+- The following environment variables must be set for Jira API access: + - `JIRA_URL` — base URL of the Jira instance (e.g., `https://myorg.atlassian.net`) + - `JIRA_EMAIL` — email address for authentication + - `JIRA_API_TOKEN` — API token for authentication +- A Jira project key must be provided as an argument or via the `JIRA_PROJECT` + environment variable + +## Inputs + +Every field below must be explicitly specified by the user before issue creation +begins. A field can be set to a value or explicitly marked as "none" / left +blank — but the user must state this. Do not assume defaults or skip fields that +were not mentioned. If any field is missing from the arguments and environment +variables, prompt the user for it. + +Arguments passed to the `/jira` command take precedence over environment +variables. + +| Parameter | Argument | Env Var | +|-----------|----------|---------| +| Project key | first positional arg | `JIRA_PROJECT` | +| Component | `component=` | `JIRA_COMPONENT` | +| Labels | `labels=` | `JIRA_LABELS` | +| Team | `team=` | `JIRA_TEAM` | +| Initial status | `status=` | `JIRA_INITIAL_STATUS` | + +The **Initial status** is the workflow transition to apply after creating each +issue (e.g., `Backlog`, `New`, `To Do`). If set, transition each issue to this +status immediately after creation. If explicitly left blank, issues stay in the +workflow's default initial state. + +## Process + +### Step 1: Read the Report + +Read `artifacts/report.md`. Extract: + +- **Header metadata**: date, repository, commit SHA, instruction +- **Summary table**: dimension x severity counts and ratings +- **All findings**: each finding under its severity heading (Critical, High, + Medium, Low) + +For each finding, capture: + +- **ID**: the severity-prefixed number (C1, H1, M1, L1, etc.) +- **Title**: the heading text after the ID +- **Dimension**: the quality dimension (Accuracy, Completeness, etc.) 
+- **File**: the file path and line reference +- **Source**: which phase detected it (quality-review, code-check) +- **Issue**: description of what is wrong +- **Evidence**: quoted text or output demonstrating the problem +- **Fix**: the suggested correction (if present) + +### Step 2: Resolve Jira Metadata + +1. Check arguments first, then fall back to environment variables +2. Parse comma-separated labels into a list +3. Verify that `JIRA_URL`, `JIRA_EMAIL`, and `JIRA_API_TOKEN` are set. If any + are missing, stop and tell the user which variables need to be configured. +4. Check that **every** metadata field has been explicitly addressed — either + set to a value or explicitly left blank. If any field is unspecified (not + provided as an argument and not set as an environment variable), stop and + ask the user for the missing fields. List each missing field by name so the + user can provide a value or confirm it should be left blank. +5. Confirm the full plan with the user before creating issues: + - Project key + - Component (or "none") + - Labels (or "none", in addition to `acp:document-review`) + - Team (or "none") + - Initial status transition (or "default") + - Number of findings to file + +### Step 3: Create the Epic + +Use the Jira REST API via `curl` to create an Epic. All API calls target +`$JIRA_URL/rest/api/2/issue`. + +**Authentication:** Pipe credentials via stdin using `curl --config -` so they +do not appear as command-line arguments (which would be visible in `ps` output). +Use this pattern for every `curl` call: + +```bash +printf 'user = "%s:%s"\n' "$JIRA_EMAIL" "$JIRA_API_TOKEN" | \ + curl --config - \ + -X POST \ + -H "Content-Type: application/json" \ + -d '...' \ + "$JIRA_URL/rest/api/2/issue" +``` + +#### Convert Markdown to Jira wiki markup + +Jira REST API v2 description fields use wiki markup, not Markdown. Convert +content with pandoc before sending (pipe the Markdown through +`pandoc -f markdown -t jira`). 
This applies to the epic description and every
+child issue description in Step 4.
+
+#### Create the Epic with:
+
+- **Project**: the resolved project key
+- **Issue type**: `Epic`
+- **Epic Name**: `Documentation Review: <date>` (use the date from the report
+  header)
+- **Summary**: `Documentation Review Report of <date> for <repos>` where
+  `<date>` is the report date and `<repos>` is the repository or repositories
+  listed in the report header (if multiple repos, join them with commas)
+- **Description**: Only the header portion of `artifacts/report.md` — everything
+  before the `## Summary` heading (title, date, repositories, and instruction).
+  Convert this extract to Jira wiki markup via pandoc before sending.
+- **Labels**: merge `acp:document-review` with any user-provided labels
+- **Component**: set if provided
+
+Record the created epic key (e.g., `PROJ-123`).
+
+#### Attach the full report
+
+After creating the epic, attach `artifacts/report.md` to it using:
+
+```bash
+printf 'user = "%s:%s"\n' "$JIRA_EMAIL" "$JIRA_API_TOKEN" | \
+  curl --config - \
+  -X POST \
+  -H "X-Atlassian-Token: no-check" \
+  -F "file=@artifacts/report.md" \
+  "$JIRA_URL/rest/api/2/issue/<EPIC-KEY>/attachments"
+```
+
+This keeps the full report accessible from the epic without cluttering the
+description field.
+
+If an initial status was specified, transition the epic to that status using
+`POST $JIRA_URL/rest/api/2/issue/<EPIC-KEY>/transitions` — first GET the
+available transitions to find the matching transition ID, then POST it.
+
+### Step 4: Create Child Issues
+
+For each finding in the report, create a child issue under the epic.
+
+#### Classify as Bug or Task
+
+Decide per-finding based on the issue content:
+
+- **Bug** — the finding impacts external users or customers.
Examples:
+  - Incorrect instructions that would cause users to fail
+  - Missing steps that block user workflows
+  - Broken commands or wrong API references users would encounter
+  - Misleading descriptions of user-facing behavior
+  - Dead links in user-facing documentation
+
+- **Task** — the finding impacts developers or is a maintenance/housekeeping
+  item. Examples:
+  - Internal inconsistencies between developer docs
+  - Structural or organizational improvements
+  - Stale references in contributor-facing documentation
+  - Style or formatting issues
+  - Missing code comments or developer-facing docs
+
+#### Build the Issue
+
+Use the Jira REST API via `curl` (same `--config -` auth pattern as Step 3)
+for each finding:
+
+- **Project**: the resolved project key
+- **Issue type**: `Bug` or `Task` (per classification above)
+- **Parent**: the epic key from Step 3
+- **Summary**: `<ID>. <Title>` (e.g., `C1. Incorrect CLI flag in quickstart`)
+- **Description**: structured as follows (convert to Jira wiki markup via
+  `pandoc -f markdown -t jira` before sending):
+
+```
+## Issue
+
+<issue text from the finding>
+
+## Why This Is a Problem
+
+<reasoning about why this matters, synthesized from the dimension, evidence,
+and context — explain the impact on users or developers>
+
+**Evidence:**
+<quoted evidence from the finding>
+
+**Affected file:** <file path and line>
+**Quality dimension:** <dimension>
+**Detected by:** <source phase(s)>
+
+## Expected Outcome
+
+<what needs to be true when this is resolved — derived from the Fix field if
+present, otherwise describe the desired end state based on the issue>
+```
+
+- **Labels**: merge `acp:document-review` with any user-provided labels
+- **Component**: set if provided
+
+After creating each child issue, if an initial status was specified, transition
+it using the same approach as the epic (GET available transitions, then POST).
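The transition step can be sketched as a small helper. This is a minimal sketch, assuming `jq` is available; the function name and variables are illustrative, not part of the skill:

```bash
# Look up the transition ID matching a status name, then apply it.
# Uses the same stdin --config auth pattern as the other curl calls.
transition_issue() {
  local issue_key="$1" status_name="$2" tid
  tid=$(printf 'user = "%s:%s"\n' "$JIRA_EMAIL" "$JIRA_API_TOKEN" | \
    curl --config - -s "$JIRA_URL/rest/api/2/issue/$issue_key/transitions" | \
    jq -r --arg name "$status_name" \
      '.transitions[] | select(.name == $name) | .id')
  printf 'user = "%s:%s"\n' "$JIRA_EMAIL" "$JIRA_API_TOKEN" | \
    curl --config - -s -X POST \
      -H "Content-Type: application/json" \
      -d "{\"transition\": {\"id\": \"$tid\"}}" \
      "$JIRA_URL/rest/api/2/issue/$issue_key/transitions"
}
```

Invoke it after each create call, e.g. `transition_issue PROJ-124 "To Do"`; quoting the status name handles multi-word statuses.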
+ +### Step 5: Report Results + +After all issues are created, present a summary to the user: + +``` +## Jira Issues Created + +**Epic:** <EPIC-KEY> — Documentation Review: <date> + +| ID | Type | Key | Summary | +|----|------|-----|---------| +| C1 | Bug | PROJ-124 | Incorrect CLI flag in quickstart | +| H1 | Task | PROJ-125 | Inconsistent config key names | +| ... | ... | ... | ... | + +**Total:** N issues (X bugs, Y tasks) under <EPIC-KEY> +``` + +## Error Handling + +- If a `curl` call fails or returns an error response for a specific finding, + log the error, continue with remaining findings, and report failures at + the end +- If the epic creation fails, stop and report the error — do not attempt to + create child issues without a parent epic +- If any of `JIRA_URL`, `JIRA_EMAIL`, or `JIRA_API_TOKEN` are not set, stop + and tell the user which variables are missing +- If `pandoc` is not installed, install it with `pip install pypandoc_binary`. + If that fails, stop and tell the user. + +## Output + +This skill does not write to `artifacts/`. Its output is the set of Jira +issues created via the Jira REST API. + +## When This Phase Is Done + +Report the summary table to the user and re-read the controller +(`.claude/skills/controller/SKILL.md`) for next-step guidance. diff --git a/workflows/document-review/.claude/skills/quality-review/SKILL.md b/workflows/document-review/.claude/skills/quality-review/SKILL.md new file mode 100644 index 0000000..c164f94 --- /dev/null +++ b/workflows/document-review/.claude/skills/quality-review/SKILL.md @@ -0,0 +1,285 @@ +--- +name: quality-review +description: Deep quality review of the entire documentation corpus. +--- + +# Quality Review Documentation Skill + +You are performing a deep quality review of a project's documentation corpus. +Your job is to evaluate each document against 7 quality dimensions and produce +a structured findings report. 
+ +## Your Role + +Read every document in the inventory, evaluate its quality, identify issues, +and classify findings by severity and dimension. This is a docs-only review — +you are not cross-referencing against source code (that's `/code-check`). + +## Critical Rules + +- **Read the inventory first.** This phase requires `/scan` to have been run. + If `artifacts/inventory.md` does not exist, inform the user + and recommend running `/scan` first. +- **Process one file at a time.** Review each document completely, write its + findings to the output file, then move to the next. Do not accumulate + findings across all documents in memory — this causes context window buildup + that leads to skipped documents and incomplete coverage. +- **Be specific.** Every finding must cite the exact file, section, or line + where the issue occurs. +- **Show evidence.** Every finding must include a direct quote from the + document — the actual text that is wrong, missing, or unclear. Use a + fenced code block or blockquote. Never describe evidence indirectly + (e.g., "the section does not mention X"); instead quote what the section + *does* say and explain why it falls short. +- **Don't nitpick style.** Focus on content quality over formatting + preferences. Minor markdown formatting issues are not worth reporting unless + they affect readability. +- **Assess audience fit.** Identify who each document is written for and + evaluate whether the content is appropriate for that audience. + +## Quality Dimensions + +Evaluate each document against these 7 dimensions: + +1. **Accuracy** — Are statements factually correct? 
This includes: + - Version numbers, command syntax, or flag names that contradict other + documents or the project's own configuration files + - Empty or broken code blocks that claim to show a command or output + - Placeholder values (e.g., `username`, `password`) presented as if they + are real values a user should copy verbatim + - Statements that contradict what the reader would actually experience + Flag claims that seem suspect based on what you can determine from the + documentation alone. Note that `/code-check` does deeper code + cross-referencing, but obvious factual errors belong here — don't defer + everything to code-check. + +2. **Completeness** — Does the document cover its topic fully? Are there + obvious omissions? Are prerequisites listed? Are edge cases mentioned? + +3. **Consistency** — Does terminology match across documents? Are formatting + conventions consistent? Do factual claims agree between documents? + +4. **Clarity** — Is the language clear and unambiguous? Are concepts explained + before they're used? Is the level of detail appropriate for the target + audience? + +5. **Currency** — Are there references to deprecated features, old version + numbers, dead links, or outdated screenshots? Does the content reflect the + current state of the project? + +6. **Structure** — Are headings logical and hierarchical? Does information flow + in a sensible order? Is the document navigable? Are there appropriate + cross-references? + +7. **Examples** — Check every code block and inline code sample: + - **Presence.** Are code examples provided where the reader would need + them? A configuration reference with no example snippet, or a CLI + description with no invocation, is a High finding. + - **Syntax validity.** Does each code block look syntactically valid for + the language shown (per the language tag or surrounding context)? 
Flag + obviously broken syntax — unclosed brackets, unterminated strings, + invalid YAML indentation — as a Critical finding. Empty code blocks + (a fenced block with no content) that claim to show a command or + output are Critical (Accuracy), not just an Examples issue. + - **Placeholder clarity.** Are user-supplied values clearly distinguished + from literal values? Flag values that look real but are meant to be + replaced (e.g., `192.168.1.100` as a placeholder IP, `my-password` as + a credential) without any indication to substitute. Severity: + Low (Clarity). + - **Command completeness.** Do CLI commands include all required arguments + and flags to actually run? A command missing a required positional + argument or a mandatory flag is a Critical finding. + - **Explanation.** Are non-obvious code constructs explained? An example + using advanced syntax, flags, or patterns that the target audience + would not recognize should have accompanying explanation. Severity: + Low. + +## Finding Severities + +Classify each finding by impact: + +- **Critical** — Incorrect information, broken commands, or missing steps that would block users or cause them to take wrong actions +- **High** — Significant gaps, contradictions, or outdated content that degrades the user experience +- **Medium** — Issues that cause confusion but have workarounds or limited impact +- **Low** — Minor improvements to clarity, structure, or presentation + +## Reviewer Lens + +Different document types need different scrutiny. After identifying what a +document is, adopt the appropriate lens to focus your evaluation. + +### Detecting document type + +Classify each document as one of: + +- **Procedural** — contains numbered steps, shell commands, installation + instructions, tutorials, quickstarts, or getting-started guides. The reader + intends to follow along and do something. +- **Conceptual** — explains how something works, describes architecture, or + provides background context. 
The reader is trying to understand, not act. +- **Reference** — catalogs options, parameters, API fields, or configuration + keys. The reader looks up specific facts. +- **Mixed** — combines explanatory sections with procedural steps (e.g., an + architecture overview followed by a deployment guide). Apply both lenses to + the relevant sections. + +Use the inventory's "Has Instructions" field as a starting signal, but verify +by reading the document — some docs tagged "No" contain implicit instructions, +and some tagged "Yes" are primarily conceptual with minor code snippets. + +### Developer lens (procedural and reference docs) + +Read as an implementer who will follow every step and run every command. Ask: + +- Can I actually follow this from start to finish? +- Are the prerequisites complete before I start? +- Will these commands run as written? +- What happens when something goes wrong? +- Can I verify each step succeeded? + +This lens triggers the **procedural document checks** below and emphasizes the +**Examples** and **Completeness** dimensions. Findings from this lens are +typically high-severity (Critical, High) because they directly block users. + +### Architect lens (conceptual docs) + +Read as someone building a mental model of the system. Ask: + +- **Internal consistency.** Does the description of components and their + relationships hold together? Flag contradictions between prose and diagrams, + or between different sections of the same document. Severity: Critical + (Accuracy) or Medium (Consistency). +- **Abstraction level.** Is the depth right for the audience? Flag + implementation details that belong in a procedure rather than a concept. + Flag content that is too abstract for a developer who needs concrete + guidance. Severity: Low (Clarity). +- **"Why" context.** Does the document explain *why*, not just *what*? + Configuration options should explain when you would use them and what + trade-offs are involved. 
Architecture descriptions should explain design + decisions, not just list components. Severity: High (Completeness) if + entirely absent, Low (Clarity) if present but shallow. +- **Onward paths.** Are there cross-references where a reader would need to + go elsewhere to complete a task or deepen understanding? A concept that + describes a feature but never links to the procedure for using it is a High + finding (Structure). + +This lens emphasizes the **Accuracy**, **Clarity**, and **Structure** +dimensions. + +## Procedural Document Checks + +When a document contains executable instructions (tagged "Has Instructions: +Yes" in the inventory, or containing numbered steps, shell commands, or +installation procedures), apply these additional checks. These catch the +highest-impact documentation gaps — issues that leave users stuck with no +recourse when something goes wrong. + +### Failure path coverage + +- **Verification steps.** Every command or action that changes state should + have a way to confirm it succeeded. Flag procedures where a create/apply/ + install step has no corresponding get/describe/status check. Example: an + `oc apply -f manifest.yaml` with no `oc get` to confirm the resource exists + is a High finding (Completeness). +- **Error guidance.** Flag procedures that describe only the happy path with no + mention of what to do if a step fails. At minimum, common failure modes + should be acknowledged. A procedure with 5+ steps and zero error handling is + a High finding (Completeness). +- **Undocumented intermediate state.** Flag procedures where failure at step N + would leave the system in a state the documentation never describes. If a + user gets halfway through and something breaks, can they recover or roll + back? Missing rollback/undo guidance is a High finding (Completeness). +- **Prerequisite placement.** Flag prerequisites that first appear mid-procedure + rather than at the top. 
A tool, credential, or permission that is needed at + step 5 but not mentioned until step 5 is a High finding (Structure). + +### Cross-step consistency + +Check that variable names, resource names, file paths, and output values chain +correctly across steps. If step 2 creates a resource named `my-app` but step 4 +references `myapp`, flag it as Critical (Accuracy). + +Classify procedural findings using the same severity and dimension system as +all other findings. The most common classification is High (Completeness) for +missing verification, error handling, and rollback guidance. + +## Process + +### Step 1: Load the Inventory + +Read `artifacts/inventory.md` to understand what documents +exist and how they're organized. Build a list of all document paths to review. + +### Step 2: Initialize the Findings File + +Write the file header to `artifacts/findings-quality-review.md` using +the template at `templates/findings-quality-review.md`. Fill in the date, repository, commit SHA, +and instruction. Leave the summary table counts as `N` — you will update them +at the end. + +Write the `## Findings by Document` heading. The file is now ready to receive +per-document findings incrementally. + +### Step 3: Review Each Document (One at a Time) + +Process documents **one at a time** to prevent context window buildup. For each +document: + +1. **Read** the document fully +2. **Identify** the target audience (end user, developer, operator, general) +3. **Detect document type** — classify as procedural, conceptual, reference, or + mixed (see Reviewer Lens above). This determines which lens to apply. +4. **Assess audience fit:** + - Is the assumed knowledge level appropriate for the target audience? + - Are prerequisites clearly stated? + - Is jargon defined or avoided based on audience? + - Does the document serve its apparent purpose (tutorial vs reference vs + explanation)? +5. **Evaluate** against each of the 7 quality dimensions +6. 
**Apply lens-specific checks:** + - **Procedural / reference / mixed docs** → apply the developer lens and + procedural document checks (failure path coverage, cross-step consistency). + These are the highest-value findings for procedural docs. + - **Conceptual / mixed docs** → apply the architect lens (internal + consistency, abstraction level, "why" context, onward paths). + - For mixed docs, apply both sets of checks to the relevant sections. +7. **Record** findings with: + - **Severity**: Critical, High, Medium, or Low + - **Dimension**: Which quality dimension is affected + - **File**: File path and line in backticks (e.g., `docs/guide.md:42`) + - **Issue**: What the problem is + - **Evidence**: Quote the problematic text + - **Fix**: The correction, if known with high confidence (omit if unsure) + - **Audience impact**: How this affects the target audience +8. **Append** the document's section to `artifacts/findings-quality-review.md` + immediately — including the document heading, audience assessment, and all + findings (or an explicit note that no issues were found). Do not hold + findings in memory across documents. + +If a document has no issues, still append its section with a note: +`No issues identified.` + +### Step 4: Cross-Document Consistency Check + +After all individual documents have been reviewed, check for cross-document +issues: + +- Contradictory statements between documents +- Inconsistent terminology (same concept called different names) +- Duplicated content that could drift out of sync +- Missing cross-references between related documents +- Inconsistent formatting conventions + +Append the `## Cross-Document Issues` section to the findings file. + +### Step 5: Update Summary Table + +Read the findings file you have built. Count findings for each +dimension × severity cell. Update the summary table at the top of +`artifacts/findings-quality-review.md` with the actual counts, +replacing the placeholder `N` values. 
Include row totals (per dimension) and +column totals (per severity). + +## Output + +- `artifacts/findings-quality-review.md` diff --git a/workflows/document-review/.claude/skills/report/SKILL.md b/workflows/document-review/.claude/skills/report/SKILL.md new file mode 100644 index 0000000..9334205 --- /dev/null +++ b/workflows/document-review/.claude/skills/report/SKILL.md @@ -0,0 +1,112 @@ +--- +name: report +description: Consolidate all findings into a single deduplicated report grouped by severity. +--- + +# Report Skill + +You are consolidating all documentation review findings into a single +authoritative report. This report is the primary deliverable of the workflow — +it contains every finding, deduplicated and grouped by severity. + +## Your Role + +Read all findings files, merge them into one consolidated list, remove +duplicates, and produce a report grouped by severity (Critical → High → Medium +→ Low). Every finding from every phase must appear in the report unless it +duplicates another. + +## Critical Rules + +- **Findings must exist.** If neither + `artifacts/findings-quality-review.md` nor + `artifacts/findings-code-check.md` exists, inform the user and + recommend running `/quality-review` first. +- **Include every finding.** This is not a summary — it is the consolidated + record. Every finding from every phase must appear unless it is a duplicate. +- **Deduplicate across phases.** The same issue may be reported by + quality-review and code-check. Merge these into a single finding, noting which phases detected + it in the **Source** field. +- **Group by severity.** Findings are organized under `## Critical`, + `## High`, `## Medium`, and `## Low` headings, in that order. +- **Number findings within each group.** Use a severity prefix: C1, C2, … + for Critical; H1, H2, … for High; M1, M2, … for Medium; L1, L2, … for Low. 
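The per-group numbering rule can be sketched in a single pass — a minimal sketch assuming the findings have first been flattened into a severity-sorted TSV (a hypothetical intermediate; the actual report is Markdown):

```shell
# Hypothetical intermediate: one finding per line, SEVERITY<TAB>TITLE,
# already sorted Critical -> High -> Medium -> Low.
printf 'Critical\tBroken install command\nCritical\tWrong default value\nHigh\tMissing prerequisites\n' > /tmp/findings.tsv

# Assign C1, C2, ... / H1, H2, ... IDs, keeping an independent counter
# per severity group (first letter of the severity is the prefix).
awk -F'\t' 'BEGIN { OFS = "\t" }
{
  prefix = substr($1, 1, 1)   # C, H, M, or L
  count[prefix]++
  print prefix count[prefix], $2
}' /tmp/findings.tsv
```

Each severity keeps its own counter, so numbering restarts at 1 in every group.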
+ +## Process + +### Step 1: Load Findings + +Read whichever findings files exist: + +- `artifacts/findings-quality-review.md` (from `/quality-review`) +- `artifacts/findings-code-check.md` (from `/code-check`) + +Also read `artifacts/inventory.md` for context. + +### Step 2: Merge and Deduplicate + +Collect every finding from all files into a single list. For each finding, +record its severity, dimension, location, description, evidence, and which +phase produced it (quality-review, code-check). + +Identify duplicates — findings that describe the same issue in the same +location. When two or more phases report the same issue: + +- Keep the version with the strongest evidence +- Use the highest severity if they differ +- Merge the **Source** field to list all phases that detected it + +### Step 3: Compute Statistics + +Build a dimension × severity cross-tabulation from the deduplicated list: for +each of the 7 dimensions (Accuracy, Completeness, Consistency, Clarity, +Currency, Structure, Examples), count findings at each severity level (Critical, +High, Medium, Low). Include row and column totals. + +For each dimension, assign a qualitative rating: + +- **Good** — Few or no issues +- **Fair** — Some issues but generally acceptable +- **Poor** — Significant issues that need attention + +### Step 4: Note Skipped Phases + +Check whether optional phases were skipped and note the reason in the report: + +- If `artifacts/findings-code-check.md` does not exist, note that code verification was not + performed. + +This helps readers understand the scope of the review. + +### Step 5: Write the Report + +Follow the template at `templates/report.md` exactly. Write to +`artifacts/report.md`. + +Write every finding under its severity heading. 
Each finding must include: + +- **Dimension** — which quality dimension is affected +- **File** — file path and line in backticks (e.g., `docs/guide.md:42`) +- **Source** — which phase(s) detected it (quality-review, code-check) +- **Issue** — what the problem is +- **Evidence** — quoted text, code snippet, or command output +- **Fix** — the correction, if known with high confidence (omit if unsure) + +Omit any severity section that has zero findings (e.g., if there are no +Critical findings, omit the `## Critical` section entirely). + +## Output + +- `artifacts/report.md` + +## When This Phase Is Done + +Report to the user: + +- Total findings (after deduplication) +- Breakdown by severity +- The top 3 most impactful findings +- Recommended next step + +Then **re-read the controller** (`.claude/skills/controller/SKILL.md`) for +next-step guidance. diff --git a/workflows/document-review/.claude/skills/scan/SKILL.md b/workflows/document-review/.claude/skills/scan/SKILL.md new file mode 100644 index 0000000..728d02b --- /dev/null +++ b/workflows/document-review/.claude/skills/scan/SKILL.md @@ -0,0 +1,154 @@ +--- +name: scan +description: Discover and catalog all documentation in the target project. +--- + +# Scan Documentation Skill + +You are surveying a project to discover and catalog all documentation. This is +the first phase of the document review workflow. Your job is to find everything +that constitutes documentation and produce a structured inventory. + +## Your Role + +Discover all documentation files, classify them by type and audience, and +produce an inventory that subsequent phases will use as their input. This is +discovery only — do not evaluate quality yet. + +## Critical Rules + +- **Do not evaluate documentation quality.** This phase is discovery and + cataloging only. +- **Be thorough.** Documentation lives in many places — don't just check + `docs/`. 
+- **Respect scope.** If the user specified particular files or directories, + limit the scan to those. Otherwise, scan everything. + +## Process + +### Step 1: Locate the Project + +Check if the project repository is already accessible: + +```bash +# Check common locations +ls /workspace/repos/ 2>/dev/null +ls /workspace/artifacts/ 2>/dev/null +``` + +- If the repo is already present (e.g., mounted via `add_dirs`), note its path +- If not and the user provided a URL, clone it: + +```bash +gh repo clone OWNER/REPO /workspace/repos/REPO +``` + +- If neither, ask the user where the project is located + +### Step 2: Discover Documentation Files + +Search for documentation using multiple strategies: + +**Standard documentation files (project root):** + +- `README*` (README.md, README.rst, README.txt, etc.) +- `CONTRIBUTING*` +- `LICENSE*`, `NOTICE*`, `AUTHORS*` +- `SECURITY*`, `CODE_OF_CONDUCT*` +- `CLAUDE.md`, `AGENTS.md` (AI-specific docs) + +**Documentation directories:** + +- `docs/`, `doc/`, `documentation/` +- `wiki/`, `guides/`, `tutorials/` +- `examples/`, `samples/` +- `man/`, `manpages/` +- `api/`, `api-docs/` + +**Formats to find:** + +- Markdown (`.md`, `.mdx`) +- reStructuredText (`.rst`) +- Plain text (`.txt`) +- AsciiDoc (`.adoc`, `.asciidoc`) +- HTML documentation (`.html` in doc directories) + +**Other documentation sources:** + +- Inline API documentation (JSDoc, Javadoc, docstrings — note their presence + but don't catalog every file) +- Configuration file comments (note if config files have substantial inline + docs) +- Makefile/Dockerfile/CI comments (note if significant) +- GitHub-specific: `.github/ISSUE_TEMPLATE/`, `.github/PULL_REQUEST_TEMPLATE/` + +Use Glob for pattern-based discovery: + +``` +**/*.md +**/*.rst +**/*.adoc +docs/**/* +doc/**/* +``` + +### Step 3: Catalog Each Document + +For each documentation file found, **read at least the first 40 lines** to +determine its topic and audience — do not guess from the filename alone. 
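The mechanical half of this pass — path, format, and size, with the first heading pulled from the opening lines as a topic hint — can be sketched as follows (a sketch assuming a POSIX-ish shell; topic and audience still require actually reading the file):

```shell
# Enumerate doc files and emit path, format, and line count, plus the
# first Markdown heading from the opening 40 lines as a topic hint.
find . -type f \( -name '*.md' -o -name '*.rst' -o -name '*.adoc' \) \
  -not -path '*/.git/*' -not -path '*/node_modules/*' |
while IFS= read -r path; do
  lines=$(wc -l < "$path" | tr -d ' ')
  title=$(head -n 40 "$path" | grep -m 1 '^#' || true)
  printf '%s\tformat=%s\tlines=%s\ttitle=%s\n' "$path" "${path##*.}" "$lines" "$title"
done
```

The `title` field is only a hint for Markdown files — judging topic and audience still means reading the opening content.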
+Record: + +- **Path**: Relative path from project root +- **Format**: md, rst, txt, adoc, html, etc. +- **Size**: Approximate line count (use `wc -l` or count while reading) +- **Topic**: What the document covers (determined from title, headings, and + opening content — not inferred from path) +- **Audience**: Who this appears to be written for: + - End users (installation, usage, configuration) + - Developers (API reference, architecture, contributing) + - Operators (deployment, monitoring, troubleshooting) + - General (README, license, changelog) +- **Has executable instructions**: Whether the doc contains code blocks with + shell commands, installation steps, or usage examples + +### Step 4: Identify Documentation Structure + +Assess the overall documentation organization: + +- Is there a documentation site framework (MkDocs, Sphinx, Docusaurus, + GitBook, etc.)? +- Is there a table of contents or navigation structure? +- Is documentation flat (all in root) or hierarchical (organized in + directories)? +- Are there cross-references between documents? + +### Step 5: Note Preliminary Gaps + +Without doing a deep review, flag obvious gaps: + +- Project has a public API but no API reference docs +- No contributing guide despite accepting PRs +- No installation guide despite requiring setup +- Documentation exists but is clearly outdated (e.g., references very old + versions) + +### Step 6: Write the Inventory + +Follow the template at `templates/inventory.md` exactly. Write the inventory to +`artifacts/inventory.md`. + +## Output + +- `artifacts/inventory.md` + +## When This Phase Is Done + +Report your findings: + +- How many documentation files were discovered +- Key categories and their coverage +- Any obvious gaps noted +- Whether executable instructions were found + +Then **re-read the controller** (`.claude/skills/controller/SKILL.md`) for +next-step guidance. 
diff --git a/workflows/document-review/CLAUDE.md b/workflows/document-review/CLAUDE.md new file mode 100644 index 0000000..e208337 --- /dev/null +++ b/workflows/document-review/CLAUDE.md @@ -0,0 +1,62 @@ +# Document Review Workflow + +Systematic documentation review through these phases: + +1. **Scan** (`/scan`) — Discover and catalog all documentation files +2. **Quality Review** (`/quality-review`) — Deep quality analysis against 7 dimensions +3. **Code Check** (`/code-check`) — Cross-reference docs against source code +4. **Report** (`/report`) — Consolidate all findings into a single deduplicated report +5. **Jira** (`/jira`) — *(Optional)* Create Jira epic with child bugs/tasks from the report + +### Convenience Commands + +- **Full Review** (`/full-review`) — Run scan → quality-review + code-check → report in one shot + +Quality review and code check are independent — they can run in parallel as sub-agents +after scan completes. Each writes to its own findings file. + +The workflow controller lives at `.claude/skills/controller/SKILL.md`. +It defines how to execute phases, recommend next steps, and handle transitions. +Phase skills are at `.claude/skills/{name}/SKILL.md`. +Output files are written to `artifacts/`. + +## Quality Dimensions + +1. **Accuracy** — Do docs match reality? +2. **Completeness** — Are there gaps or missing docs? +3. **Consistency** — Do docs agree with each other? Is terminology uniform? +4. **Clarity** — Is language clear for the target audience? +5. **Currency** — Dead links, deprecated references, old versions? +6. **Structure** — Logical organization, navigation, headings? +7. **Examples** — Code samples present and correct? 
+ +## Finding Severities + +- **Critical** — Incorrect information, broken commands, or missing steps that would block users or cause them to take wrong actions +- **High** — Significant gaps, contradictions, or outdated content that degrades the user experience +- **Medium** — Issues that cause confusion but have workarounds or limited impact +- **Low** — Minor improvements to clarity, structure, or presentation + +## Principles + +- Show evidence — quote the doc, cite file:line, don't make vague claims +- Be specific about what's wrong and why it matters +- Don't nitpick style when content is the real issue +- Assess audience-appropriateness for each document +- Flag uncertainty rather than guessing + +## Hard Limits + +- Do not modify the project's documentation — this workflow is read-only +- Do not make assumptions about intended behavior — flag for verification +- Read-only access to project code unless explicitly told otherwise +- Never execute commands that could be destructive to the host system + +## Working With the Project + +This workflow gets deployed into different projects. Respect the target project: + +- Understand the project's documentation conventions before critiquing +- Evaluate against the project's own standards, not arbitrary preferences +- Consider the project's maturity level when assessing completeness +- When in doubt about intended behavior, check git history and existing code diff --git a/workflows/document-review/README.md b/workflows/document-review/README.md new file mode 100644 index 0000000..fa88aec --- /dev/null +++ b/workflows/document-review/README.md @@ -0,0 +1,161 @@ +# Document Review Workflow + +Systematic workflow for reviewing a project's documentation — assessing quality, completeness, accuracy, and consistency, then generating actionable findings. 
+ +## Features + +- Auto-discovers all documentation files across the project +- Evaluates docs against 7 quality dimensions +- Classifies findings by severity for prioritized action +- Cross-references documentation claims against source code using parallel discovery agents +- Runs quality-review and code-check in parallel as sub-agents for speed +- Creates Jira epics with child bugs/tasks from the report via Jira REST API +- Supports a full-review mode for one-shot review + +## Quick Start + +### Loading the Workflow + +In ACP, select **Document Review** from the workflow list, then open or point the session at the project repositories whose documentation you want to review. + +### One-Shot Review + +Run `/full-review` — this executes scan → quality-review + code-check (parallel) → report automatically. Results are written to `artifacts/`. + +### Step-by-Step Review + +For more control, run phases individually: + +1. `/scan` — discover and catalog all docs +2. `/quality-review` — deep quality analysis (runs as sub-agent) +3. `/code-check` — cross-reference docs against source code (runs as sub-agent, parallel with quality-review) +4. `/report` — consolidate findings into a single report +5. 
`/jira` — create Jira issues from the report (optional, requires Jira credentials) + +### Environment Variables + +Only required if using `/jira`: + +| Variable | Required | `/jira` Argument | Description | +|----------|----------|------------------|-------------| +| `JIRA_URL` | Yes | — | Base URL of the Jira instance (e.g., `https://myorg.atlassian.net`) | +| `JIRA_EMAIL` | Yes | — | Email address for authentication | +| `JIRA_API_TOKEN` | Yes | — | API token for authentication | +| `JIRA_PROJECT` | No | first positional arg | Default project key | +| `JIRA_COMPONENT` | No | `component=<name>` | Default component name | +| `JIRA_LABELS` | No | `labels=<a,b,c>` | Default comma-separated labels | +| `JIRA_TEAM` | No | `team=<name>` | Default team name | +| `JIRA_INITIAL_STATUS` | No | `status=<name>` | Workflow transition after creation (e.g., `Backlog`) | + +> **Warning:** `/jira` uses `curl` to call the Jira REST API directly because the Atlassian MCP does not support creating epics or issues. Credentials are piped via stdin (`curl --config -`) to avoid exposure in process listings, but your `JIRA_API_TOKEN` will still be visible in the session history. 
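The stdin-config pattern from the warning above looks roughly like this — a sketch only; the endpoint path and payload file are illustrative, not the exact request `/jira` sends:

```shell
# Emit the sensitive parts of the request as a curl config document, so
# the token is read from stdin instead of appearing in `ps` output.
jira_curl_config() {
  printf 'url = "%s/rest/api/2/issue"\n' "$JIRA_URL"
  printf 'user = "%s:%s"\n' "$JIRA_EMAIL" "$JIRA_API_TOKEN"
  printf 'header = "Content-Type: application/json"\n'
}

# Illustrative usage with a prepared payload file:
#   jira_curl_config | curl --silent --config - --request POST --data @issue.json
```

Only the config text travels through the pipe; the command line that shows up in process listings is just `curl --config -`.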
+ +## Directory Structure + +```text +workflows/document-review/ +├── .ambient/ +│ └── ambient.json # Workflow configuration +├── .claude/ +│ ├── commands/ +│ │ ├── scan.md # Discover and catalog docs +│ │ ├── quality-review.md # Quality review +│ │ ├── code-check.md # Code cross-referencing +│ │ ├── report.md # Consolidated report +│ │ ├── jira.md # Jira issue creation +│ │ └── full-review.md # Full pipeline +│ └── skills/ +│ ├── controller/SKILL.md # Phase orchestration +│ ├── scan/SKILL.md # Document discovery +│ ├── quality-review/SKILL.md # Quality evaluation +│ ├── code-check/SKILL.md # Source code verification +│ ├── code-check/references/ # Discovery agent prompts +│ ├── report/SKILL.md # Report generation +│ └── jira/SKILL.md # Jira issue creation +├── templates/ # Output format templates +│ ├── inventory.md +│ ├── findings-quality-review.md +│ ├── findings-code-check.md +│ └── report.md +├── CLAUDE.md # Behavioral context +└── README.md # This file +``` + +## Commands + +| Command | Purpose | +|---------|---------| +| `/scan` | Discover and catalog all documentation in the project | +| `/quality-review` | Deep quality review against 7 dimensions | +| `/code-check` | Cross-reference docs against source code | +| `/report` | Consolidate all findings into a deduplicated report | +| `/jira` | Create Jira epic with child bugs/tasks from the report (optional) | +| `/full-review` | Run scan → quality-review + code-check → report in one shot | + +## Workflow Phases + +```text +scan ──┬──> quality-review (sub-agent) ──┬──> report ──> jira (optional) + └──> code-check (sub-agent) ──┘ +``` + +Quality review and code check are independent after scan — they run in parallel as sub-agents, each writing to its own findings file. + +### 1. Scan + +Discovers all documentation files using glob patterns. Catalogs each file by path, format, topic, audience, and whether it contains executable instructions. Produces an inventory. + +### 2. 
Quality Review + +Deep-reads each document evaluating 7 quality dimensions: accuracy, completeness, consistency, clarity, currency, structure, and examples. Identifies target audience per document and assesses audience-appropriateness. Classifies findings by severity. + +### 3. Code Check + +Runs a three-stage pipeline to systematically verify documentation against source code: + +1. **Reconnaissance** — Detects languages, frameworks, and components in the project +2. **Discovery** — Dispatches up to 8 parallel agents to scan source code for env vars, CLI args, config schemas, API endpoints, data models, file I/O, external deps, and build targets +3. **Verification** — Cross-references the discovered code inventory against documentation to find inaccuracies, undocumented features, and stale references + +### 4. Report + +Consolidates all findings from quality review and code check into a single deduplicated report. Findings are grouped by severity (Critical → Low) with a dimension × severity summary table. Reads from whichever findings files exist. + +### 5. Jira (Optional) + +Creates a Jira epic from the report with a child bug or task for each finding. Bugs are for findings that impact external users or customers. Tasks are for developer-facing or maintenance items. Uses the Jira REST API via `curl` (requires `JIRA_URL`, `JIRA_EMAIL`, and `JIRA_API_TOKEN` environment variables). Accepts project key, component, labels, team, and initial status as arguments or environment variables. + +### 6. Full Review + +Runs scan → quality-review + code-check (parallel) → report in one shot, pausing only for critical decisions. + +## Quality Dimensions + +| Dimension | What It Checks | +|-----------|---------------| +| Accuracy | Do docs match reality? | +| Completeness | Are there gaps or missing docs? | +| Consistency | Do docs agree with each other? | +| Clarity | Is language clear for the audience? | +| Currency | Dead links, deprecated refs? 
| +| Structure | Logical organization? | +| Examples | Code samples present and correct? | + +## Finding Severities + +| Severity | Definition | +|----------|-----------| +| Critical | Incorrect information, broken commands, or missing steps that block users | +| High | Significant gaps, contradictions, or outdated content | +| Medium | Issues that cause confusion but have workarounds | +| Low | Minor improvements to clarity, structure, or presentation | + +## Output Artifacts + +Output files are written to `artifacts/`: + +| File | Content | +|------|---------| +| `artifacts/inventory.md` | Documentation file catalog | +| `artifacts/findings-quality-review.md` | Detailed findings by document | +| `artifacts/findings-code-check.md` | Code verification findings | +| `artifacts/report.md` | Consolidated findings report | diff --git a/workflows/document-review/templates/findings-code-check.md b/workflows/document-review/templates/findings-code-check.md new file mode 100644 index 0000000..8b423ac --- /dev/null +++ b/workflows/document-review/templates/findings-code-check.md @@ -0,0 +1,97 @@ +# Code Verification Findings + +**Date:** [date] +**Repository:** [repository] @ [commit SHA] +**Instruction:** [task and goal description] + +--- + +**Source files checked:** N + +## Verification Summary + +| Result | Count | +|--------|-------| +| Match | N | +| Mismatch | N | +| Partial | N | +| Undocumented | N | +| Stale | N | + +| Severity | Count | +|----------|-------| +| Critical | N | +| High | N | +| Medium | N | +| Low | N | + +## Findings by Document + +### [path/to/document.md] + +#### Verification Finding 1 + +- **Severity:** Critical +- **Dimension:** Accuracy +- **Doc location:** README.md, "Configuration" section, line 85 +- **Code location:** src/config.py:42 +- **Documented claim:** "Set `MAX_RETRIES` to configure retry count (default: 3)" +- **Actual behavior:** Default is 5, not 3. 
See `DEFAULT_MAX_RETRIES = 5` +- **Evidence:** + + ```python + DEFAULT_MAX_RETRIES = 5 # src/config.py:42 + ``` + +- **Fix:** Change "default: 3" to "default: 5". + +## Undocumented Features + +### Feature 1 + +- **Severity:** High +- **Dimension:** Completeness +- **Code location:** src/cli.py:120 +- **Issue:** The `--dry-run` flag exists in code but is not documented +- **Evidence:** + + ```python + parser.add_argument('--dry-run', help='Preview changes without applying') + ``` + +## Stale References + +### Stale 1 + +- **Severity:** High +- **Dimension:** Accuracy +- **Doc location:** docs/guide.md:45 +- **Issue:** References `--legacy-mode` flag which no longer exists in code +- **Evidence:** Grep for `legacy-mode` across codebase returns no matches +- **Fix:** Remove the `--legacy-mode` reference. + +## Low-Confidence Findings + +- "Found `DB_HOST` in docs and `MAAS_DB_HOST` in code — possible match but + names differ enough to be uncertain" +- "Config key `timeout` appears in example but may be dynamically constructed" + +## Inventory Coverage + +| Category | Status | Items Found | +|----------|--------|-------------| +| Env vars | completed | N | +| CLI args | completed | N | +| Config schema | skipped (no config libraries) | - | +| API schema | completed | N | +| Data models | completed | N | +| File I/O | skipped (no file operations) | - | +| External deps | completed | N | +| Build/deploy | completed | N | + +[Note any agents that failed and why] + +## Code Inventory + +[The complete merged inventory from Stage 2, organized by workflow and +category] diff --git a/workflows/document-review/templates/findings-quality-review.md b/workflows/document-review/templates/findings-quality-review.md new file mode 100644 index 0000000..5b63095 --- /dev/null +++ b/workflows/document-review/templates/findings-quality-review.md @@ -0,0 +1,53 @@ +# Documentation Review Findings + +**Date:** [date] +**Repository:** [repository] @ [commit SHA] +**Instruction:** [task and 
goal description] + +--- + +## Summary + +| Dimension | Critical | High | Medium | Low | Total | +|-----------|----------|------|--------|-----|-------| +| Accuracy | N | N | N | N | N | +| Completeness | N | N | N | N | N | +| Consistency | N | N | N | N | N | +| Clarity | N | N | N | N | N | +| Currency | N | N | N | N | N | +| Structure | N | N | N | N | N | +| Examples | N | N | N | N | N | +| **Total** | **N** | **N** | **N** | **N** | **N** | + +## Findings by Document + +### [path/to/document.md] + +**Audience:** [end user | developer | operator | general] +**Audience fit:** [appropriate | needs adjustment — explanation] + +#### Finding 1 + +- **Severity:** Critical +- **Dimension:** Accuracy +- **File:** `path/to/document.md:42` +- **Issue:** The documented command uses a flag that doesn't exist. +- **Evidence:** `pip install --global mypackage` + (`--global` is not a valid pip flag) +- **Fix:** Change `--global` to `--user` or remove the flag entirely. + +#### Finding 2 + +... + +### [path/to/another-doc.md] + +... + +## Cross-Document Issues + +### Issue 1 + +- **Files:** `doc-a.md`, `doc-b.md` +- **Issue:** ... +- **Evidence:** ... diff --git a/workflows/document-review/templates/inventory.md b/workflows/document-review/templates/inventory.md new file mode 100644 index 0000000..1cfddd9 --- /dev/null +++ b/workflows/document-review/templates/inventory.md @@ -0,0 +1,53 @@ +# Documentation Inventory + +**Date:** [date] +**Repository:** [repository] @ [commit SHA] +**Instruction:** [task and goal description] + +--- + +**Scope:** [full project | specific paths] + +## Summary + +- **Total documentation files:** N +- **Formats:** md (X), rst (Y), ... +- **Total approximate size:** N lines / N words +- **Documentation structure:** [flat | hierarchical | doc-site framework] + +## Documents by Category + +### User Documentation + +| Path | Format | Size | Topic | Has Instructions | +|------|--------|------|-------|-----------------| +| ... | ... | ... | ... 
| Yes/No | + +### Developer Documentation + +| Path | Format | Size | Topic | Has Instructions | +|------|--------|------|-------|-----------------| +| ... | ... | ... | ... | Yes/No | + +### Operational Documentation + +| Path | Format | Size | Topic | Has Instructions | +|------|--------|------|-------|-----------------| +| ... | ... | ... | ... | Yes/No | + +### Project Metadata + +| Path | Format | Size | Topic | +|------|--------|------|-------| +| LICENSE | ... | ... | ... | +| ... | ... | ... | ... | + +## Documentation Structure + +[Description of how docs are organized, any doc-site framework, navigation] + +## Preliminary Gaps + +- [Gap 1] +- [Gap 2] +- ... diff --git a/workflows/document-review/templates/report.md b/workflows/document-review/templates/report.md new file mode 100644 index 0000000..4c23995 --- /dev/null +++ b/workflows/document-review/templates/report.md @@ -0,0 +1,73 @@ +# Documentation Review Report + +**Date:** [date] +**Repository:** [repository] @ [commit SHA] +**Instruction:** [task and goal description] + +--- + +## Summary + +| Dimension | Critical | High | Medium | Low | Total | Rating | +|-----------|----------|------|--------|-----|-------|--------| +| Accuracy | N | N | N | N | N | Good/Fair/Poor | +| Completeness | N | N | N | N | N | Good/Fair/Poor | +| Consistency | N | N | N | N | N | Good/Fair/Poor | +| Clarity | N | N | N | N | N | Good/Fair/Poor | +| Currency | N | N | N | N | N | Good/Fair/Poor | +| Structure | N | N | N | N | N | Good/Fair/Poor | +| Examples | N | N | N | N | N | Good/Fair/Poor | +| **Total** | **N** | **N** | **N** | **N** | **N** | | + +## Critical + +### C1. [title] + +- **Dimension:** Accuracy +- **File:** `path/to/document.md:42` +- **Source:** quality-review | code-check +- **Issue:** [what is wrong] +- **Evidence:** [quoted text or output] +- **Fix:** [correction, if known with high confidence] + +### C2. [title] + +... + +## High + +### H1. 
[title] + +- **Dimension:** Completeness +- **File:** `path/to/document.md:85` +- **Source:** quality-review | code-check +- **Issue:** [what is wrong] +- **Evidence:** [quoted text or output] + +### H2. [title] + +... + +## Medium + +### M1. [title] + +... + +## Low + +### L1. [title] + +... + +## Phases Not Run + +[Note any optional phases that were skipped and why. Remove this section if all +phases were executed.] + +- **Code Check:** [reason — e.g., "not requested"] + +## Next Steps + +- Run `/jira` to create Jira issues for tracking remediation +- Run `/code-check` for deeper code cross-referencing (if not already done)