refactor: extract PolicyGenerator trait and collapse parallel engine fields (#79)
Conversation
The /pr command was missing a step to check for related open issues, so PRs created via the command were not getting Fixes/Closes references and issues had to be closed manually. Port the missing step from spicy/ea.
Phase B (#71) part 1. Replace the parallel `Finding::rego_stub` / `Finding::cedar_stub` fields with a single `policy_outputs: Vec<PolicyOutput>` keyed by `PolicyEngine`, so adding an Nth engine no longer means adding an Nth optional field. Backward compatibility is preserved on the read side via a `FindingShim` (`#[serde(from = ...)]`): legacy findings JSON with `rego_stub` / `cedar_stub` continues to deserialize and the legacy fields fold into `policy_outputs`. The serialize side emits only `policy_outputs` — downstream consumers reading the legacy keys will need to update. `PolicyEngine` moves from `cli` to `types` (it's a domain type, not a CLI concern) and gains `Serialize` / `Deserialize`. `cli` re-exports it so existing imports keep working.
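A minimal sketch of the shim fold described above, with the legacy-fold and explicit-wins semantics spelled out. Names and field shapes are assumed from the description, not the crate's actual definitions, and the serde wiring (`#[serde(from = "FindingShim")]`) is omitted so the fold logic stands alone:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PolicyEngine { Rego, Cedar }

#[derive(Debug, PartialEq)]
struct PolicyOutput { engine: PolicyEngine, content: String }

// Tolerant read-side shape: accepts both the legacy keys and the new one.
struct FindingShim {
    rego_stub: Option<String>,         // legacy key
    cedar_stub: Option<String>,        // legacy key
    policy_outputs: Vec<PolicyOutput>, // new key; wins on conflict
}

struct Finding { policy_outputs: Vec<PolicyOutput> }

impl From<FindingShim> for Finding {
    fn from(shim: FindingShim) -> Self {
        let mut outs = shim.policy_outputs;
        // Fold a legacy field in only when its engine is not already present,
        // so explicit `policy_outputs` entries take precedence.
        let legacy = [
            (PolicyEngine::Rego, shim.rego_stub),
            (PolicyEngine::Cedar, shim.cedar_stub),
        ];
        for (engine, stub) in legacy {
            if let Some(content) = stub {
                if !outs.iter().any(|o| o.engine == engine) {
                    outs.push(PolicyOutput { engine, content });
                }
            }
        }
        Finding { policy_outputs: outs }
    }
}
```

The same fold also dedupes by construction: a legacy key can never produce a second entry for an engine that already has one.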
Phase B (#71) part 2. Replace `PatternRule::rego_template` / `PatternRule::cedar_template` with a single `policy_templates: Vec<PolicyTemplate>` collection and a `template_for(engine)` helper, keyed off the canonical `PolicyEngine` enum. Rule TOML parsing accepts both shapes:

- Legacy: `[rule.rego_template]` / `[rule.cedar_template]` blocks
- New: `[[rule.policy_templates]]` array entries with `engine` + `template` fields

Legacy blocks fold into `policy_templates` at parse time; explicit `[[rule.policy_templates]]` entries win on conflict, matching the `Finding` shim contract. No rule files change in this commit — that migration lands separately so the parser change can stand on its own. MCP `get_rule` and the rule-resource JSON now expose templates as `policy_templates: [{engine, template}]`; the rule-list summary exposes `template_engines: [...]`. Downstream consumers reading the old keys will need to update. `mcp::tools::PolicyEngineArg` was a duplicate of `types::PolicyEngine`; collapse onto the canonical enum now that it carries serde derives.
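A sketch of the engine-keyed lookup this commit introduces — a flat collection plus a first-match `template_for(engine)` helper. Struct shapes are assumed from the description, and the TOML parsing itself is omitted:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PolicyEngine { Rego, Cedar }

struct PolicyTemplate { engine: PolicyEngine, template: String }

struct PatternRule { policy_templates: Vec<PolicyTemplate> }

impl PatternRule {
    // Adding an Nth engine now means adding an entry, not a field.
    fn template_for(&self, engine: PolicyEngine) -> Option<&str> {
        self.policy_templates
            .iter()
            .find(|t| t.engine == engine)
            .map(|t| t.template.as_str())
    }
}
```

Because lookups are first-match, parse-time dedup by engine (added in a later commit on this PR) is what makes duplicate entries harmless.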
Phase B (#71) part 3. Introduce a `PolicyGenerator` trait in `src/policy.rs` carrying the operations every engine needs along the extract and MCP paths: `engine`, `validate`, `render_template`, `default_stub`, `wrap_by_confidence`, and `group_and_generate`. Add `RegoGenerator` and `CedarGenerator` implementations as thin adapters over the existing engine modules, plus a `generator_for(engine)` factory. Collapse parallel pipelines onto the trait:

- `extract::execute` becomes generic over `&dyn PolicyGenerator`. The two near-identical `extract_rego` / `extract_cedar` functions collapse into a single `run_extract` that pre-fills missing per-engine outputs, groups, validates, and writes — engine-specific code lives only inside the trait impls.
- `mcp::tools::suggest_policy` and `validate_policy` dispatch through the trait instead of carrying their own engine `match` arms.

Update CEDAR_SUPPORT.md decision log to record Phase B landing.
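A minimal sketch of the trait-plus-statics dispatch pattern described here, trimmed to two methods (the real trait carries more) and with hypothetical stub bodies:

```rust
trait PolicyGenerator: Sync {
    fn engine_name(&self) -> &'static str;
    fn default_stub(&self, rule_id: &str) -> String;
}

struct RegoGenerator;  // zero-sized adapter
struct CedarGenerator; // zero-sized adapter

impl PolicyGenerator for RegoGenerator {
    fn engine_name(&self) -> &'static str { "rego" }
    fn default_stub(&self, rule_id: &str) -> String {
        format!("package policies.{rule_id}\n\ndefault allow := false\n")
    }
}

impl PolicyGenerator for CedarGenerator {
    fn engine_name(&self) -> &'static str { "cedar" }
    fn default_stub(&self, rule_id: &str) -> String {
        format!("// {rule_id}\nforbid(principal, action, resource);\n")
    }
}

static REGO: RegoGenerator = RegoGenerator;
static CEDAR: CedarGenerator = CedarGenerator;

// Module-level statics let the factory hand out trait objects
// without a per-call Box allocation.
fn generator_for(engine: &str) -> &'static dyn PolicyGenerator {
    match engine {
        "cedar" => &CEDAR,
        _ => &REGO,
    }
}
```

Callers like `run_extract` then take `&dyn PolicyGenerator` and never branch on the engine themselves.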
Address review nits on the PolicyGenerator extraction (#71):

- Restore the capitalized engine name in extract output by adding `PolicyEngine::human_name()` alongside `as_str()`; `Display` now delegates to `as_str()` so the lowercase canonical form stays the single source.
- Drop the dead default-stub fallback in rego/cedar grouping — the extract pipeline already pre-fills `policy_output` for every finding, so grouping can treat its presence as an invariant. Tests updated to mirror the pre-fill.
- Replace `match` arms that derived "rego"/"cedar" string keys in MCP tools with `engine.as_str()`.
- Dedupe `policy_outputs` (`FindingShim`) and `policy_templates` (`parse_rule`) by engine on construction so duplicate explicit entries can't sneak past first-match lookups.
- `generator_for` now returns `&'static dyn PolicyGenerator` backed by module-level statics, dropping the per-call `Box` allocation.
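The first bullet's naming split can be sketched as follows (a minimal reconstruction of the described contract, not the crate's exact code): `as_str()` stays the single lowercase canonical form, `human_name()` capitalizes for prose, and `Display` delegates to `as_str()`:

```rust
use std::fmt;

#[derive(Clone, Copy)]
enum PolicyEngine { Rego, Cedar }

impl PolicyEngine {
    // Lowercase canonical form: serde keys, file extensions, dispatch.
    fn as_str(self) -> &'static str {
        match self {
            PolicyEngine::Rego => "rego",
            PolicyEngine::Cedar => "cedar",
        }
    }
    // Capitalized form for human-facing output only.
    fn human_name(self) -> &'static str {
        match self {
            PolicyEngine::Rego => "Rego",
            PolicyEngine::Cedar => "Cedar",
        }
    }
}

impl fmt::Display for PolicyEngine {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(self.as_str()) // Display mirrors the canonical form
    }
}
```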
📝 Walkthrough

Introduces an engine-agnostic policy generation abstraction: adds `PolicyGenerator` plus Rego/Cedar generators, and collapses per-engine stub fields into engine-keyed collections.

Changes: Policy Abstraction & Data Model Consolidation
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI
    participant Scanner as Scanner
    participant Extract as ExtractCmd
    participant Generator as Generator
    participant Validator as Validator
    participant FS as Filesystem
    CLI->>Scanner: run scan / execute_query -> Findings w/ policy_outputs
    Scanner-->>CLI: findings
    CLI->>Extract: run_extract(findings, engine)
    Extract->>Generator: group_and_generate(findings)
    Generator-->>Extract: Vec<PolicyFile>
    Extract->>Validator: validate(policy_file.content)
    Validator-->>Extract: ValidationResult
    Extract->>FS: write_policy_file(output_path, content)
    FS-->>Extract: write result / error
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 5 passed
Review Summary
This pull request successfully implements Phase B of the Cedar/Rego unification work. The refactoring extracts a clean PolicyGenerator trait abstraction and collapses parallel engine-specific fields into unified collections, significantly reducing code duplication.
Key Strengths
Architecture: The PolicyGenerator trait provides an excellent abstraction layer with zero-sized generators served as static references, avoiding unnecessary allocations while maintaining clean dispatch semantics.
Backward Compatibility: The deserialization shims (FindingShim and legacy TOML template parsing) ensure existing persisted findings and rule files continue to load without forced migration, with explicit policy_outputs taking precedence over legacy fields.
Test Coverage: Comprehensive test suite with 453 passing tests covers all critical paths including legacy field migration, deduplication, and serialization behavior.
Security: Path traversal protection in write_policy_file uses proper canonicalization with TOCTOU awareness.
Verification
- ✅ All tests pass (`cargo test --all-features`)
- ✅ Clean clippy output
- ✅ Code formatting verified
- ✅ Breaking changes clearly documented
- ✅ No critical defects identified
The implementation is production-ready and ready to merge.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/commands/extract.rs (1)
133-154: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Reject symlink leaves before writing generated policy files.

The parent containment check is not sufficient here. Line 154 will still follow an existing symlink at `output_path`, so a leaf like `generated/foo.rego -> /tmp/outside` passes the `canonical_parent` check and escapes the output directory.

Minimal hardening:

```diff
 fn write_policy_file(output_path: &Path, content: &str, canonical_output_dir: &Path) -> Result<()> {
     let parent = output_path.parent().ok_or_else(|| {
         ZiftError::General(format!(
             "output path '{}' has no parent",
             output_path.display()
         ))
     })?;
     std::fs::create_dir_all(parent)?;
     let canonical_parent = parent.canonicalize().map_err(|e| {
         ZiftError::General(format!(
             "failed to resolve parent dir '{}': {e}",
             parent.display()
         ))
     })?;
     if !canonical_parent.starts_with(canonical_output_dir) {
         return Err(ZiftError::General(format!(
             "output path '{}' escapes output directory '{}'",
             output_path.display(),
             canonical_output_dir.display()
         )));
     }
+    if let Ok(meta) = std::fs::symlink_metadata(output_path) {
+        if meta.file_type().is_symlink() {
+            return Err(ZiftError::General(format!(
+                "refusing to write through symlink '{}'",
+                output_path.display()
+            )));
+        }
+    }
     std::fs::write(output_path, content)?;
     Ok(())
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/commands/extract.rs` around lines 133-154: write_policy_file currently only verifies the canonicalized parent but can still follow a symlink at the final leaf; add a pre-write symlink check on output_path to reject any existing symlink leaves. Before calling std::fs::write, call std::fs::symlink_metadata(output_path) (or output_path.symlink_metadata()), and if it returns Ok(meta) and meta.file_type().is_symlink(), return a ZiftError::General indicating the output path is a symlink/escapes the output directory; keep the existing parent canonicalization check as-is.

src/mcp/tools.rs (1)
443-478: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Keep MCP template vars in sync with the scanner path.

This helper still only emits the raw literal slots plus `roles` / `plans`. It never derives `roles_set` / `cedar_roles_set` or normalizes comma-delimited role lists, so `suggest_rego` / `suggest_policy` with `rule_id` can render broken output for rules like ASP.NET `Roles = "Admin,Manager"`, even though the structural scan now handles that case.
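The normalization this comment asks for can be sketched roughly as below — a hypothetical helper (name and output format assumed, not the crate's actual `build_template_vars` code) that splits a comma-delimited `Roles` literal and renders a set-literal string:

```rust
// Turn a raw comma-delimited roles literal like `Admin,Manager` into a
// set-literal string like `{"Admin", "Manager"}` for template slots.
fn roles_set(raw: &str) -> String {
    let items: Vec<String> = raw
        .split(',')
        .map(|r| format!("\"{}\"", r.trim()))   // trim whitespace around each role
        .filter(|r| r != "\"\"")                // drop empty segments from stray commas
        .collect();
    format!("{{{}}}", items.join(", "))
}
```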
🧹 Nitpick comments (5)
.agents/commands/pr.md (1)
15-19: ⚡ Quick win

Clarify the plan file matching algorithm and edge case handling.
The instruction to "match by issue number or feature name" is underspecified. Consider clarifying:
- How should the agent match? By filename pattern (e.g., `*-896-*`)? By searching file content?
- What should happen if no plan file is found?
- What if multiple files match?
- How should the agent handle plan files that don't contain any checklist items?
📝 Suggested clarification
```diff
-2. **Move completed plan to done** (if applicable):
-   - Check `plans/todo/` for a plan file related to this branch (match by issue number or feature name)
-   - If found, check whether all checklist items (`- [ ]`) are complete (`- [x]`)
-   - If all complete: `mkdir -p plans/done && git mv plans/todo/<file> plans/done/<file>`
-   - Commit: `git commit -m "docs: move completed plan to plans/done/"`
-   - If not all complete, leave it in `plans/todo/`
+2. **Move completed plan to done** (if applicable):
+   - Check `plans/todo/` for a plan file related to this branch:
+     - Extract issue number from branch name (e.g., `fix/896-buffer-import` → `896`)
+     - Look for files matching `plans/todo/*896*.md` or containing the branch feature name
+   - If exactly one file is found:
+     - Check whether all checklist items (`- [ ]`) are complete (`- [x]`)
+     - If all checklist items complete (or no checklists exist): `mkdir -p plans/done && git mv plans/todo/<file> plans/done/<file>`
+     - Commit: `git commit -m "docs: move completed plan to plans/done/"`
+   - If no file found or multiple matches, skip this step
+   - If checklist items remain incomplete, leave in `plans/todo/`
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.agents/commands/pr.md around lines 15-19: clarify and tighten the plan-file matching and edge-case handling. When scanning plans/todo/, look for files that match either a filename pattern containing the issue/feature token (token present anywhere in the filename) or whose file content contains the token; if exactly one match is found, proceed with checklist evaluation; if none are found, log "no plan file found" and do nothing; if multiple matches are found, list them and abort (require human disambiguation) rather than moving any file. For checklist evaluation in the matched file (look for "- [ ]" and "- [x]" markers), treat files with zero checklist items as "no checklist" and do not move them; only when all checklist items are present and all are "- [x]" perform mkdir -p plans/done && git mv plans/todo/<file> plans/done/<file> and commit with the message "docs: move completed plan to plans/done/"; include clear logging at each decision point.

src/lib.rs (1)
22-22: ⚡ Quick win

Keep the stable API rustdoc list in sync with the new export.

`policy` is now a public module, but the crate-level "Stable public API" list still omits it, so the docs are now stale.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/lib.rs` at line 22: add the new public module to the crate-level "Stable public API" rustdoc list — update the module list in the crate root documentation to include `policy` so the stable API docs stay in sync with `pub mod policy`; locate the crate-level doc comment that enumerates stable exports and insert `policy` in the same format as the other items.

src/mcp/resources.rs (1)
108-121: ⚡ Quick win

Add an explicit MCP resource test assertion for `policy_templates`.

This path now defines the canonical rule payload shape. Please assert `policy_templates` exists (and is an array) in `read_rule_returns_definition` so regressions on the renamed field are caught early.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/mcp/resources.rs` around lines 108-121: the test read_rule_returns_definition must assert the new canonical payload field policy_templates is present and an array; update the test to fetch the JSON response and assert that the top-level "policy_templates" key exists and its value is an array (non-null, JSON array type/length check) to catch regressions from the renamed field, referencing the code that constructs policy_templates in the resource.

src/rego/grouping.rs (1)
150-158: 💤 Low value

Consider adding a debug log when skipping findings without policy output.

The code correctly enforces the invariant that findings must arrive with pre-filled Rego output, but silently continuing when the output is missing could mask upstream bugs (e.g., a regression in the extract pipeline's pre-fill logic). A `tracing::debug!` or `tracing::warn!` would make such cases observable without failing the grouping.

💡 Optional: Add observability for skipped findings

```diff
 let Some(stub) = finding.policy_output(PolicyEngine::Rego) else {
+    tracing::debug!(
+        finding_id = %finding.id,
+        "skipping finding without Rego policy output"
+    );
     continue;
 };
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/rego/grouping.rs` around lines 150-158: add a tracing::debug! (or tracing::warn!) call right before the `continue` in the `let Some(stub) = finding.policy_output(PolicyEngine::Rego) else { ... }` branch to record that a finding was skipped due to missing Rego policy output; include PolicyEngine::Rego and a concise identifier for the finding in the log message so skipped cases are observable for upstream debugging while preserving the existing continue behavior.

src/mcp/tools.rs (1)
419-440: ⚡ Quick win

Make the Rego MCP tools true aliases of the generator path.

`suggest_rego` and `validate_rego` now reimplement the Rego branch instead of pinning `suggest_policy` / `validate_policy` to `PolicyEngine::Rego`. That leaves two codepaths to keep in sync, which is how helper drift like `build_template_vars()` slips in.

Also applies to: 505-512
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/mcp/tools.rs` around lines 419 - 440, The Rego-specific functions (suggest_rego and validate_rego) should not reimplement the Rego branch — instead make them thin aliases that call the existing generic functions with PolicyEngine::Rego; replace the body of suggest_rego to parse args as now, then forward to suggest_policy(parsed, PolicyEngine::Rego) (and do the same pattern for validate_rego calling validate_policy), ensuring you pass through parsed.confidence/parsed.category/parsed.code_snippet as needed; remove duplicated logic that manually searches ctx.rules, calls template_for, build_template_vars, render_template, generate_default_stub, and apply_confidence_wrapping so all behavior is centralized in suggest_policy/validate_policy.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/cedar/grouping.rs`:
- Around line 133-140: When a finding lacks Cedar policy_output (the let
Some(stub) = finding.policy_output(PolicyEngine::Cedar) check), don't silently
continue; instead emit a clear warning or error (e.g., using the crate
logger/tracing with identifying info from finding such as id/path) and ensure
that whatever counters or metrics are incremented for emitted
stubs/headers/files are NOT incremented for this finding (update the code paths
that currently assume the stub was rendered so totals reflect only actually
rendered stubs). Keep the existing invariant of not synthesizing a default stub,
but surface the missing data and adjust the emitted counts accordingly.
In `@src/types.rs`:
- Around line 94-99: set_policy_output currently updates only the first matching
PolicyOutput and leaves any duplicate entries for the same engine intact; change
it to remove all existing entries for that engine before inserting the new one
so the vector always has at most one PolicyOutput per PolicyEngine (e.g. use
self.policy_outputs.retain(|p| p.engine != engine) and then push the new
PolicyOutput { engine, content } inside set_policy_output).
---
Outside diff comments:
In `@src/commands/extract.rs`:
- Around line 133-154: The function write_policy_file currently only verifies
the canonicalized parent but can still follow a symlink at the final leaf; add a
pre-write symlink check on output_path to reject any existing symlink leaves.
Before calling std::fs::write, call std::fs::symlink_metadata(output_path) (or
output_path.symlink_metadata()), and if it returns Ok(meta) and
meta.file_type().is_symlink() return a ZiftError::General indicating the output
path is a symlink/escapes the output directory; keep the existing parent
canonicalization check as-is.
---
Nitpick comments:
In @.agents/commands/pr.md:
- Around line 15-19: Clarify and tighten the plan-file matching and edge-case
handling: when scanning plans/todo/ look for files that match either a filename
pattern containing the issue/feature token (e.g., token present anywhere in the
filename) or whose file content contains the token (search the file text); if
exactly one match is found proceed with checklist evaluation, if none are found
log/return "no plan file found" and do nothing, and if multiple matches are
found list them and abort (require human disambiguation) rather than moving any
file. For checklist evaluation in the matched file (look for "- [ ]" and "- [x]"
markers), treat files with zero checklist items as "no checklist" and do not
move them; only when all checklist items are present and all are "- [x]" perform
mkdir -p plans/done && git mv plans/todo/<file> plans/done/<file> and commit
with the message "docs: move completed plan to plans/done/"; include clear
logging at each decision point.
In `@src/lib.rs`:
- Line 22: Add the new public module to the crate-level "Stable public API"
rustdoc list: update the module list in the crate root documentation to include
the newly exported symbol "policy" so the stable API docs stay in sync with the
pub mod policy; locate the crate-level doc comment that enumerates stable
exports and insert "policy" in the same format as the other items.
In `@src/mcp/resources.rs`:
- Around line 108-121: The test read_rule_returns_definition must assert the new
canonical payload field policy_templates is present and an array; update that
test to fetch the JSON response and add an assertion that the top-level
"policy_templates" key exists and its value is an array (e.g., non-null and JSON
array length/type check) to catch regressions from the renamed field,
referencing the code that constructs policy_templates in the resource (the
policy_templates variable and the JSON object with "policy_templates":
policy_templates).
In `@src/mcp/tools.rs`:
- Around line 419-440: The Rego-specific functions (suggest_rego and
validate_rego) should not reimplement the Rego branch — instead make them thin
aliases that call the existing generic functions with PolicyEngine::Rego;
replace the body of suggest_rego to parse args as now, then forward to
suggest_policy(parsed, PolicyEngine::Rego) (and do the same pattern for
validate_rego calling validate_policy), ensuring you pass through
parsed.confidence/parsed.category/parsed.code_snippet as needed; remove
duplicated logic that manually searches ctx.rules, calls template_for,
build_template_vars, render_template, generate_default_stub, and
apply_confidence_wrapping so all behavior is centralized in
suggest_policy/validate_policy.
In `@src/rego/grouping.rs`:
- Around line 150-158: Add a tracing::debug! (or tracing::warn!) call right
before the `continue` in the `let Some(stub) =
finding.policy_output(PolicyEngine::Rego) else { ... }` branch to record that a
finding was skipped due to missing Rego policy output; include
PolicyEngine::Rego and a concise identifier for the finding (e.g., finding.id()
or a Debug/string representation of `finding`) in the log message so skipped
cases are observable for upstream debugging while preserving the existing
continue behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: afdb6693-f958-4bc6-9b9b-50b1ffdb7248
📒 Files selected for processing (27)
.agents/commands/pr.md, docs/CEDAR_SUPPORT.md, src/cedar/generator.rs, src/cedar/grouping.rs, src/cedar/mod.rs, src/cli.rs, src/commands/extract.rs, src/commands/rules.rs, src/deep/candidate.rs, src/deep/context.rs, src/deep/finding.rs, src/deep/merge.rs, src/deep/prompt.rs, src/lib.rs, src/mcp/resources.rs, src/mcp/tools.rs, src/output/json.rs, src/output/text.rs, src/policy.rs, src/rego/generator.rs, src/rego/grouping.rs, src/rego/mod.rs, src/rules/mod.rs, src/scanner/matcher.rs, src/types.rs, tests/deep_http_integration.rs, tests/deep_subprocess_integration.rs
- `src/cedar/grouping.rs`, `src/rego/grouping.rs`: warn (instead of silently skip) when a finding lacks the engine's policy_output, so the pre-fill invariant is observable rather than masked.
- `src/types.rs`: dedupe duplicate engine entries in `set_policy_output` via retain+push so the vector holds at most one PolicyOutput per engine even for in-memory-built Findings.
- `src/commands/extract.rs`: reject existing symlinks at the leaf of `write_policy_file` to close a follow-the-link escape past the canonical-parent containment check.
- `src/mcp/tools.rs`: collapse `suggest_rego` and `validate_rego` into thin aliases that forward to `suggest_policy` / `validate_policy_tool` pinned to Rego, eliminating the parallel dispatch path. Extend `build_template_vars` with `roles_set` / `cedar_roles_set` and CSV normalisation so MCP rule_id-driven suggestions render the same as the structural scanner path for rules like ASP.NET Authorize(Roles="Admin,Manager").
- `src/mcp/resources.rs`: assert `policy_templates` is a JSON array in `read_rule_returns_definition` to guard against regressions on the renamed field.
- `src/lib.rs`: add the `policy` module to the crate-level Stable public API rustdoc list.
- `.agents/commands/pr.md`: tighten the plan-file matching algorithm (token derivation, no-match / multi-match / no-checklist cases).
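The `src/types.rs` bullet's retain+push idiom can be sketched as follows (type shapes assumed from the PR description, not the crate's exact definitions): drop any existing entry for the engine before pushing, so the vector always holds at most one `PolicyOutput` per engine, even when duplicates were built up in memory:

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
enum PolicyEngine { Rego, Cedar }

struct PolicyOutput { engine: PolicyEngine, content: String }

struct Finding { policy_outputs: Vec<PolicyOutput> }

impl Finding {
    fn set_policy_output(&mut self, engine: PolicyEngine, content: String) {
        // Remove every existing entry for this engine, then insert the
        // new one — at most one PolicyOutput per engine survives.
        self.policy_outputs.retain(|p| p.engine != engine);
        self.policy_outputs.push(PolicyOutput { engine, content });
    }
}
```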
@coderabbitai Addressed in 6627b80: outside-diff-range comments and nitpicks.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.agents/commands/pr.md:
- Line 16: Update the search behavior sentence "Search `plans/todo/` for files
whose **filename** contains the token, and as a fallback files whose **content**
contains it" to remove or constrain the content fallback: either delete the
content-fallback clause so only filenames are matched, or restrict content
checks to structured areas (e.g., YAML frontmatter or the first N lines) and
define matching rules (whole-word match, case-insensitive) to avoid broad
matches and expensive scans; ensure the edited line clearly states the chosen
approach and the matching rules so implementers can follow it when updating the
search logic.
- Around line 15-16: Update the "Extract an issue/feature token from the branch
name" rule to be explicit: if the branch contains a slash (type/slug) and the
portion after the slash begins with digits, extract the leading numeric sequence
as the token (e.g., fix/896-buffer-import → 896); otherwise when there is a
slash extract the entire kebab-case slug after the slash as the token (e.g.,
feature/add-new-widget → add-new-widget); if there is no slash and the branch
starts with digits extract the leading digits, otherwise use the full branch
name as the slug (e.g., fix-typo → fix-typo); keep the search behavior for
plans/todo/ as first matching filenames that contain the token and then fall
back to searching file contents.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: cf2c73ef-e8ee-4564-baaf-eaa3405464c7
📒 Files selected for processing (8)
.agents/commands/pr.md, src/cedar/grouping.rs, src/commands/extract.rs, src/lib.rs, src/mcp/resources.rs, src/mcp/tools.rs, src/rego/grouping.rs, src/types.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/mcp/resources.rs
- src/lib.rs
🧹 Nitpick comments (1)
.agents/commands/pr.md (1)
22-25: ⚡ Quick win

Clarify checklist logic control flow.

The three checklist conditions (zero items, any unchecked, all checked) read as independent "if" statements rather than a clear if-else-if cascade. While the intent is apparent, restructuring as an explicit conditional chain would eliminate ambiguity for implementers.

📋 Suggested restructure for clarity

```diff
-  - If **exactly one match**: inspect its checklist items
-    - If the file has zero `- [ ]` / `- [x]` items, treat it as "no checklist" and skip the move
-    - If any `- [ ]` items remain unchecked, leave it in `plans/todo/`
-    - Only when every checklist item is `- [x]`: `mkdir -p plans/done && git mv plans/todo/<file> plans/done/<file>` and commit `docs: move completed plan to plans/done/`
+  - If **exactly one match**: inspect its checklist items and decide:
+    - **If** the file has zero `- [ ]` / `- [x]` items → skip the move (no checklist to complete)
+    - **Else if** any `- [ ]` items remain unchecked → leave it in `plans/todo/` (work incomplete)
+    - **Else** (all items are `- [x]`) → `mkdir -p plans/done && git mv plans/todo/<file> plans/done/<file>` and commit `docs: move completed plan to plans/done/`
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.agents/commands/pr.md around lines 22 - 25, The checklist handling is ambiguous because the three conditions are written like independent ifs; change the logic to an explicit if-else-if cascade so only one branch runs: if the file has zero `- [ ]`/`- [x]` items then treat it as "no checklist" and skip moving; else if any `- [ ]` items remain unchecked then leave the file in `plans/todo/`; else (every checklist item is `- [x]`) perform the move (`mkdir -p plans/done && git mv plans/todo/<file> plans/done/<file>`) and commit with message `docs: move completed plan to plans/done/`. Ensure implementers follow this exact ordering and that only the final "all checked" branch executes the git mv and commit.
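The cascade the comment asks for can be sketched as one classifier (names hypothetical; the real instruction is prose in a markdown file, not code) where exactly one branch runs:

```rust
#[derive(Debug, PartialEq)]
enum PlanAction { NoChecklist, LeaveInTodo, MoveToDone }

// Explicit if / else-if / else cascade over the checklist markers,
// so only one outcome is ever chosen for a plan file.
fn classify_plan(text: &str) -> PlanAction {
    let unchecked = text.matches("- [ ]").count();
    let checked = text.matches("- [x]").count();
    if unchecked + checked == 0 {
        PlanAction::NoChecklist  // nothing to complete: skip the move
    } else if unchecked > 0 {
        PlanAction::LeaveInTodo  // work incomplete: stay in plans/todo/
    } else {
        PlanAction::MoveToDone   // every item is `- [x]`: move and commit
    }
}
```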
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In @.agents/commands/pr.md:
- Around line 22-25: The checklist handling is ambiguous because the three
conditions are written like independent ifs; change the logic to an explicit
if-else-if cascade so only one branch runs: if the file has zero `- [ ]`/`- [x]`
items then treat it as "no checklist" and skip moving; else if any `- [ ]` items
remain unchecked then leave the file in `plans/todo/`; else (every checklist
item is `- [x]`) perform the move (`mkdir -p plans/done && git mv
plans/todo/<file> plans/done/<file>`) and commit with message `docs: move
completed plan to plans/done/`. Ensure implementers follow this exact ordering
and that only the final "all checked" branch executes the git mv and commit.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ea46134b-ce6e-404e-930a-9c3a119d9dce
📒 Files selected for processing (1)
.agents/commands/pr.md
Summary
Phase B of the Cedar/Rego unification work (#71). Collapses the parallel `rego_*` / `cedar_*` fields on `Finding` and `PatternRule` into engine-keyed collections, and routes the extract pipeline and MCP `suggest_policy` / `validate_policy` tools through a single `PolicyGenerator` trait.

Changes

- `PolicyGenerator` trait (`src/policy.rs`) — engine-agnostic dispatch surface for validate, render template, default stub, confidence wrapping, and grouping. `RegoGenerator` / `CedarGenerator` are zero-sized adapters served as `&'static dyn PolicyGenerator`.
- `Finding::policy_outputs: Vec<PolicyOutput>` replaces `rego_stub` / `cedar_stub`. A `FindingShim` keeps deserialization tolerant of legacy JSON; serialization writes only the new shape. Duplicate engine entries are deduped on construction.
- `PatternRule::policy_templates: Vec<PolicyTemplate>` replaces `rego_template` / `cedar_template`. The TOML parser accepts both the new `[[rule.policy_templates]]` array form and legacy `[rule.rego_template]` / `[rule.cedar_template]` blocks.
- `PolicyEngine` moved to `src/types.rs` with `as_str()` (lowercase canonical) and `human_name()` (capitalized for prose) helpers.
- `commands::extract` — single `run_extract` pipeline parameterised by `&dyn PolicyGenerator`, replacing the side-by-side `extract_rego` / `extract_cedar` paths.
- `get_rule` / `list_rules` now expose `policy_templates` / `template_engines`; `suggest_policy` / `validate_policy` dispatch through the trait.
Testing
- `cargo test --all-features` — 453 tests pass, including new coverage:
  - `legacy_stub_fields_fold_into_policy_outputs`
  - `explicit_policy_outputs_wins_over_legacy_fields`
  - `finding_serializes_only_policy_outputs`
  - `legacy_template_blocks_fold_into_policy_templates`
  - `policy_templates_array_form_parses`
- `cargo clippy --all-features -- -D warnings` clean
- `cargo fmt --check` clean

Breaking changes
- JSON output: `rego_stub` / `cedar_stub` keys are no longer emitted on `Finding` — consumers must read `policy_outputs`. Persisted findings JSON written by older versions still loads via the deserialization shim.
- MCP: `get_rule` / `list_rules` no longer expose `rego_template` / `has_rego_template`; use `policy_templates` / `template_engines`.

Closes #71