Conversation
Greptile Summary
This PR introduces a slop-analyzer-based review flow: a delta-reporting helper script, unit tests, a CI gate, and documentation. Key findings:
Confidence Score: 5/5. Safe to merge; all remaining findings are P2 style and clarity improvements that do not affect the correctness of the CI gate or the output artifacts. The CI workflow logic is sound and the core delta computation (signature-based deduplication) is well-tested. Remaining findings: scripts/slop-review.ts (dead condition in parseArgs and annotation ID uniqueness); scripts/slop-review.test.ts (parseArgs error branch coverage).
| Filename | Overview |
|---|---|
| scripts/slop-review.ts | New helper script that compares slop-analyzer base/head JSON reports, builds delta reports, emits Hunk agent-context JSON; has a dead headPath === undefined condition in parseArgs and potential annotation ID collisions across files. |
| scripts/slop-review.test.ts | Unit tests covering delta occurrence building, agent context generation, and text formatting; parseArgs error branches are not covered despite being exported. |
| .github/workflows/slop-ci.yml | New CI workflow that archives base/head snapshots, runs slop-analyzer, builds Hunk review artifacts, and fails only on newly introduced findings; logic is sound and steps are well-structured. |
| README.md | Adds documentation for local and CI slop-analyzer review flow with clear usage examples. |
Sequence Diagram
```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant Git as git archive
    participant SA as slop-analyzer
    participant SR as slop-review.ts
    participant Hunk as Hunk (local)
    GH->>Git: archive base SHA → base_dir
    GH->>Git: archive HEAD → head_dir
    Git-->>GH: extracted snapshots
    GH->>SA: scan base_dir --json
    SA-->>GH: base-slop.json
    GH->>SA: scan head_dir --json
    SA-->>GH: head-slop.json
    GH->>SR: --base base-slop.json --head head-slop.json (text)
    SR-->>GH: slop-summary.txt
    GH->>SR: --base ... --head ... --json
    SR-->>GH: slop-delta.json
    GH->>SR: --base ... --head ... --agent-context
    SR-->>GH: slop-agent-context.json
    GH->>GH: git diff base...HEAD → pr.patch
    GH->>SR: --base ... --head ... --fail-on-new
    SR-->>GH: exit 0 / exit 1 (fails CI on new slop)
    GH->>GH: upload artifacts (pr.patch, slop-agent-context.json, ...)
    Note over Hunk: Developer downloads artifacts
    Hunk->>Hunk: hunk patch pr.patch --agent-context slop-agent-context.json
```
Reviews (1): Last reviewed commit: "feat(ci): add slop analyzer review workf..."
```typescript
if (!headPath) {
  throw new Error("Pass --head <report.json>.");
}

if (
  headPath === undefined ||
  headPath.startsWith("--") ||
  (basePath?.startsWith("--") ?? false)
) {
  throw new Error("Expected a file path after --head/--base.");
}
```
Dead `headPath === undefined` condition

After the `if (!headPath)` guard throws, the variable can never be undefined or falsy. The second `if` block's `headPath === undefined` branch is dead code that will never be reached. The intent (catching `--head --base ...` where the user forgot the actual path) is real and is correctly caught by `headPath.startsWith("--")`, but the leading dead condition makes the intent less clear and could mislead future readers.
Suggested change:

```diff
 if (!headPath) {
   throw new Error("Pass --head <report.json>.");
 }
-if (
-  headPath === undefined ||
-  headPath.startsWith("--") ||
-  (basePath?.startsWith("--") ?? false)
-) {
+if (headPath.startsWith("--") || (basePath?.startsWith("--") ?? false)) {
   throw new Error("Expected a file path after --head/--base.");
 }
```
```typescript
return {
  id: `slop:${occurrence.finding.ruleId}:${index}`,
```
Annotation IDs collide across files

The `index` passed to `buildAnnotation` is the per-file `occurrences.map(…, index)` counter, which resets to 0 for each file. If the same `ruleId` is the first new finding in two different files, both will be assigned the same ID (e.g. `slop:structure.duplicate-function-signatures:0`). Given the test data in this PR, that exact collision will happen for `src/accounts/normalize.ts` and `src/users/normalize.ts`.

If Hunk's MCP or selection layer resolves annotations by ID globally, duplicate IDs will silently surface the wrong annotation. Using the `occurrence.path` in the ID makes it globally unique at essentially zero cost:
```diff
 return {
-  id: `slop:${occurrence.finding.ruleId}:${index}`,
+  id: `slop:${occurrence.finding.ruleId}:${occurrence.path}:${index}`,
```
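A minimal guard test for this would assert global ID uniqueness. The `buildAnnotationId` stub below is hypothetical and mirrors only the ID template discussed above; the real builder lives in scripts/slop-review.ts and should be imported from there:

```typescript
// Hypothetical stand-in for the ID portion of buildAnnotation, using the
// path-qualified template suggested in this review.
interface Occurrence {
  path: string;
  finding: { ruleId: string };
}

function buildAnnotationId(occurrence: Occurrence, index: number): string {
  return `slop:${occurrence.finding.ruleId}:${occurrence.path}:${index}`;
}

// Two files whose first new finding shares a ruleId — the exact collision
// scenario from the test data in this PR.
const occurrences: Occurrence[] = [
  { path: "src/accounts/normalize.ts", finding: { ruleId: "structure.duplicate-function-signatures" } },
  { path: "src/users/normalize.ts", finding: { ruleId: "structure.duplicate-function-signatures" } },
];

// The per-file index resets to 0, but the embedded path keeps IDs unique.
const ids = occurrences.map((occurrence) => buildAnnotationId(occurrence, 0));
console.log(new Set(ids).size === ids.length); // true
```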
```typescript
describe("slop review helpers", () => {
  test("delta comparison keeps only new per-file occurrences from grouped findings", () => {
    const delta = buildDeltaOccurrences(baseReport, headReport);

    expect(delta).toHaveLength(3);
    expect(delta.map((occurrence) => occurrence.path)).toEqual([
      "src/accounts/normalize.ts",
      "src/users/normalize.ts",
      "src/teams/normalize.ts",
    ]);
  });

  test("agent context groups new findings by file and emits hunk-friendly annotations", () => {
    const context = buildAgentContext(baseReport, headReport);

    expect(context.summary).toContain("3 new findings across 3 files vs base");
    expect(context.files.map((file) => file.path)).toEqual([
      "src/accounts/normalize.ts",
      "src/users/normalize.ts",
      "src/teams/normalize.ts",
    ]);
    expect(context.files[0]).toMatchObject({
      path: "src/accounts/normalize.ts",
      summary: "1 new slop finding · hotspot score 4.50.",
      annotations: [
        {
          summary: "Found 3 duplicated function signatures",
          newRange: [1, 1],
          confidence: "medium",
          source: "slop-analyzer",
          author: "slop-analyzer",
        },
      ],
    });
    expect(context.files[0]?.annotations[0]?.rationale).toContain(
      "Rule: structure.duplicate-function-signatures",
    );
    expect(context.files[0]?.annotations[0]?.rationale).toContain("Also flagged in 2 other files.");
  });

  test("delta report and text summary describe new findings succinctly", () => {
    const delta = buildDeltaReport(baseReport, headReport);
    const text = formatDeltaText(baseReport, headReport);

    expect(delta.summary).toEqual({
      baseFindingCount: 2,
      headFindingCount: 3,
      newFindingCount: 3,
      newFileCount: 3,
    });
    expect(text).toContain("New slop findings: 3 across 3 files");
    expect(text).toContain("- src/accounts/normalize.ts");
    expect(text).toContain("medium Found 3 duplicated function signatures");
  });
});
```
`parseArgs` error branches not covered

`parseArgs` is exported and has several distinct error paths (missing `--head`, flag-as-value guard, mutually exclusive `--json`/`--agent-context`, unknown argument) that are each exercised only by hand. A few targeted tests would prevent regressions when the CLI grows:

- `parseArgs([])` → throws "Pass --head"
- `parseArgs(["--head", "--base"])` → throws "Expected a file path"
- `parseArgs(["--head", "a.json", "--json", "--agent-context"])` → throws "Specify only one"
- `parseArgs(["--head", "a.json", "--unknown"])` → throws "Unknown argument"
Per the repo's testing guidelines, unit tests should be colocated next to the code they cover (scripts/slop-review.test.ts is already the right place).
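A sketch of those tests, using a stand-in `parseArgs` whose guards and error messages mirror the snippets quoted in this review — in the repo the real function would be imported from scripts/slop-review.ts and the assertions written as bun:test `expect(...).toThrow(...)` cases:

```typescript
// Stand-in for the real parseArgs in scripts/slop-review.ts; the guards below
// mirror the ones quoted in this review so the error fragments match.
interface ReviewArgs {
  headPath: string;
  basePath?: string;
  mode: "text" | "json" | "agent-context";
}

function parseArgs(argv: string[]): ReviewArgs {
  let headPath: string | undefined;
  let basePath: string | undefined;
  let json = false;
  let agentContext = false;
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    if (arg === "--head") headPath = argv[++i];
    else if (arg === "--base") basePath = argv[++i];
    else if (arg === "--json") json = true;
    else if (arg === "--agent-context") agentContext = true;
    else throw new Error(`Unknown argument: ${arg}`);
  }
  if (!headPath) throw new Error("Pass --head <report.json>.");
  if (headPath.startsWith("--") || (basePath?.startsWith("--") ?? false)) {
    throw new Error("Expected a file path after --head/--base.");
  }
  if (json && agentContext) throw new Error("Specify only one of --json/--agent-context.");
  return { headPath, basePath, mode: json ? "json" : agentContext ? "agent-context" : "text" };
}

// Tiny assertion helper so this sketch runs without a test framework.
function expectThrows(fn: () => unknown, fragment: string): void {
  let message = "";
  try {
    fn();
  } catch (error) {
    message = String(error);
  }
  if (!message.includes(fragment)) {
    throw new Error(`expected error containing "${fragment}", got: ${message || "no error"}`);
  }
}

expectThrows(() => parseArgs([]), "Pass --head");
expectThrows(() => parseArgs(["--head", "--base"]), "Expected a file path");
expectThrows(() => parseArgs(["--head", "a.json", "--json", "--agent-context"]), "Specify only one");
expectThrows(() => parseArgs(["--head", "a.json", "--unknown"]), "Unknown argument");
```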
```typescript
export interface SlopDeltaSummary {
  baseFindingCount: number;
  headFindingCount: number;
  newFindingCount: number;
  newFileCount: number;
}
```
Mixed "finding" vs "occurrence" metrics in the delta summary

`newFindingCount` and `newFileCount` in `SlopDeltaSummary` count per-file occurrences (a multi-file finding is expanded into N occurrences, one per file), while `baseFindingCount` / `headFindingCount` are lifted verbatim from slop-analyzer's own `summary.findingCount`, which counts whole findings regardless of file span.

In the test data, the single new multi-file finding produces `newFindingCount: 3` while `headFindingCount` only rises by 1 (2→3). A CI summary showing "New slop findings: 3 … Base findings: 2 → Head findings: 3" could be read as the delta being 1 rather than 3, causing confusion.

Consider either renaming `newFindingCount` to `newOccurrenceCount` (or `newFileFindingCount`) to signal that it is an occurrence count, or computing it directly from slop-analyzer finding counts for consistency.
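A toy illustration of the divergence, with hypothetical report shapes but numbers matching the test data: one new finding spanning three files is 1 finding, yet 3 per-file occurrences.

```typescript
// Simplified finding shape for illustration only; the real slop-analyzer
// report format may differ.
interface Finding {
  ruleId: string;
  files: string[];
}

const baseFindings: Finding[] = [
  { ruleId: "style.todo-comment", files: ["src/a.ts"] },
  { ruleId: "structure.long-function", files: ["src/b.ts"] },
];
const headFindings: Finding[] = [
  ...baseFindings,
  {
    ruleId: "structure.duplicate-function-signatures",
    files: ["src/accounts/normalize.ts", "src/users/normalize.ts", "src/teams/normalize.ts"],
  },
];

// Whole-finding delta, as slop-analyzer's summary.findingCount would report it.
const findingDelta = headFindings.length - baseFindings.length;

// Per-file occurrence count: each new finding expanded across its files.
const newOccurrenceCount = headFindings
  .slice(baseFindings.length)
  .reduce((sum, finding) => sum + finding.files.length, 0);

console.log({ findingDelta, newOccurrenceCount }); // { findingDelta: 1, newOccurrenceCount: 3 }
```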
Summary

- `scripts/slop-review.ts` helper that compares `slop-analyzer` base/head JSON, reports only new findings, and emits Hunk `--agent-context` JSON
- CI workflow that runs `slop-analyzer@0.1.2`, uploads Hunk review artifacts, and fails only on newly introduced slop
- `README.md` documentation for the local and CI review flow

Testing
- `bun run typecheck`
- `bun test scripts/slop-review.test.ts`
- `bun run format:check`
- `bun run lint`

This PR description was generated by Pi using OpenAI GPT-5