
feat(text-metrics): split qa_accuracy #645

Open
davidberenstein1957 wants to merge 2 commits into feat/vlm-pr-2-infrastructure from feat/vlm-pr-3a-qa-accuracy

Conversation

Member

@davidberenstein1957 davidberenstein1957 commented Apr 28, 2026

Summary

Splits qa_accuracy into its own stacked PR, adds QAAccuracyMetric, and wires GenEval to qa_accuracy + clip_score.

Stack Position

Files

  • src/pruna/evaluation/metrics/metric_qa_accuracy.py
  • src/pruna/evaluation/benchmarks.py

Test Plan

uv run pytest tests/evaluation/test_text_metrics.py -k qa_accuracy

Review Focus

  • Aggregation behavior (all_or_nothing)
  • GenEval benchmark wiring

Review Flow (Order)

Review the stack in this exact order:

  1. feat(vendor): add LLM2Vec embedding model #637 vendor
  2. feat(infrastructure): add VLM base classes and utilities #638 infrastructure
  3. feat(text-metrics): split qa_accuracy #645 qa_accuracy
  4. feat(text-metrics): split oneig_alignment #646 oneig_alignment
  5. feat(text-metrics): split text_score pair #647 text_score pair
  6. feat(text-metrics): split oneig_reasoning #648 oneig_reasoning
  7. feat(vision-metrics): split vqa #649 vqa
  8. feat(vision-metrics): split vie_score #650 vie_score
  9. feat(vision-metrics): split img_edit_score #651 img_edit_score
  10. feat(e2e-tests): stacked e2e after split metrics #641 e2e tests

This PR in the flow (3/10)


@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


❌ Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

Comment @cursor review or bugbot run to trigger another review on this PR

Reviewed by Cursor Bugbot for commit 15db155.

"between prompts and generated images via VQA-style questions."
),
metrics=["clip_score"], # §3.2: Mask2Former; not in Pruna
metrics=["qa_accuracy", "clip_score"], # strict QA + CLIP score


GenEval benchmark uses lenient mean aggregation

Medium Severity

The GenEval benchmark's qa_accuracy metric is intended for "strict QA" (all-or-nothing aggregation). However, Task.from_benchmark doesn't pass the all_or_nothing kwarg, causing QAAccuracyMetric to default to mean aggregation. This leads to inflated scores that are not comparable to the paper's reference.
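One shape the fix could take is threading per-metric kwargs through benchmark-based construction. This is a hedged sketch only: `QAAccuracyMetric`'s real signature, `GENEVAL_METRIC_KWARGS`, and `instantiate_metric` are hypothetical names, not the actual `Task.from_benchmark` code:

```python
class QAAccuracyMetric:
    """Stand-in for the real metric; defaults to lenient mean aggregation."""

    def __init__(self, all_or_nothing: bool = False):
        self.all_or_nothing = all_or_nothing


# Per-benchmark kwargs so strict aggregation is requested explicitly
# instead of silently falling back to the lenient default.
GENEVAL_METRIC_KWARGS = {"qa_accuracy": {"all_or_nothing": True}}


def instantiate_metric(name: str, registry: dict, kwargs_map: dict):
    # Look up the metric class and apply any benchmark-specific kwargs.
    return registry[name](**kwargs_map.get(name, {}))


metric = instantiate_metric(
    "qa_accuracy", {"qa_accuracy": QAAccuracyMetric}, GENEVAL_METRIC_KWARGS
)
# metric.all_or_nothing is now True, matching the "strict QA" intent.
```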



    raise ValueError(
        f"qa_accuracy aggregation must be one of {{'mean', 'all_or_nothing'}}. Got: {self.aggregation!r}."
    )
self.metric_units = type(self).metric_units


Redundant metric_units self-assignment in __init__

Low Severity

Setting self.metric_units = type(self).metric_units is a no-op because metric_units is already declared as a class attribute and nothing earlier in __init__ (neither super().__init__ nor the surrounding code) overrides it. The line just shadows the class attribute with the same value and adds maintenance noise.
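The no-op can be seen in a minimal toy class (illustrative only, not the real metric class): Python attribute lookup already falls back to the class, so the assignment changes nothing observable except an extra entry in the instance `__dict__`.

```python
class Metric:
    metric_units = "fraction"  # class attribute, shared by all instances

    def __init__(self, redundant: bool = False):
        if redundant:
            # The flagged pattern: copies the class attribute onto the instance.
            self.metric_units = type(self).metric_units


# Both instances read the same value either way...
assert Metric(False).metric_units == Metric(True).metric_units == "fraction"
# ...the only difference is a shadowing instance-level entry.
assert "metric_units" not in vars(Metric(False))
assert "metric_units" in vars(Metric(True))
```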



davidberenstein1957 and others added 2 commits May 8, 2026 11:43
Isolates qa_accuracy metric implementation and GenEval benchmark wiring so it can be reviewed independently before stacking the remaining text metrics.

Made-with: Cursor
Co-authored-by: Cursor <cursoragent@cursor.com>
@davidberenstein1957 force-pushed the feat/vlm-pr-3a-qa-accuracy branch from 04ab2e5 to 161223e on May 8, 2026 at 09:44