
Commit dab4564

Restore motivation and background for Medico 2026
Reintroduced motivation and background section to emphasize the importance of transparency, interpretability, and safety in AI systems for clinical workflows. This section outlines the challenges faced by existing VQA models and the goals of Medico 2026.
1 parent 34180a2 commit dab4564

1 file changed

Lines changed: 9 additions & 12 deletions


_editions/2026/tasks/medico.md

```diff
@@ -29,18 +29,6 @@ Participating teams will write short working-notes papers that are published in
 the methods that the teams use to address the task and analyze the results and, second, "Quest for Insight" papers, which address a question aimed at gaining more insight into the task, but do not necessarily present
 task results. Example questions for "Quest for Insight" papers are below.
 
-#### Motivation and background
-
-For AI systems to be integrated into clinical workflows, they must be transparent, interpretable, and safe. In GI imaging, deep learning models have achieved promising results for classification and detection tasks,
-yet their black-box nature limits trust among clinicians. Medical professionals require explanations that clearly connect visual evidence to clinical conclusions.
-
-Medical VQA offers a natural interface for explainable decision support, enabling clinicians to ask structured questions and receive interpretable responses. Nevertheless, many existing VQA models provide answers without
-sufficient justification or safeguards against unsafe reasoning. Medico 2026 addresses these limitations by explicitly integrating explainability and safety into both task design and evaluation. By encouraging multimodal
-explanations and clinically consistent behavior, the challenge aims to advance AI systems that support, rather than replace, clinical expertise.
-
-
-#### Task Description
-
 **Subtask 1: Medical Image Question Answering in GI Endoscopy**
 
 This subtask focuses on developing models that accurately answer clinically relevant questions based on GI endoscopy images using the Kvasir-VQA-x1 dataset, which contains more than 150,000 question–answer pairs.
@@ -58,6 +46,15 @@ In addition to interpretability, this subtask introduces a dedicated safety layer
 misleading explanations, or non-compliance with established medical best practices. To support retrieval-augmented reasoning, participants may leverage a curated database of verified endoscopy resources provided as
 part of the challenge.
 
+#### Motivation and background
+
+For AI systems to be integrated into clinical workflows, they must be transparent, interpretable, and safe. In GI imaging, deep learning models have achieved promising results for classification and detection tasks,
+yet their black-box nature limits trust among clinicians. Medical professionals require explanations that clearly connect visual evidence to clinical conclusions.
+
+Medical VQA offers a natural interface for explainable decision support, enabling clinicians to ask structured questions and receive interpretable responses. Nevertheless, many existing VQA models provide answers without
+sufficient justification or safeguards against unsafe reasoning. Medico 2026 addresses these limitations by explicitly integrating explainability and safety into both task design and evaluation. By encouraging multimodal
+explanations and clinically consistent behavior, the challenge aims to advance AI systems that support, rather than replace, clinical expertise.
+
 
 #### Target group
```