WIMC 2026

CheXGPT: A Multimodal Reasoning Language Model for Chest X-Ray Labelling and Report Generation

A mentee-presented multimodal reasoning model for chest X-ray labelling and report generation.

WIMC 2026 / Mentee presentation

Session details

2026 / Warsaw, Poland

WIMC

Summary

What the session covered and why it mattered.

This mentee-presented work introduced CheXGPT, a multimodal reasoning language model designed for chest X-ray labelling and report generation. The project connects image understanding, structured labels, and report-level language generation within a clinically recognizable radiology workflow.

Session context

WIMC 2026

2026 / Warsaw, Poland

Mentee presentation

Outcome

Recognition, result, and the talk's core takeaway.

Result

Conference presentation

Presented as part of the conference record.

Takeaway

Shows how supervised mentorship work can apply medical vision-language models to concrete radiology reporting tasks.

Tags

radiology, chest X-ray, multimodal AI, language models

Contact

Speaking work is most useful when it translates technical systems into clinically usable understanding.

For invitations, workshops, teaching sessions, or collaboration around clinical AI communication, email is the simplest route.