WIMC 2026
CheXGPT: A Multimodal Reasoning Language Model for Chest X-Ray Labelling and Report Generation
A mentee-presented multimodal reasoning model for chest X-ray labelling and report generation.
Session details
2026 / Warsaw, Poland
WIMC
On this page
Summary
What the session covered and why it mattered.
This mentee-presented work introduced CheXGPT, a multimodal reasoning language model designed for chest X-ray labelling and report generation. The project connects image understanding, structured labels, and report-level language generation within a clinically recognizable radiology workflow.
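The described flow — image features to structured labels to report-level text — can be sketched as a toy pipeline. This is a hypothetical illustration only: the label set, threshold, and function names below are assumptions, not CheXGPT's actual architecture or vocabulary.

```python
# Hypothetical sketch of a labelling -> report pipeline, assuming a
# per-finding probability output from an upstream vision encoder.
# Label names and the 0.5 threshold are illustrative assumptions.

LABELS = ["Cardiomegaly", "Pleural Effusion", "Pneumothorax"]
THRESHOLD = 0.5

def label_findings(probs: dict) -> dict:
    """Binarize per-finding probabilities into structured labels."""
    return {name: probs.get(name, 0.0) >= THRESHOLD for name in LABELS}

def generate_report(labels: dict) -> str:
    """Compose a minimal impression sentence from structured labels."""
    positives = [name for name, present in labels.items() if present]
    if not positives:
        return "No acute cardiopulmonary abnormality."
    return ("Findings suggestive of "
            + ", ".join(p.lower() for p in positives) + ".")
```

In a real system the report stage would be a conditioned language model rather than a template, but the interface — structured labels in, report text out — matches the workflow the talk describes.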
Session context
Mentee presentation
Outcome
Recognition, result, and the talk's core takeaway.
Result
Conference presentation
Presented as part of the conference record; supporting material may be attached here later without changing the public URL.
Takeaway
Shows how supervised mentorship can turn medical vision-language models into tools for concrete radiology reporting tasks.
Contact
Speaking work is most useful when it translates technical systems into clinically usable understanding.
For invitations, workshops, teaching sessions, or collaboration around clinical AI communication, email is the simplest route.