Knowledge In, Errors Out: Distilling Trust into Clinical LLMs

23 March 2026
11.00am – 12.00pm AEDT
Room G03, Ainsworth Building (J17), UNSW

Trustworthiness in clinical LLM systems depends not on fluency, but on grounding models in authoritative knowledge and distilling that signal into reliable behaviours. In this talk, Dr Yuan-Fang Li, Chief AI Scientist at Oracle Health & AI, presents two case studies demonstrating this principle in practice.

For automated ICD-10 coding, integrating coding standards and structured validation reduces hallucinated predictions and improves robustness as guidelines evolve. For radiology report generation, verifiable, fact-level feedback and preference-based supervision enhance accuracy while enabling more data- and compute-efficient reinforcement fine-tuning.
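The ICD-10 case study's structured validation can be sketched in a few lines: check each predicted code against an authoritative code table and reject anything that does not exist in the standard. This is a minimal illustrative sketch, not Oracle's actual pipeline; the code table and predictions below are made up for the example.

```python
# Hypothetical sketch: filtering hallucinated ICD-10 predictions
# against an authoritative code set. The table and the example
# predictions are illustrative stand-ins.

# A tiny stand-in for an authoritative ICD-10 code table.
VALID_ICD10_CODES = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "I10": "Essential (primary) hypertension",
    "J45.909": "Unspecified asthma, uncomplicated",
}

def validate_predictions(predicted_codes):
    """Split model output into codes present in the standard and
    hallucinated codes that should be rejected before use."""
    valid, hallucinated = [], []
    for code in predicted_codes:
        (valid if code in VALID_ICD10_CODES else hallucinated).append(code)
    return valid, hallucinated

# One real code and one plausible-looking but nonexistent code.
valid, rejected = validate_predictions(["E11.9", "Z99.999"])
print(valid)     # ['E11.9']
print(rejected)  # ['Z99.999']
```

Because the check runs against the code table rather than the model's fluency, the same filter stays correct when guidelines change: updating the table updates the validator.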

Together, these examples outline a practical blueprint for trustworthy clinical AI: combine grounding with verification, then distil the resulting signal so models not only sound correct, but behave reliably.
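The radiology side of this blueprint — verifiable, fact-level feedback turned into preference pairs — could be sketched as follows. This is an assumed, simplified illustration: real systems would extract findings from free-text reports with an information-extraction model, whereas here the findings are given directly as sets, and the F1-style reward is one plausible choice of fact-level score.

```python
# Hypothetical sketch: score candidate radiology reports by
# fact-level agreement with reference findings, then rank them
# into a preference pair for preference-based fine-tuning.

def fact_f1(predicted_facts, reference_facts):
    """F1 over atomic findings: a verifiable, fact-level reward."""
    if not predicted_facts and not reference_facts:
        return 1.0
    tp = len(predicted_facts & reference_facts)
    precision = tp / len(predicted_facts) if predicted_facts else 0.0
    recall = tp / len(reference_facts) if reference_facts else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative findings, pre-extracted as sets.
reference = {"cardiomegaly", "no pleural effusion"}
report_a = {"cardiomegaly", "no pleural effusion"}  # faithful
report_b = {"cardiomegaly", "pneumothorax"}         # partly wrong

# The higher-scoring report becomes the "chosen" side of a
# preference pair; the lower-scoring one is "rejected".
chosen, rejected = sorted(
    [report_a, report_b], key=lambda r: fact_f1(r, reference), reverse=True
)
print(fact_f1(report_a, reference))  # 1.0
print(fact_f1(report_b, reference))  # 0.5
```

Because the reward is computed from checkable facts rather than from another model's judgment, each comparison is cheap and auditable, which is what makes the resulting reinforcement fine-tuning more data- and compute-efficient.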

Dr Li leads a team of over 100 applied scientists at Oracle Health & AI in Australia, developing advanced AI solutions for electronic health record systems. His research spans large language models, knowledge graphs, multimodal learning, and neuro-symbolic AI.

Speakers

Yuan-Fang Li

Chief AI Scientist at Oracle Health & AI in Australia

Dr Yuan-Fang Li is the Chief AI Scientist at Oracle Health & AI in Australia, where he works with a team of over 100 applied scientists to develop cutting-edge AI solutions for electronic health record (EHR) systems, aimed at transforming healthcare. In this role, Yuan-Fang provides strategic scientific leadership to ensure the delivery of impactful, high-quality, and innovative AI-driven products. His research interests include large language models, knowledge graphs, multimodal learning, and neuro-symbolic AI.