About the Project
The rise of wearable technology has enabled unprecedented continuous data collection, opening new frontiers in health monitoring, activity recognition, and personalized medicine. This abundant sensor data holds immense potential for improving healthcare but also presents significant modelling challenges. To address these, Large Language Models (LLMs) like GPT-4 and Llama are now being harnessed to interpret complex data patterns and analyse human behaviour. However, the opaque “black box” nature of LLMs makes it challenging to quantify uncertainty in their predictions—a critical limitation in healthcare, where transparency and reliability are paramount.
This project aims to develop a comprehensive framework for predictive uncertainty in LLMs, focusing on applications in medical NLP. By developing advanced uncertainty quantification (UQ) methods—including probability calibration, conformal prediction, and Bayesian techniques—this research seeks to reliably assess the accuracy of LLM predictions in healthcare. Key areas of investigation include understanding aleatoric uncertainty (randomness in sensor data) and epistemic uncertainty (model knowledge limitations), as well as examining how domain-specific fine-tuning affects uncertainty estimation.
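As an illustration of one of the UQ methods named above, the following is a minimal sketch of split conformal prediction for a classifier. Everything here is a synthetic assumption for exposition (the random "model", the three-class setup, the calibration data); it is not the project's method or data, only the generic recipe: score a held-out calibration set, take a quantile of the scores, and emit prediction sets at a target coverage level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a model's predicted class probabilities (3 classes).
def predict_proba(n):
    return rng.dirichlet(alpha=[5, 2, 1], size=n)

n_cal = 500
cal_probs = predict_proba(n_cal)
# Synthetic labels drawn from the predicted probabilities themselves.
cal_labels = np.array([rng.choice(3, p=p) for p in cal_probs])

# Nonconformity score: 1 minus the probability assigned to the true label.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Split conformal threshold: the ceil((n+1)(1-alpha))/n empirical quantile.
alpha = 0.1  # target 90% marginal coverage
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

# Prediction set for a new example: every class whose score is below qhat.
test_probs = predict_proba(1)[0]
prediction_set = [c for c in range(3) if 1.0 - test_probs[c] <= qhat]
```

The appeal of this family of methods in a healthcare setting is that the coverage guarantee is distribution-free: under exchangeability, the prediction set contains the true label with probability at least 1 − α, regardless of how well calibrated the underlying model is. A large or ambiguous set is itself a usable signal to defer the decision to a clinician.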
The project’s methodologies will enable LLM-based healthcare applications to identify potentially erroneous outputs, interpret model reliability, and defer decisions in high-uncertainty scenarios, all while generating comprehensive responses to medical inquiries. Additionally, by integrating UQ, this research envisions healthcare chatbots and AI systems capable of effectively communicating their confidence levels, improving decision-making and trust in medical AI applications.
Through real-world testing, the project will validate the utility of these UQ methods, addressing the pressing need for robust uncertainty assessment in healthcare LLMs. Ultimately, this research will promote the responsible adoption of LLMs in sensor-driven healthcare, enhancing transparency, safety, and accountability in AI-assisted medical decision-making.
The School of Computing at Ulster University has held an Athena Swan Bronze Award since 2016 and is committed to promoting and advancing gender equality in Higher Education. We particularly welcome female applicants, as they are under-represented within the School.