Developing Novel Approaches for Explainable AI: Enhancing Transparency and Trust in Black-Box Models
Ulster University - Magee Campus
About the Project
This PhD aims to address a critical challenge in artificial intelligence: enhancing the transparency and trustworthiness of “black-box” models through Explainable AI (XAI). As AI systems increasingly impact fields like healthcare and finance, their opaque nature presents significant ethical, legal, and operational concerns. The core issue lies in the trade-off between model accuracy and interpretability. While state-of-the-art AI models, such as deep neural networks, reinforcement learning, and generative AI, are highly accurate, they often lack transparency. In contrast, simpler models are more interpretable but less capable of handling complex tasks.
This research proposes to bridge this gap by developing novel XAI techniques that maintain model performance while improving interpretability. Moving beyond current methods such as post-hoc explanations (LIME, SHAP, Grad-CAM), this study seeks to establish robust, user-centric approaches that meet the needs of diverse stakeholders, including data scientists, policymakers, and end users.
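For context, a post-hoc explanation of the kind named above can be produced in a few lines with the real `shap` library. The scikit-learn model and dataset below are illustrative assumptions for demonstration, not part of the project:

```python
# A minimal sketch of a post-hoc explanation in the LIME/SHAP family.
# The dataset and model choice here are assumptions for illustration.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer attributes each prediction to input features after
# training, without modifying the underlying "black-box" model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])
```

The key point is that the explanation is generated after the fact, so accuracy is untouched; the limitation, which this project targets, is that such post-hoc attributions are approximations of the model's reasoning rather than guarantees about it.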
Key objectives include designing model-agnostic explainability techniques for complex models, enabling real-time interpretability for dynamic applications such as healthcare, and tailoring explanations to different audiences (a minimal sketch of one such model-agnostic technique follows). The project will also integrate fairness and bias detection in high-stakes areas, including healthcare, finance, and law.
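As one concrete illustration of what "model-agnostic" means here, the sketch below implements classic permutation importance, which treats any fitted model purely as a black box; the function name and interface are hypothetical, chosen for demonstration:

```python
# A minimal sketch of a classic model-agnostic technique, permutation
# importance: shuffle one feature at a time and measure how much the
# model's score degrades. All names here are illustrative.
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature/target link
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # average score drop per feature
    return importances
```

Because the probe only calls `model.predict`, the same code applies unchanged to a random forest, a deep network, or any other predictor, which is the property the project's proposed techniques would need to preserve.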
The research methodology will combine theoretical framework development, algorithm design, and empirical testing. Theoretical work will draw on information theory, causality, and cognitive science to formalise interpretability. New algorithms will enhance transparency, scalability, and accessibility for technical and non-technical users. Empirical validation using domain-specific datasets will rigorously evaluate interpretability, accuracy, and fairness.
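By way of illustration, one common way such empirical validation quantifies interpretability is surrogate fidelity: how closely a simple, human-readable model reproduces the black box's predictions. The metric below is an assumed example for demonstration, not the project's specified protocol:

```python
# An illustrative interpretability metric: fit a shallow decision tree
# to mimic the black-box model's predictions, then report how often
# the two agree. Higher fidelity means the readable surrogate is a
# more faithful summary of the black box.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def surrogate_fidelity(black_box, X, max_depth=3):
    y_bb = black_box.predict(X)  # treat black-box outputs as labels
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(X, y_bb)
    return accuracy_score(y_bb, surrogate.predict(X))
```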
This research will contribute to the field of XAI by developing explanation methods that are robust, fair, and suited to real-time, high-stakes applications. Expected contributions include new interpretability tools, methods for bias detection, and user-friendly solutions to enhance public trust in AI. The proposal aligns with industry and societal demands for ethical AI and promises significant academic and practical advancements.