xAI Workshop
Designing Explainable AI Systems
What is Explainability?
AI and Machine Learning are increasingly used in healthcare and other high-impact sectors. Yet, many of these systems remain 'black boxes': their inner workings are hidden, making it hard to understand or trust their decisions.
Explainable AI (XAI) seeks to make AI systems transparent and human-understandable. By designing explanations from the start, we can:
- Align AI with human decision-making
- Detect bias and risks early in development
- Ensure trustworthiness, fairness, and accountability
However, most current XAI techniques focus only on the technical side. To create truly trustworthy systems, explanations must be designed with people in mind — combining technical methods with human-centered design, ethics, and diverse user perspectives.
What is the Goal?
This one-day workshop explores how to implement effective oversight that keeps AI aligned with ethical and societal values. Using a practical use case, we will work through, step by step:
- What the AI application is meant to achieve
- Who the different stakeholders are and their explanation needs
- Why explainability is important in healthcare
- How to design for appropriate trust in AI systems
Why Participate?
Through discussions and exercises, we will work together to determine the best explainability approaches for different stakeholders:
- Learn how to integrate explainability early in the design process
- Apply methods to real healthcare use cases
- Build a shared understanding across technical and non-technical teams
- Co-create practical solutions for responsible and trustworthy AI
Learning Objectives
Integrate Early Design
Learn how to integrate explainability early in the design process to ensure transparency from the start
Identify Stakeholder Needs
Reflect on who the application targets, why different actors use it, and their specific explanation needs
Apply to Real Use Cases
Apply explainability methods to real healthcare scenarios through hands-on exercises
Co-create Solutions
Build shared understanding across technical and non-technical teams for responsible and trustworthy AI
Workshop Schedule
A structured day of interactive exercises and collaborative learning
Introduction
13:00 - 13:25: Introducing the topic and goal of the workshop, getting to know each other, and warming up
Exercise 1: Mapping the Actors
13:25 - 14:00: Defining the different actors involved in AI applications and their needs
Exercise 2: XAI Needs per Actor
14:00 - 14:50: Determining the specific questions each actor has about the application that require explanation
Coffee Break
14:50 - 15:00
Exercise 3: Prioritising & Timing
15:00 - 15:30: Identifying the most important explainability needs and when explanations are required
Exercise 4: Translating the Needs
15:30 - 16:25: Designing tangible explanations specific to the healthcare application
Closing & Feedback
16:25 - 16:30: Reflecting on the exercises and sharing key takeaways
Prerequisites
To get the most out of this workshop, participants should have:
- An interest in AI ethics and healthcare applications
- Openness to collaborative learning
- A willingness to engage in group exercises
No technical background is required.
Workshop Instructors
Kristýna Sirka Kacafírková
PhD Candidate
imec-SMIT, Vrije Universiteit Brussel, Brussels, Belgium
Katherine (Kate) Prescott
Programme Manager
Oxford-GSK Collaboration in Biostatistics & AI in Medicine, University of Oxford
Sami Adnan
DPhil Candidate
NDPCHS, University of Oxford
Ready to Design Explainable AI?
Join our interactive workshop and learn how to create transparent, trustworthy AI systems that put people first
Funding Support
This workshop is funded by FWO Travel Research Grant No. V451725