xAI Workshop

Designing Explainable AI Systems

November 2025 · 1-Day Workshop · 9–12 participants max (to form 3 groups)

What is Explainability?

AI and Machine Learning are increasingly used in healthcare and other high-impact sectors. Yet, many of these systems remain 'black boxes': their inner workings are hidden, making it hard to understand or trust their decisions.

Explainable AI (XAI) seeks to make AI systems transparent and human-understandable. By designing explanations from the start, we can:

  • Align AI with human decision-making
  • Detect bias and risks early in development
  • Ensure trustworthiness, fairness and accountability

However, most current XAI techniques focus only on the technical side. To create truly trustworthy systems, explanations must be designed with people in mind — combining technical methods with human-centered design, ethics, and diverse user perspectives.

[Figure: XAI Workshop Framework Diagram]

What is the Goal?

This 1-day workshop explores how to implement effective oversight that keeps AI aligned with ethical and societal values. Using a practical use case, we will examine step by step:

  • What the AI application is meant to achieve
  • Who the different stakeholders are and their explanation needs
  • Why explainability is important in healthcare
  • How to design for appropriate trust in AI systems

Why Participate?

Through discussions and exercises, we will work together to determine the best explainability approaches for different stakeholders:

  • Learn how to integrate explainability early in the design process
  • Apply methods to real healthcare use cases
  • Build a shared understanding across technical and non-technical teams
  • Co-create practical solutions for responsible and trustworthy AI

Learning Objectives

Integrate Early Design

Learn how to integrate explainability early in the design process to ensure transparency from the start

Identify Stakeholder Needs

Reflect on who the application targets, why different actors use it, and their specific explanation needs

Apply to Real Use Cases

Apply explainability methods to real healthcare scenarios through hands-on exercises

Co-create Solutions

Build shared understanding across technical and non-technical teams for responsible and trustworthy AI

Workshop Schedule

A structured day of interactive exercises and collaborative learning

Introduction

13:00 - 13:25

Introducing the topic and goals of the workshop, getting to know each other, and warming up

Short introduction to XAI concepts
Healthcare use case(s) overview
Workshop goals and methodology
Participant introductions and icebreaker

Exercise 1: Mapping the Actors

13:25 - 14:00

Defining the different actors involved in AI applications and their needs

Identifying key stakeholders
Building personas for different actors
Understanding diverse perspectives
Group presentations of personas

Exercise 2: XAI Needs per Actor

14:00 - 14:50

Determining the specific questions each actor has about the application that require explanation

Identifying explanation needs
Global vs local explanations
Values and properties consideration
Group discussion of XAI needs

Coffee Break

14:50 - 15:00

Exercise 3: Prioritising & Timing

15:00 - 15:30

Identifying the most important explainability needs and when explanations are required

Voting on priority needs
Timing of explanations
Critical decision points
Group consensus building

Exercise 4: Translating the Needs

15:30 - 16:25

Designing tangible explanations specific to the healthcare application

Prototyping explanations
Visual and textual design
Implementation strategies
Presentation of solutions

Closing & Feedback

16:25 - 16:30

Reflecting on the exercises and sharing key takeaways

Main learnings
Implementation insights
Future applications
Participant feedback

Prerequisites

To get the most out of this workshop, participants should have:

  • An interest in AI ethics and healthcare applications
  • Openness to collaborative learning
  • A willingness to engage in group exercises

No technical background is required.

Workshop Instructors

Kristýna Sirka Kacafírková

PhD Candidate

imec-SMIT, Vrije Universiteit Brussel, Brussels, Belgium

Katherine (Kate) Prescott

Programme Manager

Oxford-GSK Collaboration in Biostatistics & AI in Medicine, University of Oxford

Sami Adnan

DPhil Candidate

NDPCHS, University of Oxford

Ready to Design Explainable AI?

Join our interactive workshop and learn how to create transparent, trustworthy AI systems that put people first

Download Syllabus

Funding Support

This workshop is funded by FWO Travel Research Grant No. V451725