Ethical Design for AI in Medicine — Rationale

Artificial intelligence (AI) is slowly but steadily being introduced into medical and healthcare settings. From basic research to applied systems already deployed in hospitals, AI-based systems (AIS) are now used in applications that go far beyond the initial role of the first expert systems of the 1970s and 1980s. AIS now assist not only with diagnostic tasks but also with risk stratification, support during surgical procedures, and the monitoring of biomarkers in chronically ill patients. This list is far from exhaustive: virtually any decision taken in healthcare can now be supported by AI.

However, AIS cannot be considered mere tools that leave medical practice and biomedical ethics unaffected. By nature, and typically when they are based on machine learning, AIS can be opaque to users or even to their programmers, becoming so-called black boxes. More generally, AIS can be used to computationally derive decision models that are often hard to explain to non-experts. The use of such AIS raises many ethical questions: how can we trust opaque medical systems, be they black or grey boxes? How should responsibility be shared when these systems are used? How can we define reasonable trade-offs between the opacity, safety and efficiency of these systems? What is the role of explanation in medicine, and how could AIS disrupt it? Finally, once we have understood the normative expectations we direct towards AIS, how can we foster ethical design practices to actually build the ethical AI-based systems we are envisioning?

These questions are a mere sample of the ethical stakes linked to the introduction of AIS in healthcare. These stakes are neither purely theoretical nor purely practical. Instead, they are intertwined in a conundrum, a wicked problem, that we need to tackle collectively using the expertise of every stakeholder. Through this workshop, we hope to facilitate an interdisciplinary dialogue between technologists, medical practitioners and ethicists.



Due to personal reasons, the presentation of Mihaela van der Schaar and Thomas Callender has been cancelled. You will find below the updated schedule.


9:30-10:00: Welcome by Lionel Tarassenko (Reuben College, University of Oxford) and Pascal Marty (Maison Française d'Oxford), followed by a general introduction by Aurelia Sauerbrei (Ethox Centre, University of Oxford)

The ends - What should ethical AIS for healthcare be?

10:00-11:00: Christine Hine (University of Surrey) — Ethics and artificial intelligence in the interdisciplinary collaborations of smart care
11:00-11:30: break
11:30-12:00: Éric Pardoux (IHRIM, École Normale Supérieure de Lyon & MFO) — The ethics of AI ethics or how to ethically design ethical AI?
12:00-12:30: Discussion of the first axis — chaired by Angeliki Kerasidou (Ethox Centre, University of Oxford)

12:30-14:00: lunch break

The means - How to develop ethical AIS for healthcare?

14:00-15:00: Karin Jongsma, Megan Milota, Jojanneke Drogt (Utrecht University, Netherlands) — Visualizing the ethics of AI in pathology through the lens of human expertise and responsibility
15:00-15:30: break
15:30-16:00: Francis McKay (Ethox Centre, University of Oxford) — Digital Health Citizenship and the Problem of De-identified Data Ownership
16:00-16:30: Jessica Morley (Oxford Internet Institute, University of Oxford) — The importance of pro-ethical design
16:30-17:00: Discussion of the second axis

17:00-17:30: General discussion to conclude the workshop


Please find the detailed program with abstracts at the bottom of this page.


If you would like to attend the workshop, please register as soon as possible.

Lunch will be provided by the National Pathology Imaging Co-operative (NPIC) for participants who registered before 06/02/2023.

The workshop will take place in the auditorium of the Maison Française d'Oxford, 2-10 Norham Road, OX2 6SE; please find the map here.

Remote access will be provided; details will be shared with registered participants a few days before the event.


Christine Hine: Ethics and artificial intelligence in the interdisciplinary collaborations of smart care

University of Surrey
Perspectives from Science and Technology Studies, and in particular a focus on how scientific and technical work is done, can help us to understand what ethical artificial intelligence might be and how to achieve it. To illustrate this point the presentation will explore a case study of a smart care system using machine learning to enable remote monitoring of people living at home with dementia. Through exploring ethics as practice and discourse from the diverse perspectives of those involved in development, key areas of working where ethical moments arise and are handled are identified. Ethics manifests in a shared understanding of the common goal within an interdisciplinary trading zone and also through the practices of key team members who translate concerns between discipline-based research groups and who act as representatives of the ultimate users of the systems. Ethics is done, according to participants, both in the meetings where engineering and clinical perspectives come together and in discipline-specific practices that sit outside of the trading zone. This STS-informed perspective on ethical artificial intelligence enables us to understand how important it is for an infrastructure that supports ethical thinking to be woven through the collaborations that create artificial intelligence, across disciplines and throughout the lifetime of a project.

Mihaela van der Schaar & Thomas Callender: “Sunlight is said to be the best of disinfectants”: Transparency is key to ethical AI in healthcare (cancelled)

University of Cambridge & University College London


Karin Jongsma, Megan Milota, Jojanneke Drogt: Visualizing the ethics of AI in pathology through the lens of human expertise and responsibility

Utrecht University
Machine learning and deep learning have proven to be particularly useful for image processing. This may explain why the majority of current and proposed AI applications in medicine are used to aid image-based diagnostics in fields like radiology and pathology. In their interactions with these new technologies, medical professionals will have to renegotiate their position and role in the digital transition; they will also have to critically consider what expertise they are willing to outsource to AI tools and which (new) competencies and tasks medical professionals need to responsibly use these technologies.
These issues take center stage in our talk. To visualize the ways in which pathologists work and the skills and roles involved, we will screen part of our ethnographic film. The film provides knowledge about pathology as a specialism and explicitly shows what AI can mean for pathology practices. It also shows the expertise pathologists and lab technicians require to conduct their work and gives a clearer picture of the responsibilities they take on daily.
After a brief introduction to the overarching project by Karin Jongsma, Megan Milota will introduce the film, how it was made, and the fragments that will be shown during the talk. Afterwards, Jojanneke Drogt will briefly reflect on the ethical concerns raised by the fragments: how should AI be integrated into the expertise of pathology professionals? What does the integration of AI mean for the skills and responsibilities of pathologists?

Francis McKay: Digital Health Citizenship and the Problem of De-identified Data Ownership

University of Oxford 
Many legal and regulatory precedents allow for the sharing of de-identified health data for the training of medical AI systems without obtaining opt-in consent from data subjects beforehand. The ethical grounds for doing so largely stem from the reduced risks to patient privacy that anonymity brings and the limited interests data subjects possess in anonymised data. Yet, as several examples of public backlash over medical data sharing reveal, there are nonetheless persistent concerns from the public regarding de-identified data sharing. In some cases, these collective concerns have frustrated efforts to build efficient medical data sharing systems, and they promise to continue to do so in the future if not addressed. This paper delves into one key cause for that public reaction: an anthropological phenomenon I call the "inalienability of data." The problem, which I derive from ethnographic research amongst patient and public involvement groups over the past two years, refers to a persistent anthropological imaginary regarding the ownership of health information, in which de-identified data remains symbolically linked back to the original data subject, despite sufficient technological attempts to remove them as a referent. This phenomenon, I argue, is widespread, and can shape ethical expectations of patients and the public regarding the rights they ought to have over the sharing of medical data. In many cases, these expectations also run counter to current legal and bioethical licences for de-identified data sharing, and insofar as they do, they pose a challenge to prevailing ethical precedents and to the acquisition of a social licence for medical AI research. Solving that problem is central, then, to the future continuance of medical AI research. I therefore conclude by offering suggestions on what to do in light of that persistent problem.

Jessica Morley: The importance of pro-ethical design

University of Oxford 

This talk will cover the importance of going beyond ethics washing by taking concrete action and ensuring ethics is seen as being a key part of the successful development, deployment, and use of AI in healthcare.

Éric Pardoux: The ethics of AI ethics or how to ethically design ethical AI?

École Normale Supérieure de Lyon / IHRIM & Maison Française d'Oxford

Artificial intelligence (AI) appears to be everywhere nowadays, in medicine as elsewhere. As such, almost any field of philosophy can be linked to it in some fashion. Epistemology, ethics and philosophy of medicine are some of these specialized domains. Nonetheless, the exact roles that philosophy (and philosophers) can take in the development of AI and AI-based systems remain blurry.

My doctoral research studies AI in healthcare and medicine. My main aim is to understand how to ethically design ethical AI systems. This project gives me the opportunity to question the position a philosopher can take at the interplay of philosophy, computer science and medicine, and, more specifically, to ponder the input that philosophers or philosophy can provide to the actual development of AI-based systems.

Although the objective of my doctoral research is broad, this talk will offer insight into some of the ways in which both ethics and epistemology may be incorporated into the very design of AI systems for healthcare and medicine.
