The ethics and epistemology of explanatory AI in medicine and healthcare


Abstract

AI is believed to have the potential to radically change modern medicine. Medical AI systems are being developed to improve the diagnosis, prediction, and treatment of a wide array of medical conditions. AI is expected to enable more accurate and efficient diagnosis of diseases and “to restore the precious and time-honored connection and trust – the human touch – between patients and doctors” (Topol, 2019, p. 18) by allowing health care professionals to spend more time with their patients. Sophisticated self-learning AI systems that do not follow predetermined decision rules – often referred to as black boxes (Esteva et al., 2019; Shortliffe et al., 2018) – have spawned philosophical debate: the black-box nature of these systems is believed to pose a major ethical challenge for their use in medicine, and it remains disputed whether explainability is philosophically and computationally possible. This special issue focuses on the ethics and epistemology of explainability in medical AI, broadly construed.
