Medical uncertainty and AI: ethics and practice
Professor Antoniya Georgieva, Nuffield Department of Women’s and Reproductive Health.
Medical uncertainty is an innate and inevitable feature of medical practice. And yet, healthcare professionals (HCPs) are often reluctant to admit medical uncertainty to their patients for fear of losing their trust. Dealing with uncertainty in a way that preserves patient trust is particularly important in the context of maternal and perinatal care, where medical decisions can affect the life and health of two individuals simultaneously: the mother and the child.
Medical tools and technologies aim to reduce uncertainty and assist HCPs in treating patients successfully. In recent years, artificial intelligence (AI) and machine learning (ML) applications have promised to resolve uncertainty by being more efficient than human HCPs at utilising big data and "keeping up to date" with new research and publications.
The implications of AI and ML tools that can provide more accurate predictions may be significant for HCPs. On the one hand, such tools could cause HCPs to lose trust in their own clinical judgement and 'intuition', eventually leading to their deskilling. On the other hand, they could give rise to a new type of (trust) relationship between HCPs and patients, especially regarding the understanding and management of risk.
This project aims to investigate the concept of medical uncertainty and how the introduction of AI and ML might affect its ethics and practice on the ground, specifically in the context of maternal and perinatal health.
The student will conduct a conceptual investigation of the notion of uncertainty in the medical context. They will use theoretical and empirical literature to critically examine how uncertainty is perceived and managed in medicine, and its relationship with concepts of trust and professionalism. The student will also conduct qualitative research with HCPs to examine and analyse their views on managing and communicating uncertainty and risk, and their beliefs and perceptions regarding the role of AI and ML in dealing with medical uncertainty.
RESEARCH EXPERIENCE, RESEARCH METHODS AND TRAINING
The project will provide a range of training opportunities in empirical bioethics research methods, including literature review, conceptual ethical analysis, qualitative research, and data analysis.
FIELD WORK, SECONDMENTS, INDUSTRY PLACEMENTS AND TRAINING
Qualitative interviews will be conducted with practising healthcare professionals, some of whom may have experience of using AI and ML in their practice, as well as with scientists who are developing AI and ML algorithms for use in maternal and perinatal healthcare.
This project would suit a candidate with a background in bioethics/applied ethics or the social sciences (sociology or Science and Technology Studies) who wishes to develop expertise in the field of empirical bioethics and has an interest in AI, new technologies, and healthcare.