Empathy, trust and AI in healthcare: the future of healthcare provision?
Artificial Intelligence (AI) is a fast-advancing technological field with far-reaching applications, including in healthcare. Machine learning (ML) algorithms are becoming increasingly capable of performing certain tasks more accurately and efficiently than human doctors and nurses, such as triaging patients in accident and emergency departments, classifying suspicious skin lesions, and identifying pulmonary tuberculosis on chest radiographs. However, AI machines (still) lack certain moral qualities that seem important in the provision of healthcare, such as trustworthiness and the capacity for empathy and compassion.
This project will investigate how the introduction of AI technologies in healthcare might affect the ‘ethos’ of healthcare professions and professionals. In particular, it will examine how AI and ML might change the ways in which the moral values and principles of empathy, compassion and trustworthiness are perceived and understood in practice.
This project will employ empirical bioethics methods, which combine philosophical and ethical analysis with empirical research.
The student will conduct a philosophical and conceptual investigation of the moral notions of trust, empathy and compassion. They will engage with theories of medical professionalism and draw on theoretical and empirical literature to critically examine the role of empathy, compassion and trustworthiness in healthcare. The student will also conduct qualitative research to examine healthcare professionals' beliefs and perceptions regarding the meaning and value of empathy, compassion and trustworthiness in the provision of healthcare, and in relation to AI and ML.
RESEARCH EXPERIENCE, RESEARCH METHODS AND TRAINING
The project will provide a range of training opportunities in empirical bioethics research methods, including literature review, conceptual ethical analysis, qualitative research, and data analysis.
FIELD WORK, SECONDMENTS, INDUSTRY PLACEMENTS AND TRAINING
Qualitative interviews will be conducted with practising healthcare professionals, some of whom may have experience of using AI and ML in their practice, as well as with scientists who are developing AI and ML algorithms for use in healthcare.
This project would suit a candidate with a background in philosophy (moral philosophy or philosophy of science) or the social sciences (sociology or Science and Technology Studies) who wishes to develop expertise in empirical bioethics and has an interest in AI, new technologies, and healthcare.