Dr. Thomas Grote
On the Ethics of Algorithmic Decision Making in Healthcare
In recent years, a plethora of high-profile scientific publications has reported on machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has sparked interest in deploying such algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms requires trade-offs at the epistemic and the normative level. Whereas it might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the groundwork for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.
Thomas Grote is a post-doctoral researcher in Tübingen at the “Ethics and Philosophy Lab” of the Cluster of Excellence “Machine Learning: New Perspectives for Science”. His research focuses on understanding the societal ramifications of machine learning, with an emphasis on issues at the intersection of ethics and epistemology. Before working at the Cluster, he defended his dissertation at the University of Würzburg at the end of 2015 and joined the ethics centre in Tübingen in summer 2016.