Calls for new metrics, technical standards, and governance mechanisms to guide and evaluate the adoption of ethical Artificial Intelligence (AI) in institutions are now commonplace. Yet most research and policy efforts do not fully account for the different approaches and issues potentially relevant to the institutional adoption of AI. In this position paper, we contend that this omission stems, in part, from what we call the 'relational problem': the persistence of differing value-based terminologies for categorizing and assessing institutional AI systems, and the prevalence of conceptual isolation among the fields that study them, including machine learning, human factors, and social science. After developing this critique, we propose a basic ontological framework to bridge ideas across fields, consisting of three horizontal, discipline-agnostic domains for organizing foundational concepts into themes: Operational, Epistemic, and Normative.


Conference paper