The increasing likelihood of pandemics highlights the need for superior tools at our disposal. By robustly and efficiently analyzing vast datasets, artificial intelligence (AI) has the potential to help decision-makers better respond to, manage, and even avert infectious disease outbreaks. However, these systems could also stigmatize, discriminate, exclude, exploit, and/or otherwise oppress vulnerable populations. In doing so, they could amplify allocative and representational harms. Given the possible far-reaching consequences, critical ethical reflection and oversight are essential. Such reflection would be incomplete without considering the impacts on queer people. From HIV/AIDS to COVID-19, outbreaks have disproportionately affected sexual and gender minorities (SGMs), reflecting a long history of structural oppression and injustices. AI could further exacerbate inequalities—like anti-queer bias—particularly amid the omission of marginalized and minoritized perspectives from algorithmic fairness efforts. Adopting an intersectional, reparative approach, this paper que(e)ries the use of AI for infectious disease surveillance purposes. Placing this technology within patterns of power, privilege, marginalization, and disadvantage, it interrogates how to achieve algorithmic justice for SGMs. It proposes concrete steps towards a reparative algorithmic praxis, including: (1) exploring how these systems reproduce inequalities, (2) centering sexual and gender diversity to disrupt problematic epistemic positions, and (3) combating opacity through participatory governance mechanisms. This work is necessary to understand how AI systems reproduce major health disparities and hold them accountable. By contemplating how to begin redressing harms, it offers a starting point for further deliberation and action towards inclusive, justice-oriented algorithmic systems in practice. I anticipate these lessons being deeply transferable across contexts.

Original publication

DOI

10.1177/20539517241289440

Type

Journal article

Journal

Big Data and Society

Publication Date

01/01/2025

Volume

12