The Exploited Labor Behind AI
Supporting transnational worker organizing should be at the center of the fight for “ethical AI.”
AI is fueled by artists, writers, and “zombie trainers” (see below) who supply training and evaluation data without consent or compensation. Vulnerable populations such as refugees and gig workers are hired as data workers under exploitative conditions (see Data Workers Inquiry). These same populations are also the first to experience the harms of AI systems, whether through automated weapons or surveillance systems (see Surveillance Watch). This page collects more of our research on the inequities fueled by AI systems.
Image: Leo Lau & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
A new type of “zombie trainer” has emerged: people who serve as data labelers, content moderators, or image data collectors without their knowledge.
The impact of “AI Art” on artists and what to do about it. Also see the Spanish version translated by Arte es Ética.
This paper provides a field scan of scholarly work on AI and hiring, surveying the inequalities that arise along a number of dimensions.
This paper investigates outsourced machine learning data work in Latin America by studying three platforms in Venezuela and a business process outsourcing company in Argentina.
The essays and case studies in Resisting Borders and Technologies of Violence (Haymarket, 2023) shed light on the high-tech system of borders being developed around the world and share inspiring stories of resistance to it.
Edited by Mizue Aizeki, Matt Mahmoudi, and Coline Schupfer, with a chapter from our own Marwa Fatafta.
Antony Loewenstein discusses the technology developed through experimentation on Palestinians and exported around the world. Find his book on the topic at Verso Books, and read this interview by Thomas Le Bonniec (available in French and English).
Petra Molnar writes about tech-enabled border surveillance. She also spoke about her work in an appearance on Mystery AI Hype Theater 3000!
In this 2019 book, Ruha Benjamin shows how a range of discriminatory designs encode inequity: by explicitly amplifying racial hierarchies, by ignoring and thereby replicating social divisions, or by aiming to fix racial bias but ultimately doing quite the opposite. Race After Technology offers conceptual tools for decoding tech promises with sociologically informed skepticism.