
Community-rooted research practices
What does it mean to conduct AI research that works for communities? Read DAIR's Research Philosophy to learn more.
Creating natural language processing tools and using mixed-methods (quantitative and qualitative) approaches to analyze the impacts of social media platforms on neglected countries and languages. Watch our short documentary on the topic.
Analyzing the impacts of South African apartheid using computer vision techniques and satellite imagery. Read our NeurIPS paper and this MIT Tech Review article, and check out our dataset and visualizations.
AI is fueled by the artists, writers and “zombie trainers” (see below) supplying training and evaluation data without consent or compensation. Vulnerable populations like refugees and gig workers are hired as data workers under exploitative working conditions (see Data Workers Inquiry). These same vulnerable populations are also the first to experience the harms of AI systems, whether it is through automated weapons or surveillance systems (see Surveillance Watch). This page has more of our research on the inequities fueled by AI systems.
Image: Leo Lau & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Uncovering the hype behind AI and what we can do about it. In The AI Con, Emily M. Bender and Alex Hanna offer a sharp, witty, and wide-ranging take-down of AI hype across its many forms.
Guidelines for ethical data annotation and release practices without exploiting workers and stealing data.
Creating guidelines for designing AI systems that center the needs of specific communities.
Enabling workers to fight back against AI and automation at work.
Stay tuned for tools, resources, and political education for both unions in the contract bargaining process and workers who want to push back against automation technology.
Showing the extent to which vulnerable populations like refugees are both exploited laborers fueling “AI” systems and those most harmed by them. Read our papers and articles to learn more.
An internet that benefits our future is one that enables everyone to contribute to the richness of our global conversation.
Analyzing the history of Black protests in the US and Canada using machine learning methods. Read our papers in Sociological Science and Mobilization to learn more.
Focusing on how algorithmic management and workplace surveillance allow corporations like Amazon to steal wages, intensify poor working conditions, and evade responsibility.
“Turkopticon considers this to be a step forward in the fight against unfair practices leveraged by companies like Amazon, Google, and OpenAI in their shadow workforce,” said Krystal Kauffman, lead organizer with Turkopticon.
Examining who benefits from "AI" systems and who is harmed (e.g. read AI for Whom?, AI Art and Its Impact on Artists, AI and Inequality in Hiring).
Artificial Intelligence has too much hype. In this stream, linguist Prof. Emily M. Bender and sociologist Dr. Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They're joined by special guests and talk about everything, from machine consciousness to science fiction, to political economy to art made by machines.