The TESCREAL Bundle

Big tech companies are throwing money at artificial general intelligence (AGI), with CEOs telling us that it will come, bringing with it utopia, any day now (for the measly price of SEVEN TRILLION DOLLARS according to Sam Altman). How did AGI come to be a dominant goal in AI? What motivates the drive to build it? This page is dedicated to answering these questions, starting with our paper, "The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence," which you can read on First Monday.

Image: Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0

Read the paper on First Monday

In this paper, we argue that the normative framework motivating the pursuit of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of "safety" and "benefiting humanity" to evade accountability.

What is effective accelerationism?

Paper co-author Émile P. Torres describes effective accelerationism in this article for Truthdig, highlighting its contrasts with effective altruism and observing that ultimately, "their respective visions of the future are more or less identical."

Doomers vs. accelerationists

Ethan Zuckerman writes for Prospect Magazine about the TESCREAL bundle of ideologies, the differences between doomers and accelerationists, and how these worldviews of the powerful overshadow more important conversations.

"Whoever’s worldview is embedded in ChatGPT and other AIs could have a lasting impact, for good or ill," he concludes. "If the principles of justice, equity and representation were governing the conversation, we would be paying more attention to whom AI systems include and exclude."