Statement from the listed authors of Stochastic Parrots on the “AI pause” letter

Timnit Gebru (DAIR), Emily M. Bender (University of Washington), Angelina McMillan-Major (University of Washington), Margaret Mitchell (Hugging Face)

March 31, 2023

TL;DR: The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability, and preventing exploitative labor practices.

[Image: an underwater photograph of a bright blue and yellow tropical fish alongside two decision-tree simplifications of it. Source: Rens Dimmendaal & David Clode / Better Images of AI / Fish reversed / CC-BY 4.0]

On Tuesday, March 28, the Future of Life Institute published a letter asking for a minimum six-month moratorium on "training AI systems more powerful than GPT-4," signed by more than 2,000 people, including Turing Award winner Yoshua Bengio and one of the world’s richest men, Elon Musk.

While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media, these are overshadowed by fearmongering and AI hype, which steer the discourse to the risks of imagined "powerful digital minds" with "human-competitive intelligence." Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people, which exacerbates social inequities.

While we are not surprised to see this type of letter from a longtermist organization like the Future of Life Institute, which is generally aligned with a vision of the future in which we become radically enhanced posthumans, colonize space, and create trillions of digital people, we are dismayed to see the number of computing professionals who have signed this letter, and the positive media coverage it has received. It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a "flourishing" or "potentially catastrophic" future [1]. Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media. This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders.

What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that the builders of these systems should be made accountable for the outputs produced by their products. While we agree that "such decisions must not be delegated to unelected tech leaders," we also note that such decisions should not be up to the academics experiencing an "AI summer," who are largely financially beholden to Silicon Valley. Those most impacted by AI systems should have a say in this conversation: the immigrants subjected to "digital border walls," the women forced to wear specific clothing, the workers experiencing PTSD while filtering the outputs of generative systems, the artists seeing their work stolen for corporate profit, and the gig workers struggling to pay their bills.

Contrary to the letter’s narrative that we must "adapt" to a seemingly pre-determined technological future and cope "with the dramatic economic and political disruptions (especially to democracy) that AI will cause," we do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate. We should be building machines that work for us, instead of "adapting" society to be machine readable and writable. The current race towards ever larger "AI experiments" is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.

It is indeed time to act, but the focus of our concern should not be imaginary "powerful digital minds." Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.

---------

[1] We note that eugenics, a very real and harmful practice with roots in the 19th century and running right through today, is listed in footnote 5 of the letter as an example of something that is also only "potentially catastrophic."