Through this blog series, we are exploring what the world can look like when technology is designed and deployed for the benefit of all. These broad, near-future speculative pieces are designed to de-center dominant narratives and challenge us all to realize that things can be different. These are not alternative realities; they are possible futures.
Illustration by Sylvia Pericles.
In a cozy room filled with natural sunlight and adorned with calming art, a team of content moderators begins their day with a mindfulness session. Each desk in the room has a red button that moderators can press for immediate help when the content becomes too distressing.
We imagine this future because one of us has lived its opposite.
I, Fasica, was a content moderator during the Tigray conflict, reviewing thousands of images and videos documenting unspeakable violence, some from places I knew, some that mirrored the traumas I fled. I had escaped Ethiopia in fear, only to relive the war frame by frame, day after day, through a screen in Nairobi. I remember the suffocating heat of the metal-roofed warehouse, the ineffectiveness of the therapy, the emotional exhaustion that followed me home. I was tasked with shielding the world from horror, but no one was shielding me.
Every day, content moderators, who essentially serve as guardians of the internet, work through thousands of graphic images, videos, and texts depicting some of the world’s most disturbing acts. The content my colleagues and I filtered included, but was not limited to, sexual assault, child abuse, violent executions, self-harm and suicide, and extremely gory footage from wars and conflicts. Without effective moderation, social media platforms can become toxic environments that play a significant role in fueling conflicts and genocides. Facebook, for instance, played a crucial role in the incitement of violence during the 2016-17 genocide of the Rohingya in Myanmar. More recently, social media platforms fanned the flames of the 2020-2022 war in Ethiopia’s Tigray region, which led to the genocide of Tigrayans. The stakes are clear: content moderation saves lives. But who’s saving the moderators?
The future we want is one where content moderators are not just paid, but paid wages that reflect the emotional weight of their work. In this future, mental health care isn’t an afterthought; it’s woven into the structure of the job. Support doesn’t come from one overworked psychologist, but from a culturally competent care collective trained in diverse mental health approaches: trauma-informed therapy, political grief counseling, and digital burnout recovery. Therapy sessions are flexible, offered in the moderator’s preferred language, and can take place in calming, non-clinical environments: rooftop gardens, community wellness centers, or even virtual reality (VR) nature sanctuaries. Moderators are encouraged to take ample breaks between work sessions, with their computers turned off so that they fully disconnect. These breaks are spent in comfortable spaces designed for solitude, group discussion, and relaxation, with options for gaming activities to unwind.
In this future, content moderation is not shamed or hidden behind non-disclosure agreements (NDAs), but recognized as a vital and respected occupation. Moderators are not discouraged but actively encouraged to have open conversations about the emotional toll of their work with family and friends. Across the world, content moderators are celebrated for their roles as digital caretakers through annual and seasonal awareness campaigns organized primarily by the social media platforms themselves. Their labor is no longer hidden or stigmatized, but seen as essential to public safety, as vital as the work of first responders like firefighters, doctors, nurses, and crisis workers who provide critical community services.
This is a future where social media platforms invest deeply in trauma-informed, culturally grounded technology built not to replace moderators, but to support them, especially in contexts of genocide, displacement, or mass violence. For instance, an automated severity identification system could categorize posts with labels like “graphic,” “textual-abuse,” or “violent-video,” and stagger how posts are routed to moderators so that no one is consistently exposed to the most harmful material. The routing system would give moderators time for mental preparation before handling graphic content, and tasks would be distributed based on time on shift, recent exposure, and psychological readiness.
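To make the idea concrete, here is a minimal sketch of what such a routing layer could look like. Everything in it is hypothetical: the severity labels, weights, thresholds, and field names are illustrative assumptions, not any existing platform’s system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical severity labels and weights, mirroring the categories above.
SEVERITY_WEIGHT = {
    "textual-abuse": 1,
    "graphic": 2,
    "violent-video": 3,
}

@dataclass
class Moderator:
    name: str
    shift_start: datetime
    recent_exposure: int = 0   # weighted severity handled so far this shift
    on_break: bool = False     # set True when the red button is pressed

@dataclass
class Post:
    post_id: str
    severity_label: str        # e.g. "graphic", "textual-abuse", "violent-video"

def route_post(post: Post, moderators: list[Moderator],
               max_exposure: int = 10,
               min_shift_minutes: int = 15) -> Optional[Moderator]:
    """Pick the moderator with the lowest recent exposure who is not on break
    and has had time to settle into their shift before seeing severe content."""
    weight = SEVERITY_WEIGHT.get(post.severity_label, 1)
    now = datetime.now()

    def eligible(m: Moderator) -> bool:
        if m.on_break or m.recent_exposure + weight > max_exposure:
            return False
        # Give people a buffer at the start of a shift before severe content.
        if weight >= 2 and now - m.shift_start < timedelta(minutes=min_shift_minutes):
            return False
        return True

    candidates = [m for m in moderators if eligible(m)]
    if not candidates:
        return None  # hold the post in queue rather than overload anyone

    chosen = min(candidates, key=lambda m: m.recent_exposure)
    chosen.recent_exposure += weight
    return chosen
```

A real deployment would rely on far richer signals (self-reported readiness, wellness check-ins, team capacity), but even a simple weighted queue captures the principle: the system, not the person, carries the burden of deciding who sees the worst content next.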
Platforms would have teams of dedicated human-computer interaction researchers, psychologists, and machine learning practitioners aiming to understand the tools moderators need. These researchers would understand cultural differences and nuances and ensure that each deployed tool appropriately serves moderators from specific communities. Content moderation teams in Africa, for instance, would work with African researchers to build graphic content identification tools, keeping in mind the contextual values of the specific communities in question.
We imagine a future that prioritizes mental and psychological health, is filled with community validation, and is supported by technological interventions. Such a future isn’t only beneficial to content moderators, but to everyone who uses social media platforms, and to our societies, which are heavily impacted by the content spread on these platforms.
This future is possible, and it is the bare minimum of what is owed to the people who hold back the digital tide of violence. If social media platforms can afford to invest billions in mass surveillance algorithms, they surely have the funds to build a sustainable community of moderators who can do their jobs safely, healthily, and with pride.