Social media is an entirely new dimension with its own distinct properties and governing principles.
Most importantly, it is democratic and accessible by nature. It puts the entire world at users' fingertips and lets them speak for themselves in a radically free way. Anyone can publish almost anything anywhere for anyone to see.
Given how cruel our world is, this has disastrous implications.
When the potential consequences of social media usage are discussed, psychological effects are often at the forefront. While that focus is warranted, it misses the larger picture in its attempts to document and predict the adverse side effects of the entirely new social ecosystem we’ve created.
There’s been a lot of discussion about social media censorship and moderation and how they tie into our universal human fascination with the taboo and morbid. We’ve observed the results of open communication and discussion veiled by near-total anonymity, and we continually mull over how best to protect free speech and freedom of expression.
We’ve seen firsthand how these issues have spilled into and impacted the real world, and while they are extremely important and pressing to scrutinize, it’s alarming that we haven’t stepped back to see the dystopia unfolding right under our noses.
Our admittedly limited exposure to humanity’s worst online is made possible in no small part by the individuals subjected to it for their livelihoods. We are only now beginning to see the detrimental effects of the cruel, unintentional psychological experiment that is social media.
The average person will most likely not encounter offensive imagery, violent crimes or other forms of brutality and cruelty while casually scrolling unless they happen to seek it out. There are exceptions, but by and large, we are made far safer from traumatizing imagery and content than we would otherwise be.
In this, we are more privileged than we realize. Social media moderators, such as the ones employed by major companies like Meta, operate on the front lines of our collective battle and reckoning with free expression in a brutal digital world. Yet these moderators are hardly thought of or acknowledged.
They’re freedom fighters for democracy in their own right. Without them, our understanding of social media censorship, moderation and free speech would be radically different.
With time, the working conditions these moderators face will be regarded as a labor-rights tragedy akin to that of the radium girls.
Even considered at a surface level, it is horrifying at best and dystopian at worst. The large-scale employment of human beings to consume hours upon hours of footage depicting humanity’s worst capacities for abject violence and brutality is, on many levels, an Orwellian fever dream.
It’s appalling that we don’t obsessively investigate the adverse effects that the individuals who moderate these sites and apps suffer through. These individuals are underpaid, overworked and put at severe risk of post-traumatic stress disorder, all while receiving nothing close to adequate mental healthcare given their position.
The most consequential implications this brings forth are not discussed nearly enough and are instead relegated to the background. We collectively treat social media moderators as though they’re ghosts in the machine because those with a vested interest in hiding them have prevailed.
To add insult to injury, the system isn’t even close to effective. As recently as this February, many Meta users were subjected to a barrage of violent content, including footage of dead bodies. It’s a losing game and possibly one of the most significant issues our generation will need to confront as this technology evolves.
These companies are caught between their business strategy of getting as many eyes on their platforms as possible and protecting the interests of the public. People are generally drawn to macabre and sensational content, so it can be difficult for these platforms to draw the line between profitability and morality.
Carson Redmond, a fourth-year student at the University of Minnesota, said he’s observed a tendency for more controversial and sensational content to blow up.
“Stuff honestly gets more recognition when it is either upsetting or negative,” Redmond said. “It’s kind of the nature of things.”

If there is any job that should be replaced by AI, or at the very least heavily aided by it, it is this one. AI is already being integrated, but the overall consensus is that the human eye for subjective judgment is still very much necessary to protect the masses.
Yuqing Ren, associate professor of information and decision sciences at the University, said that while these workers’ treatment calls for more protection, their situation isn’t entirely unheard of and mirrors what mental health workers face, meaning there is potential for reform.
“It’s definitely not a new problem,” Ren said. “If you think about mental health workers having basically, not necessarily seen the images, but if you think about the stories they hear from clients and their daily work, the sort of information they encounter in their work, it could also have somewhat similar effects, maybe not as extreme.”
Herein lies the key to solving the problem: the true gravity of this situation needs to be acknowledged and taken seriously. Many lawsuits have been brought against these companies, and more will almost certainly follow.
More attention needs to be paid to moderation in this internet Wild West, not only for the moderators’ sake but for ours as consumers.