Social media is a cornerstone of freedom of speech, granting people the sacred opportunity to express themselves with minimal restriction. Freedom of speech is not often thought of as a double-edged sword, for we as a society have had the misfortune of becoming all too familiar with the dire ramifications of silencing people’s voices. However, refusing to consider the nuances of freedom of speech is perilous, leaving us at risk of disregarding the cases where words are far deadlier than silence.
Social media platforms have evolved into a weapon of choice for extremist groups, who have engulfed them in a wildfire of hate speech and misinformation, fanning the flames of violence across the globe. Social media has handed extremist groups free access to millions of users, making recruitment and the spread of hateful rhetoric dangerously easy. Especially with a platform such as Facebook, on which over one-third of the world’s population is active, spreading and popularizing divisive narratives has never been easier.
The exploitation of mass media by extremist organizations is not a novel phenomenon; after all, mass media, which encompasses print, news, photography, cinema, broadcasting, and digital platforms, wields a power that is certainly not subtle. The grasp that mass media has over people is a strikingly apparent force, with a large portion of the population relying on media platforms for news, life advice, and, distressingly, political ideals. A force of such strength will inevitably attract those hungry for influence.
For instance, perpetrating the Rwandan Genocide right alongside the Hutu extremists was the radio.
Before and during its colonial era, Rwanda had an ethnic hierarchy that enforced a social divide between the ethnic majority, the Hutus, and the ethnic minority, the Tutsis, who were deemed “superior.” Once Rwanda achieved its independence in 1962, the ethnic hierarchy with Tutsis on top met its end as Hutu groups, fueled by decades of ethnic grievances, organized violent efforts to dismantle it. However, a national narrative villainizing the Tutsis did not appear overnight; the rampant spread of malicious propaganda gradually entrenched it in the minds of the Rwandan people.
Hutu extremists saw no better avenue for propagating waves of abhorrent propaganda than the radio, which already reached millions of Rwandans. Soon daily broadcasts were spewing calls for violence and twisted justifications for genocide against the Tutsis. In 1994, those broadcasts helped spark 100 days of violence against the Tutsis that left an estimated 800,000 to one million people slaughtered.
History is riddled with moments where a few words of hate have cost millions of lives. These moments should be deemed cautionary tales, painfully apparent warning signs that foreshadow the precise ramifications of unchecked hate speech. Yet there is a reason history is bound to repeat itself: society seems to have a disdain for learning from its mistakes.
Today, the plague of hate speech continues to sweep across the globe, now horrifyingly amplified by social media. Countries already suffering from internal conflict and division, notably Myanmar, Sri Lanka, and Ethiopia, are at the forefront of the damage.
Facebook has been alarmingly complicit in assembling a multilayered nationalist narrative of hate against the Rohingya people in Myanmar. Indeed, Bhaswati Bhattacharjee, the head of the Myanmar task force at Genocide Watch, warns that “Facebook runs the risk of being in Myanmar what the radio was in Rwanda.”
For many years, Facebook has enabled the unchecked spread of hate speech, disinformation, and incendiary rhetoric targeting the Rohingya people, much of it propagated by Burmese military officials and nationalist organizations. Facebook has transformed into a breeding ground for overtly anti-Rohingya sentiment, emboldening dehumanizing narratives about the Rohingya community. Online anti-Rohingya and anti-Muslim propaganda has incited countless genocidal acts in Myanmar, including mass killings, rapes, and displacement.
In 2017, Senior General Min Aung Hlaing, the head of Myanmar’s military, wrote in a Facebook post, “We openly declare that absolutely, our country has no Rohingya race.” Later, in 2021, the same general staged a successful coup, overthrowing Myanmar’s democratically elected government and leaving the country in the tainted hands of the military. Even when hate speech is born in the online world, its consequences manifest in the real one.
It is unclear when, or if, the post was ever taken down, an oversight largely attributed to Facebook’s utter lack of resources devoted to enforcing its own hate speech policies in Myanmar. “In 2015, only two Facebook employees spoke Burmese even though that year Facebook had some 3.7 million active users in Myanmar. Audio reviews were handled by English speakers, who had no idea how to speak the [local] language [in Myanmar],” Bhattacharjee said, highlighting the ineffectiveness of Facebook’s approach to the online crisis in Myanmar.
The companies running these platforms should be eager to halt this online arena of terror; yet, contrary to expectation, expanding it appears to be at the top of many companies’ agendas. Indeed, the crux of the problem of online hate speech is built into the very algorithms that construct social media feeds.
For social media companies, views are synonymous with profit: the more views a post receives, the more revenue it generates. As a result, algorithms are programmed to prioritize engagement, and they consequently favor the sensational, inflammatory content that best captures user attention. There is, however, a thin line between engaging and strikingly disturbing, and algorithms conveniently seem to have a blind spot for it. Categorized as “engaging” by these faulty algorithms, extremist content often ends up swarming feeds, opening the floodgates for the indoctrination of users into prejudiced mindsets.
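To make the incentive concrete, here is a minimal sketch, in toy Python, of how an engagement-driven ranker works. Everything in it, the post fields, the weights, and the scoring formula, is an illustrative assumption; real platform ranking systems are proprietary and vastly more complex, and nothing below reflects Facebook’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # hypothetical engagement predictions,
    predicted_shares: float    # e.g. from a trained model
    predicted_comments: float

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted more heavily than clicks because
    # they spread content further; these weights are made-up assumptions.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement goes to the top of the feed. Nothing
    # in this objective distinguishes "engaging" from "inflammatory":
    # whatever provokes the most reactions wins.
    return sorted(posts, key=engagement_score, reverse=True)
```

The sketch’s point is structural: the scoring function contains no term for harm, so a post that provokes outrage is rewarded exactly like one that provokes delight.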
This does not mean that the blame for the epidemic of online extremism should be shifted onto the algorithms, which are, in actuality, nothing but unconscious systems of artificial intelligence. Accountability should always be the burden of the conscious bodies: the social media companies who meticulously curated their algorithms to value profit over people’s safety.
However, the chances of big tech companies holding themselves accountable at the risk of losing profit are slim at best. The task of serving justice to the victims of online hate falls almost entirely to other organizations and perhaps the general public, though, unfortunately, in the United States such a task is virtually impossible.
Ideally, the act of fueling mass atrocities would be easily punishable under the law, but in reality, holding a U.S. company liable for user content posted on its platform is forbidden under Section 230 of the Communications Decency Act. Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Though this legal protection was initially crafted to foster innovation, it has evidently yielded unforeseen consequences, shielding companies that capitalize on hate.
Since one cannot sue a company for facilitating the posting of hate speech, the only feasible legal recourse is to pursue a lawsuit against the individual user responsible for publishing it, another near-impossible task. Tracking down the user behind a given online post is tedious and impractical, as posters often hide behind the shield of anonymity. Moreover, targeting users individually is an endlessly costly process that sets no global precedent of corporate accountability, while targeting a massive tech company would.
Even in the face of a seemingly unfeasible task, it must be understood that there is still hope and opportunity for action. In 2021, the Rohingya, backed by law firms in the United States and the United Kingdom, filed a class-action lawsuit against Facebook seeking $150 billion in damages for its alleged failure to remove anti-Rohingya hate from its platform.
Obstacles will inevitably arise in the fight to counter hate speech, but Bhattacharjee warns, “Violence always incites more violence. If we do not check back for the use of hate speech, we’re asking for an eternal cycle of violence,” making it imperative that we overcome them.
Holding companies liable for the content on their platforms is no longer solely a matter of the ethics of moderation but one of lives. We must demand responsibility from social media companies whose reach is global but whose oversight is tragically inadequate.