The First Amendment of the United States Constitution guarantees the freedoms of speech, religion, press, assembly, and petition.
The Free Speech Clause prohibits the government from limiting an individual’s ability to express their beliefs, whether verbally or symbolically. The government cannot prevent you from expressing your beliefs unless that speech falls into one of a limited number of unprotected categories.
Brandenburg v. Ohio (1969) provided a new interpretation of the First Amendment; the classic example of unprotected speech is falsely yelling “Fire!” in a crowded theater. Clarence Brandenburg, a leader in the Ku Klux Klan, gave a speech at a Klan rally in Ohio, during which he made vague threats against government officials. He was convicted under an Ohio criminal syndicalism law, which prohibited advocating violence as a means of political reform. Brandenburg appealed, arguing that his speech was protected by the First Amendment. The Supreme Court ruled in his favor, establishing the “imminent lawless action” test, which protects speech unless it is directed to inciting immediate illegal activity and is likely to produce such action. In other words, speech intended to incite violence or panic, or “imminent lawless action,” is not protected.
Separate standards have also been established for speech in special settings, such as schools, allowing regulation that would not ordinarily be permitted. Tinker v. Des Moines (1969) set the governing standard for student speech. In December 1965, Mary Beth Tinker, John Tinker, and Christopher Eckhardt wore black armbands to their public school in Des Moines, Iowa, to protest the Vietnam War. The school district quickly adopted a policy prohibiting armbands, leading to the suspension of the students when they refused to remove them. The Tinkers sued the school district, claiming their First Amendment rights were violated. The Supreme Court ruled in a 7-2 decision that students do not “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate,” affirming that the armbands represented pure speech and were not disruptive to the educational environment.
The legal landscape surrounding free speech continues to evolve in how it applies to the public. While freedom of speech has always been a cornerstone of American democracy, questions have recently arisen about how traditional concepts of free speech translate to the online world of social media.
Trump and Social Media Bans
On January 6th, 2021, former President Donald Trump was accused of inciting the violent insurrection at the U.S. Capitol. This unprecedented attack followed a rally at which Trump delivered a speech reiterating false claims of a stolen election and urging his followers to “fight like hell.” The mob breached security, causing extensive damage, multiple deaths, and injuries. In the aftermath, Trump faced widespread condemnation, leading to his second impeachment for incitement of insurrection. The event also prompted major responses from social media platforms in an effort to prevent similar incidents in the future.
In response, various internet platforms, including YouTube, Snapchat, Instagram, Facebook, and his personal favorite at the time, X (formerly known as Twitter), banned his account. The first to rule with an iron fist, X permanently suspended Trump’s account on January 8th, 2021, citing the risk of further incitement of violence. X stated that Trump’s tweets violated its Glorification of Violence policy, creating an environment in which his supporters felt encouraged to act violently. Other social media platforms followed suit and suspended Trump’s accounts for similar reasons; Facebook’s Oversight Board upheld the suspension, noting that Trump’s posts during the Capitol riot praised the individuals who were engaging in violence.
Two years later, however, the former president’s bans from Instagram and Facebook were lifted. After changing its policy, Meta evaluated the current environment, including the conduct of the 2022 U.S. midterm elections and expert assessments of the security situation, and determined that the risk to public safety had sufficiently receded. When X first instituted its ban, conservative politicians and pundits were outraged. Many users called for more specific guidelines on social media content moderation, arguing that the bans on Trump, as well as on others alleged to have incited the violence of January 6th, violated the First Amendment right to free speech. In 2022, Elon Musk purchased X for $44 billion, citing a commitment to upholding free speech as a primary reason. Musk wanted to change content moderation on the platform and promote free speech on what he said had become “the de facto town square.”
Musk, a self-proclaimed free speech absolutist, immediately implemented several controversial measures. He laid off nearly half of the company’s workforce, including key executives responsible for content moderation and trust and safety. This restructuring led to a perceived reduction in oversight and an increase in harmful content on the platform. The use of racial slurs reportedly increased by nearly 500% within hours of Musk’s takeover, and similar spikes were observed in antisemitic, misogynistic, and transphobic language. Musk also introduced a new verification system that allowed users to purchase a verification badge for $8 per month. The system was quickly abused, resulting in numerous fake accounts and significant reputational and financial damage to companies on the platform.
Setting aside the mess caused by Musk’s actions at X, evidenced by the unprecedented increase in hate speech on the platform, we should focus instead on X’s emergence as a de facto town square. For the First Amendment, this has vital implications.
Spaces designated to serve as public forums, the sort of space Musk refers to when mentioning the town square, offer the strongest protection of our First Amendment rights. Traditional or quintessential public forums, such as sidewalks or city streets, allow individuals to freely express their opinions, with the government having limited authority to intervene as long as public safety is not compromised.
In the realm of social media, platforms like Instagram and Facebook count millions of users among their customer base. Services that were once used only for sharing photos or updates with friends have evolved into forums for significant public discourse. This prompts a natural question: are social media platforms evolving into the modern-day public forum?
Academics at the Knight First Amendment Institute at Columbia University have been working for years to establish just that, albeit in specific circumstances. Filed in 2017, Knight First Amendment Institute v. Trump (928 F.3d 226) involved a lawsuit by the Institute against then-President Trump, brought on behalf of seven plaintiffs whom Trump had blocked on X for criticizing him. At the time, Trump’s X account had 53.4 million followers, generated tens of thousands of retweets, and contained official statements from the President. The panel concluded, at 928 F.3d 237, that Trump had intentionally made his X account a public forum, so denying others the right to view and comment on it was unconstitutional; his actions therefore constituted viewpoint discrimination.
In an earlier case, Matal v. Tam, 137 S. Ct. 1744, 1758 (2017), the Supreme Court stated that it exercised “great caution” to prevent the government from “silenc[ing] or muffl[ing] the expression of disfavored viewpoints” under the guise of the government speech doctrine. The district court hearing the case against Trump followed this precedent, as did the Second Circuit Court of Appeals. The case reached the Supreme Court, but because Trump was no longer president, it was declared moot and the lower courts’ decisions were vacated.
Nevertheless, the case offers a framework for future disputes over whether social media platforms can serve as public forums in certain circumstances: when a public official posts online in their official governmental capacity, the forum they create is constitutionally protected. It is also worth noting that Knight Institute v. Trump (2017) dealt with, for lack of a better term, a forum-within-a-forum. In Knight, Trump’s lawyers argued that his profile was a private, non-regulable forum, and that Trump could therefore block individuals from his feed. But when Trump was banned from X, he and his followers claimed that X as a whole, as a platform, was a public forum that should not be able to ban individuals.
Private Enterprise and Public Forums
At first glance, the argument that social media should be treated the same way as a tangible public forum in the real world holds some appeal. After all, social media has become woven into the fabric of American society, especially during the COVID-19 pandemic, when physical gatherings were restricted. One can easily argue that there is no practical difference between a physical and a virtual public forum in terms of the function each performs in society.
On the other hand, a key distinction between social media and a literal town square is who created it. Social media platforms are creatures of private enterprise, and their prominence in American society does not change this fact. If private actors were held to the same free speech standard as the government itself, a dangerous precedent could be set. It is one thing to say that someone has the right to air their grievances in the town square; it is another to say someone has the right to air them on your front lawn. Unlike a literal town square, social media platforms do not have the benefit of tax dollars to support their upkeep; they rely largely on advertising for their revenues. Needless to say, manufacturers of household products are less than enthusiastic about having their advertisements appear alongside a post filled with hate speech, lest they be seen as endorsing it or having their product associated with it. For this reason, when Elon Musk loosened X’s policies restricting speech, advertising revenues dropped sharply. Meanwhile, former President Trump found a way around his X ban by launching his own social media site, Truth Social.
As most other social media companies tried to protect their revenue streams by maintaining policies curtailing speech on their platforms, red states like Florida and Texas passed laws in 2021 that, in the view of their proponents, championed free speech. Florida’s SB 7072 makes it illegal to ban a candidate for state office from social media, and Texas’s H.B. 20 prohibits large platforms from removing content based on the viewpoint it expresses. Industry groups sued to block these laws, and the cases were recently argued before the Supreme Court in Moody v. NetChoice and NetChoice v. Paxton. Critics view these laws as themselves undermining free speech, arguing that social media companies should have control over their platforms in the same way a newspaper controls what it prints. In other words, the cases reveal a tension between freedom of speech and freedom of the press. The Supreme Court’s decision is expected in June 2024.
Social Media and Traditional Limits on Speech
Another aspect of this debate worth noting is why First Amendment claims are made in the first place. In the past few years, calls for greater protection of free speech have often been attempts to shield violent, extremist, or hateful speech made online. Speech that involves a threat of violence or other imminent lawless action is not, and has never been, protected by the First Amendment, nor would it be if X were considered a quintessential public forum.
Therefore, when politicians and pundits cry out that their constitutional rights have been violated because of a social media ban, consider what speech was at issue. Over the years, the First Amendment has been interpreted as a defense of each individual’s ability to contribute to the general marketplace of ideas. But individuals egging on violent supporters do not contribute anything to the marketplace of ideas.
Conclusion
As it stands today, X, formerly known as Twitter, is a private enterprise, and the First Amendment does not apply to posts on the platform in the same way it does to a protest on city streets. Whether or not its users are government officials does not change the fact that X is a corporation, not a government agency. Courts have on occasion held that government officials’ social media pages were private rather than public forums. Thus, it remains to be seen whether the growing popularity of social media platforms as spaces for general public expression will lead to an expansion of First Amendment protections along with it.
Legislatures and courts must tread carefully before placing burdens on social media companies in the name of free speech under the First Amendment. To do otherwise would place at risk another foundational principle of American society: private enterprise and free markets. The popularity of any social media platform has a notoriously short shelf life; yesterday’s YouTube gave way to today’s TikTok. Rather than impose government restrictions on privately owned social media, we should let the free market that created social media solve the problem. Whether you like or hate Trump or what he stands for, his creation of a new platform, as many others have done, addressed the issue of X’s speech policies through the market rather than the courts. We must empower the free market to protect the marketplace of ideas so that diverse perspectives flourish and robust debate thrives.