Social media platforms have upended traditional modes of communication, expression, and participation in public discourse. But this power carries enormous responsibility, because these platforms also control what information is shared, removed, or promoted. As their influence grows, platforms such as Facebook, Twitter, Instagram, and YouTube have become both drivers of the public conversation and adjudicators of free speech. This raises a critical question: are social media companies merely gatekeepers that keep order online, or have they become a class of speech arbiter with the power to declare what is and is not acceptable?
These platforms are now central to how modern societies function, from political debate to entertainment and social connection. Their algorithms decide what millions of users see and read every day, yet operate with little transparency, a combination that grants social media companies more influence than most traditional media outlets.
Most importantly, their role as moderators is crucial in the context of harmful content. The internet is rife with hate speech, disinformation, terrorist propaganda, and calls to violence, and governments, organizations, and the public regularly press social media firms to address the problem. The challenge for these platforms is to strike the proper balance between removing harmful content and protecting freedom of speech.
One of the fundamental debates concerning social media companies is whether their removal or suppression of content amounts to censorship. Critics argue that these companies sometimes act as de facto editors, removing content that violates no legal threshold but fails to meet their community guidelines or political preferences. This has fueled allegations of bias in some quarters, with claims that platforms disproportionately target particular political views or ideologies.
For instance, Facebook and Twitter have drawn criticism from both sides of the political divide: some feel they have been too aggressive in censoring conservative voices, while others feel they have not done enough to curb hate speech and extremism. This highlights the complexity of moderating content on a global platform like Facebook, where one-size-fits-all rules sit uneasily alongside widely varying cultural norms and political climates.
But the central issue here is that of private companies controlling speech. Regulating speech has traditionally been the province of governments, which operate within legal frameworks defining what speech may be limited. Social media companies, by contrast, operate in a legal gray area, exercising enormous power over what speech is acceptable without the same public accountability.
The relationship between social media companies and governments is another major factor in the censorship debate. Governments sometimes lean on platforms to remove content deemed harmful or illegal. Germany's NetzDG law, for example, requires platforms to remove manifestly illegal content within 24 hours of it being reported. Similarly, the European Union's Digital Services Act holds platforms responsible for policing harmful content.
Such regulations can lead to over-censorship. To avoid legal consequences, companies remove large amounts of merely controversial content, at times without any clear legal basis. This can chill speech, particularly in countries where governments seek to control political discourse. In more extreme cases, social media companies may be forced to comply with strict censorship laws in countries such as Russia or China, or be banned from those markets altogether.
Beyond legal issues, there are ethical questions of responsibility and power, and of how these decisions affect society. Should private companies be allowed to de-platform individuals or groups, effectively silencing them on the world's largest communication channels? Should they be held responsible for content posted by users, or should they remain neutral, with no obligation to police speech?
Another critique of social media platforms is their lack of transparency in decision-making. Content moderation policies are often vague and inconsistently enforced, inviting accusations of bias and unfair treatment. Users can be banned, restricted, or have their posts removed without any stated reason, which breeds distrust of the platform's motives.
Moreover, these platforms operate worldwide, guided by rules that may not reflect the cultural or political realities of any particular place. A post acceptable in one country may be defamatory or illegal in another, making it practically infeasible for social media companies to enforce a single set of standards across their entire user base.
Given their reach and influence over public discourse, some argue that social media companies can no longer be characterized as mere gatekeepers but have become powerful arbiters of free speech. By determining what is allowed, promoted, or removed, they define the boundaries of acceptable discourse in ways unmatched by traditional institutions such as governments, courts, and media outlets.
The issue is further complicated by the growing reliance on algorithms to police content. Such algorithms typically operate opaquely, prioritizing virality and engagement over truth or accuracy, which tends to favor sensationalist or harmful content. Automated systems also frequently err, removing legitimate content or failing to flag genuinely dangerous posts.
This growing power raises a host of accountability questions. Unlike governments, social media platforms derive no mandate from the people and remain largely opaque in their decision-making. Even as they take center stage in shaping public discourse, their primary duty lies with their shareholders rather than the public good. This tension between profit and responsibility sits at the heart of the debate over their role in content moderation.
As social media companies continue to grow and evolve, the debate over their role in internet censorship will only intensify. Central to it is a single question: how can these platforms safeguard users from harm while upholding the tenets of free speech? Balancing these competing priorities will not be easy, and the future of online discourse may hinge on how well social media companies navigate the challenge.
Preserving free speech online is a critical task that also requires governments. Any regulation must be finely tuned to hold platforms accountable for their powerful positions while doing minimal damage to basic freedoms. If free speech is to endure in the digital era, governments, technology companies, and civil society must work together to keep the internet a place for open, diverse, and free expression.