Informative Article
Should social media corporations and tech giants be free to ban and restrict users, letting them rid their platforms of trouble, or is that much power a danger to users, especially as society becomes increasingly connected to the internet?
by Maanas Shah
Free speech on the internet and the powers of tech companies have been hotly debated topics since the platforms' creation. Though these companies' actions are legal in the eyes of the United States, many people claim that their ability to silence users is unconstitutional and unjust. Still, many others believe that these abilities are necessary to maintain a healthy and safe social space, and that the companies' actions are justified. However, as more and more people become dependent on the internet, an important question comes into focus: Is it safe to allow these massively influential companies to have this much say over so many people?
Almost 80 percent of Americans have a social media account, and with only a few brands controlling almost all of these sites, those few companies hold an incredible amount of power over much of the American population. Recently, we have seen an influx of bans issued by these companies against individuals, and even influencers, and while most of the banned content was allegedly hate speech, some of it was simply criticism or unfavorable content that these sites wanted gone.
All these arguments and perspectives join together to form the central question of this article: should tech companies be allowed to ban users from their platforms, especially based solely on what they say?
Banning users should be allowed:
As more and more people flock to social media to communicate, post news stories, and discuss events, it is essential that the companies in charge retain the ability to moderate their platforms. More users means more people who can potentially harm the platform and its community by using it for malicious ends. To keep these forums a safe environment for everyone and to counter those trying to use them for harmful purposes, it is understandable that offending users would need to be banned, as this is the only way to maintain a relatively peaceful social environment with so many people involved.
Secondly, many people point out that banning users goes against the First Amendment (the right to free speech). However, this is not true, for two distinct reasons:
The First Amendment has limitations. Speech that incites or can cause danger, such as threats, is still illegal. For example, one cannot falsely scream "fire!" in a crowded public area, because it could incite panic and danger.
Social media platforms like Twitter, Facebook, and YouTube are all private corporations and are under no obligation to allow their users to do whatever they please. If you choose to use their platform, you must agree to abide by their rules, just as you must follow the rules of any other establishment, group, or building you enter in the real world.
Therefore, it makes sense for companies to ban users based on their own rules and values.
Banning users should not be allowed:
As stated before, a large majority of the American population uses social media, and these platforms are all run by huge oligopolies [a handful of companies that completely control a certain market] which likely exert other influence over users' lives as well (i.e. through their phones or smart assistants). These massive corporations (Google, Facebook, Twitter, etc.) already have a lot of power over the average American, as they are the go-to sources for anything technology-related. Allowing these corporations to ban whoever they want, whenever they want, would simply add to their reach and power. And with any corporation, especially monopolies and oligopolies, too much power can be damaging to the livelihoods of their consumers.
A more specific danger, however, is corporations overstepping their bounds by using this ability to ban users freely. It is one thing to ban users for hate speech, which is perfectly understandable, but who should actually get the final say on what counts?
In this scenario, it is the company that gets the say, but why should it have that authority? Perhaps the greater issue is not that companies can ban users for breaking their rules, but that they can set the basis of those rules to be whatever they want. We have already seen many instances in which influencers have been banned or censored simply for creating content that the site did not approve of. This "inappropriate" content can range from mere swearing to critiques of the site itself. For example, on YouTube many famous stars struggle to monetize their videos, as anything the platform deems not advertiser-friendly is at risk of demonetization, and even deletion in some cases. This often forces even content made for mature audiences to be censored heavily, since even a stray swear could put the creator at risk, which is incredibly unfair, especially for those who have adults as their audience and rely on the site as their job. Of course, we also see many examples of criticism against these social platforms themselves being taken down or censored on almost every platform, from Twitter to YouTube to Twitch. These are just a few examples of the abuse of power that can come from allowing social media sites to be judge, jury, and executioner over whom they ban.
A middle ground:
Each side of the argument tries to protect the average user, though with differing ideas of how. Because both sides share a common goal - protecting users - is it possible to find a middle ground between the two stances?
Well, knowing that hate speech and threats are a guaranteed risk on major social media forums, it would certainly benefit the majority of users to allow the removal of these toxic accounts. What is not beneficial, however, is a user being banned for no legitimate reason. To solve this issue, social media sites could implement a stronger appeal process. Almost all of these sites suffer from a weak ban-appeal system, which often leaves those who were falsely banned unable to get themselves unbanned: the process is too long, overly complicated, or just plain ineffective. A stronger system would allow those who were unjustly banned to argue their case against the ban, while still ensuring that those who post hate speech and other inappropriate content are removed. In this way, the benefits to users from both sides could be realized in a more moderate way.