PRO: Banning from social media platforms is warranted, though more regulations are needed
In what is a fitting end to his presidency, Donald Trump has been banned from just about every major social media platform in existence.
Following a Twitter ban on Jan. 8, Trump has had posts and accounts related to him removed from major social media sites such as Facebook and Instagram. He’s also been banned from YouTube.
Many people are concerned about their privacy rights and freedom of speech being threatened by private companies outside the government’s control. Even major news sources such as the Washington Post are running headlines like “Trump’s removal from social media was warranted — but arbitrary.”
If there is one thing I am in favor of, it’s the right to free speech and expression. Even if voiced in a hateful way, I’d much rather face hate head-on than have it fester beneath the surface of society permanently. But we face a new set of challenges with the internet, where harmful rhetoric stays up permanently for billions of people to see.
Legally, Twitter and other social media platforms have the right to ban users and content that they deem excessively harmful.
In the 1919 Supreme Court case Schenck v. United States, the Supreme Court debated controversial aspects of the First Amendment. The justices created the “clear and present danger” test to tackle issues concerning national security.
Essentially, if the government thinks the words or actions of a citizen pose a “clear and present danger” to the country, the First Amendment is not a viable defense. In other words, though the Constitution lets you express yourself, you can’t yell “fire” in a crowded room or “bomb” on an airplane.
This obviously doesn’t apply to private companies like Twitter and Facebook, but Section 230 of Title 47 of the U.S. Code, which governs interactive computer services, covers that. It states, “No provider or user of an interactive computer service shall be held liable on account of… any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
So, in a legal sense, it’s well established that private social media companies have relatively free rein. But it is a tricky ethical situation. I agree with the Washington Post when it stated in an article covering Trump’s Twitter ban, “The two tweets that Twitter cited as sufficiently incendiary to justify Mr. Trump’s permanent ban had hardly more fire and fury to them than so many others he had gotten away with.”
There is a very murky line between what social media platforms deem worthy of staying up and worthy of removal at any given time. Though they acted in response to the Jan. 6 riot, it couldn’t help but seem like they were simply matching the energy of the outrage toward the attack. There need to be clear-cut boundaries set by the government for what counts as hateful speech, and for how many infractions are allowed before accounts are removed from social media sites.
When only a handful of private companies enforce these restrictions, it leads to higher concentrations of harmful rhetoric on other platforms. Mark Weinstein, the head of alternative social media platform MeWe, reported that its user base more than doubled throughout 2020, and he said in an interview with NPR, “It’s the middle of January, and we’re already over 15.5 million [users].”
Many fringe right-wing groups and Trump supporters, who are disillusioned with Facebook’s banning of misinformation, have moved to MeWe and other sites that are dedicated to privacy and staying out of users’ content and data.
When there are internet platforms like this that serve as “safe havens” for harmful speech and misinformation, it’s an even worse situation than outright violations of the First Amendment. I said before that I’d rather face hate head-on than have it fester somewhere out of sight, but platforms like these do exactly that – let hateful, misinformed individuals gather and build their harmful rhetoric with nobody to confront the issue.
If I were the government (in all my 17 years of glory), I’d amend Section 230, with a board of representatives from private social media companies offering their input. I’d set strict boundaries for how and when platforms and content can be banned from social media. Every platform and site should be in sync with the others going forward to prevent misinformation and confusion regarding what is or isn’t legal online.
Though I believe strongly in the First Amendment, and I see problems with the current system for weeding out content to ban on social media platforms, Twitter and other companies were correct in banning Trump from their sites. This outline for banning users, going forward, should be made concrete and adhered to by every platform with a sizable user base.
At the end of the day, whether or not we agree with the system in place, banning users from social media comes down to whether violence or harm came as a direct result of the content. If it did, I see banning those users as warranted.