Twitter: A platform for hate

Over the last few weeks, Twitter has rolled out updates for its users. It has raised the character limit of a tweet from the classic 140 to 280. Doubling the count will, in the minds of Twitter’s staff, enhance the user experience and let people express their thoughts with more nuance. Twitter has also increased the number of characters allowed in a username.

As enjoyable as these updates may be, Twitter has continued to ignore its users’ ongoing demands to tighten restrictions and build a more effective flagging system.

As it currently stands, Twitter has a vague set of policies that are only loosely enforced. Twitter nominally opposes hate speech and online bullying, but those policies have done next to nothing to prevent the rise of white supremacist groups or sexist attacks against female public figures.

Even though Twitter has these vague policies, when asked to enforce them the company appears to shrug off the responsibility. Twitter is just a platform, the argument goes, and is not responsible for the content on its site.

I am not claiming that Twitter endorses white supremacy or sexism, but any platform must have rules of engagement. Free speech is a constitutional right, yet online anonymity shields malicious people from any consequences for what they choose to say and the harm they cause.

Online hate groups can attack and threaten anyone; they can target minorities and taunt them with comments about rape, deportation, or genocide, flagrantly breaking Twitter’s code of conduct while Twitter remains silent. A platform needs a referee, or it simply becomes an online mob.

If Facebook’s battle is fake news, then Twitter’s battle is how to remain a platform for free speech while still maintaining civility. Twitter has begun to take measures toward this, but they are coming slowly. Recently, several white nationalists, including Richard Spencer, Jason Kessler, and Tim Gionet, lost their account verification, though the accounts themselves were not deleted. And while major figureheads are beginning to be dealt with, the real trouble of anonymous hate accounts has met little opposition from Twitter.

Any method of screening will have its problems, and some malicious individuals or racist comments will inevitably slip through no matter what, but it is better to attempt to fix the problem than to ignore it. A few rogue individuals can be dealt with by hand; the larger issue is that there are entire communities of racist groups on Twitter that could be detected algorithmically if the company put in the time and effort.
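To make that claim concrete, here is a minimal, hypothetical sketch of what a first pass at such detection could look like: flag accounts whose tweets repeatedly match a lexicon of slurs or threats, then group flagged accounts that mention one another. The lexicon, thresholds, and data shapes here are assumptions for illustration only; this is not Twitter’s actual data model, API, or moderation pipeline.

```python
# Hypothetical sketch only: `tweets`, `interactions`, SLUR_LEXICON, and the
# thresholds are illustrative placeholders, not Twitter's real systems.
from collections import defaultdict

SLUR_LEXICON = {"exampleslur1", "examplethreat1"}  # placeholder terms
FLAG_THRESHOLD = 3  # flag an account after this many matching tweets


def flag_accounts(tweets):
    """tweets: iterable of (author, text) pairs; returns authors who repeatedly
    post text matching the lexicon."""
    hits = defaultdict(int)
    for author, text in tweets:
        if set(text.lower().split()) & SLUR_LEXICON:
            hits[author] += 1
    return {author for author, count in hits.items() if count >= FLAG_THRESHOLD}


def cluster_flagged(flagged, interactions):
    """interactions: iterable of (author, mentioned_author) pairs.
    Groups flagged accounts that mention each other (a crude proxy for a
    coordinated community) using union-find."""
    parent = {a: a for a in flagged}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for a, b in interactions:
        if a in parent and b in parent:
            parent[find(a)] = find(b)

    clusters = defaultdict(set)
    for a in flagged:
        clusters[find(a)].add(a)
    # only keep groups of more than one account
    return [members for members in clusters.values() if len(members) > 1]
```

Even a crude filter like this would surface clusters of accounts for human review; the point is not that moderation is trivial, only that the first pass is cheap to automate for a company of Twitter’s resources.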

Against that backdrop, Twitter’s recent updates look like deliberate ignorance: while users demanded a safer space for discussion, tighter policy enforcement, and an end to verifying racist accounts, Twitter decided the proper response was to grant white supremacists double the characters and longer usernames.
