Twitter’s new rules: ‘OK Boomer’ not OK but Trump-bashing may continue

Twitter is expanding its never-ceasing, ever-worsening terms of service that do nothing to protect its users from actual harm.

Twitter is adding several new clauses to its hateful conduct rules, the latest expansion of terms of service that do nothing to protect users from actual harm.

In addition to its existing anti-misgendering rules, which ban users for accidentally referring to Jessica Yaniv as a “he,” the social media platform will now also prohibit “language that dehumanizes on the basis of age, disability or disease.”

This means that the remark “OK boomer” may result in a swift suspension, as may jokes about former Vice President and current Democratic presidential front-runner Joe Biden, who may or may not be showing signs of early-onset dementia.

While the rules can be interpreted broadly depending on who’s doing the reporting (see: Yaniv), some blue checkmarks on Twitter have expressed approval of the new policy. Writing on Twitter, NBC reporter Ben Collins implied that the only people upset by the changes will be gamers.

“A lot of people whose entire identities are tied to being disproportionately mean on Twitter to anyone who makes them feel deeply insecure while eating Doritos at 3 in the morning are furious at this,” he wrote.

The new policy also raises the question of whether users who respond to jokes by disabled comedians, such as the legally blind Johnny Walsh, Ricky Berwick, and Donovan Castillo (aka RealYungCripp), could be banned if they are reported for making jokes that could be interpreted as dehumanizing.

Twitter says it is instituting the new policy because “research shows that dehumanizing language increases [the risks of offline harm].” The company provided examples of tweets that could get users suspended, including:

“All [Age Group] are leeches and don’t deserve any support from us.”

“People with [Disease] are rats that contaminate everyone around them.”

“People with [Disability] are subhuman and shouldn’t be seen in public.”

“[Religious Group] should be punished. We are not doing enough to get rid of those filthy animals.”

The company says that, based on feedback, it will narrow down what counts as an “identifiable group,” as users argued that they should be “allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language.” According to the company, many people wanted to “call out hate groups in any way, any time, without fear.”

Twitter says it plans to protect those who have conversations within marginalized groups, including people who use “reclaimed terminology,” and that it plans to account for “power dynamics that come into play across different groups.”

In other words, don’t expect anyone to be banned for calling for the eradication of white Trump-supporting conservatives, but do expect to be suspended for speaking out against illegal immigration.