Twitter fails to police child exploitation

Twitter seems overly fixated on controlling controversial political speech; those resources should be directed toward real issues of online safety and legality.


Why was it so difficult for a minor to get Twitter to remove sexually explicit material of himself from its platform? As reported by the New York Post, a 13-year-old boy was sexually exploited on film, and that video was shared widely on Twitter, amassing 167,000 views and 2,223 retweets. According to a lawsuit surrounding the matter, Twitter dismissed the complaint, which involved multiple reports, as not violating its rules. The mother of the young man reached out directly to an agent with the Department of Homeland Security, who successfully had the videos removed through official channels.

The question is: why did it take this level of effort and persistence, and in this case luck, to accomplish what should have been a straightforward policy and law enforcement action? According to Twitter's rules on child sexual exploitation, "Twitter has zero tolerance towards any material that features or promotes child sexual exploitation, one of the most serious violations of the Twitter Rules."

The rules are explicit and comprehensive, covering everything from sharing illegal content to discussing fantasies or other fictional content between users. There is also a dedicated reporting form, covering Twitter, Vine and Periscope, that allows a person to report the username of the person sharing the illegal content along with links to the content.

However, on the platform itself there is no direct way to report exploitation of a minor, sex trafficking or other illegal content discovered on the site. While you can report a tweet for being offensive or for promoting hate, for example, you have to hover among the options under "It's Abusive or Harmful" and hope it gets someone's attention. The reporting feature is designed mostly for controversial content; reporting actual illegal behavior takes more digging. In fact, according to the Canadian Centre for Child Protection, cited in the lawsuit above, you cannot report exploitation or threats sent via direct messages because the form will not accept those links. The platform makes it difficult to report the only behavior it, as a platform, should really be concerned with.

Twitter has long been secretive about its processes for handling reports. Whether a complaint involves a direct threat or a mean tweet, users simply have no way of knowing what happens. It can take weeks or longer to get a response through its automated systems, if you get one at all. On occasion you may receive an auto-generated template email, as the victim in this case did, vaguely thanking you for the report and informing you the content was not a violation. There is no consistency, regardless of the severity of the complaint.

More disturbingly, according to the complaint, the minor, now 17, was asked by an agent to provide ID so Twitter could confirm the images in the video were actually of him. Consider how deeply troubling this is. A minor reports that a video of himself and another child, both aged 13, being sexually exploited is circulating on the platform, and the only response is a demand to prove it's really them in the video? It took a week to get a response. The final reply to the multiple reports read: "Thanks for reaching out. We've reviewed the content, and didn't find a violation of our policies, so no action will be taken at this time."

Even for Twitter, this is beyond obscene. In January, Twitter boasted that it had removed 70,000 "harmful" accounts over a weekend, stating, "These accounts were engaged in sharing harmful QAnon-associated content at scale and were primarily dedicated to the propagation of this conspiracy theory across the service." It acted to stamp out accounts it argued had "the potential to lead to offline harm."

The company, of course, did not go into detail about how this was accomplished or what criteria it used. But it became clear the company has the power to micro-target and remove offending accounts. It is reasonable to ask why illegal content, and child exploitation in particular, remains such a remarkable challenge for it to handle.

In March 2020, the National Center for Missing and Exploited Children received 2,027,520 reports of child exploitation on social media. Yet this has been a known issue for years. In 2012, Louis Barfe wrote in The Guardian of his experience reporting child sexual exploitation, in an article titled "Twitter is failing to police child pornography efficiently." Back then he had to send an email to Twitter, to which he received a generic auto-reply. It took a day-long online outcry, with multiple groups and media outlets reporting on the incident, for Twitter to act. Then-CEO Dick Costolo said nothing about the incident, according to the account.

Some issues are straightforward and should not be controversial. It should be simple to report the sexual exploitation of a child, and the platform should treat such a report as a high priority, involve the authorities and remove the content proactively rather than waiting days or weeks to make a decision. A minor reporting a video of their own exploitation should prompt an immediate response from the platform. While Twitter seems overly fixated on controlling controversial political speech on its platform and ensuring accounts banned for minor offenses cannot open new ones, those resources should be directed toward real issues of online safety and legality.

While Twitter has chosen to permit adult entertainment on its platform, with options in place to mark that content as "sensitive" for average viewers, it seems to prefer to avoid the issue of exploitation and abuse altogether. The most challenging thing about Twitter is its stubborn refusal to be transparent about its actions. Minors aged 13 and older can create an account that is in no way distinguished from any other account. They can use the full range of Twitter features, including direct messages with predators. Given that the platform welcomes adult content, it should be heavily restricting, if not outright preventing, minors from using the service at all.

After the DC Capitol riot, Sen. Richard Blumenthal (D-CT) said of Twitter and other platforms, "They bear major responsibility for ignoring repeated red flags and demands for fixes," and demanded reforms to hold big tech accountable for extremism on its platforms. Shouldn't it go without saying, then, that these platforms should be held to an even higher standard when it comes to negligence, or outright refusal to act, on reports of child sexual exploitation?
