Twitter has a child porn problem and no plans to fix it

"While the amount of [Child Sexual Exploitation] online has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not."

Christina Buttons, Nashville, TN

Twitter has a child sexual exploitation content problem that executives are apparently well informed about, but the company is doing little to fix it. A bombshell report from The Verge dropped on Tuesday, revealing the social media platform's secret child porn problem.

The Verge obtained 58 pages of internal documents and interviewed current and former staffers who say that Twitter’s executives know about the problem, but the company has repeatedly failed to act.

Last year, Twitter executives thought they could cash in by monetizing the adult content already allowed on the platform. They created a task force, dubbed the "Red Team," for a new project named ACM: Adult Content Monetization. Over the past two years, Twitter seriously explored an OnlyFans-like service for its users. The problem the Red Team discovered, which ultimately derailed the project, was the company's inability to police the abundance of child pornography on Twitter.

"Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale," the Red Team concluded in April 2022, according to documents obtained by the Verge. The team found that the company lacked tools to verify that creators and consumers of adult content were of legal age. The team also predicted that launching ACM would worsen the existing child sexual exploitation material problem, because creators would be able to hide their content behind a paywall.

The Verge's investigation found that Twitter executives were made aware 15 months earlier that they lacked adequate tools for detecting child sexual exploitation (CSE) and were implored to add more resources to fix it.

"While the amount of [Child Sexual Exploitation] online has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not," begins a February 2021 report from the company’s Health team, obtained by the Verge. "Teams are managing the workload using legacy tools with known broken windows. In short (and outlined at length below), [content moderators] are keeping the ship afloat with limited-to-no-support from Health."

"Employees we spoke to reiterated that despite executives knowing about the company’s [Child Sexual Exploitation] problems, Twitter has not committed sufficient resources to detect, remove, and prevent harmful content from the platform," reported the Verge.

The technology Twitter uses to identify and remove CSE is "by far one of the most fragile, inefficient, and under-supported tools we have on offer," one engineer quoted in the 2021 report said.

The technology Twitter currently deploys recognizes only "known" CSE images from a database and cannot detect new instances of CSE in tweets or live video, the report found. "These gaps also put Twitter at legal and reputation risk," the group wrote in its report.
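For context on why a database-driven tool misses new material: such a system can, by construction, only flag images it has already catalogued. The sketch below is illustrative only, not Twitter's actual system; it assumes a simple exact-hash lookup, whereas real deployments typically use perceptual hashing (such as Microsoft's PhotoDNA) so matches survive re-encoding. All names here are hypothetical.

    import hashlib

    # Hypothetical database of hashes of previously identified images.
    # A real database holds millions of entries; this placeholder is illustrative.
    KNOWN_HASHES = {
        "9f2b4c0e...",  # placeholder entry, not a real hash
    }

    def is_known_image(image_bytes: bytes) -> bool:
        """Return True only if this exact image was seen and catalogued before."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in KNOWN_HASHES

    if __name__ == "__main__":
        # The limitation the report points to: newly created material has no
        # database entry, so hash lookup alone can never flag it.
        print(is_known_image(b"brand-new image bytes"))  # -> False

The point of the sketch is the structural gap the report describes: detection depends entirely on prior cataloguing, which is why new images and live video fall outside the tool's reach.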

Earlier this year, The National Center for Missing & Exploited Children filed an amicus brief in a lawsuit against Twitter over the company's failure to remove videos containing "obvious" and "graphic" child sexual abuse material.

"The children informed the company that they were minors, that they had been 'baited, harassed, and threatened' into making the videos, that they were victims of 'sex abuse' under investigation by law enforcement," reads the amicus brief submitted to the ninth circuit in John Doe #1 et al. v. Twitter. Twitter left the videos up, "allowing them to be viewed by hundreds of thousands of the platform’s users."

Little progress has been made on the group’s recommendations for Twitter to reduce the amount of child sexual abuse material on its platform. "Today we cannot proactively identify violative content and have inconsistent adult content [policies] and enforcement," the team wrote. "We have weak security capabilities to keep the products secure." They added that non-consensual nudes "can ruin lives when posted and monetized."

After Elon Musk sought to buy Twitter but backed out, claiming the company was lying about the number of bots on the platform, Twitter announced on August 23 that the Health team would be tasked with identifying spam accounts.
