OpenAI used Kenyan workers to filter child sex abuse, bestiality material from ChatGPT for less than $2/hour

In a disturbing investigative report, TIME Magazine revealed that OpenAI, the artificial intelligence company behind the popular AI chatbot ChatGPT, outsourced the moderation of sexual abuse, bestiality, and other explicit content to workers in Kenya, who took home less than $2 per hour while working in subpar conditions.

ChatGPT, which garnered over a million users within the first week of its launch in late November 2022, is a particularly powerful program in that it can "generate text on almost any topic or theme," according to TIME. However, some of the company's business practices are controversial.

According to four anonymous Kenyan employees of Sama, the outsourcing company OpenAI contracted with, the work was "mentally scarring." Because the bot's underlying model was prone to spewing "violent, sexist and racist remarks," these employees were tasked with the dirty work: reading such material and labeling it so the AI could be trained to screen it out.

Some of the disturbing material included was described as "an explicit story about Batman's sidekick, Robin, being raped in a villain's lair," and a "graphic description of a man having sex with a dog in the presence of a young child."

"That was torture," one employee said. "You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture."

Depending on performance and seniority, "the data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour," wrote TIME's Billy Perrigo.

A bottom-tier labeler working a 45-hour week would take home 7,368.57 Kenyan shillings, or about $59.40 USD ($1.32 per hour over 45 hours), the magazine reported. At that rate, their yearly income would be approximately $2,851 USD, above the national average of $1,979 USD, according to the Kenya National Bureau of Statistics.

Still, the workers were displeased, given the nature of the job and the lack of mental health support.

"Although they were entitled to attend sessions with 'wellness' counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management," Perrigo wrote.

A separate OpenAI project in February 2022 involved even more disturbing content requests and ultimately led to the termination of the contracts between Sama and the AI developer. This time, the outsourcing company hired by the ChatGPT developers was asked to collect a "sample batch" of illegal material.

"Some of those images were categorized as 'C4'—OpenAI's internal label denoting child sexual abuse—according to the document," reported TIME. "Also included in the batch were 'C3' images (including bestiality, rape, and sexual slavery,) and 'V3' images depicting graphic detail of death, violence or serious physical injury, according to the billing document."

According to the report, OpenAI paid Sama a total of $787.50 for collecting the images.

Though it denied that employees were restricted from accessing mental health support, Sama canceled all of its work with OpenAI that month, reportedly eight months before the contract was scheduled to end.

"The East Africa team raised concerns to our executives right away. Sama immediately ended the image classification pilot and gave notice that we would cancel all remaining [projects] with OpenAI," a Sama spokesperson said. "The individuals working with the client did not vet the request through the proper channels. After a review of the situation, individuals were terminated and new sales vetting policies and guardrails were put in place."

OpenAI shrugged off responsibility for the project, saying it could neither confirm nor deny whether it had obtained images in the child sexual abuse category.

In a statement to the publication, the company confirmed that it had received 1,400 images from Sama that "included, but were not limited to, C4, C3, C2, V3, V2, and V1 images."

Those Kenyan workers were also paid around $2/hour.

In a later statement, the company said: "We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We never intended for any content in the C4 category to be collected. This content is not needed as an input to our pretraining filters and we instruct our employees to actively avoid it. As soon as Sama told us they had attempted to collect content in this category, we clarified that there had been a miscommunication and that we didn't want that content. And after realizing that there had been a miscommunication, we did not open or view the content in question — so we cannot confirm if it contained images in the C4 category."