
X blocks searches for Taylor Swift after explicit AI fakes circulate

"This is a temporary action and done with an abundance of caution as we prioritize safety on this issue."

Jarryd Jaeger Vancouver, BC

X, formerly known as Twitter, has taken steps to protect Taylor Swift after explicit, artificial intelligence-generated images of the singer went viral on the platform.

Searches for her full name no longer yield any results, though X has said the block is temporary while it works on a permanent solution to the problem.



Beginning on Sunday, users who typed "Taylor Swift" into the search bar were met with a message that read, "Something went wrong. Try reloading." Clicking the retry button did not help; results were blocked across every search category.

Even the People tab failed to return the pop star's account, which has 94.9 million followers and is one of the largest on the entire platform.

While "Taylor" or "Swift" on their own produced results, any combination of her first or last name with "AI" was blocked as well.

"This is a temporary action and done with an abundance of caution as we prioritize safety on this issue," X head of business operations Joe Benarroch said in a statement, per the Daily Mail.

The AI-generated images began circulating on the site last week, with one viewed upwards of 47 million times before it was finally removed.



"Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content," the X safety team wrote in a statement. "Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed. We're committed to maintaining a safe and respectful environment for all users."

The situation has led to a call for better protections against such content, with members of Congress calling on their fellow lawmakers to take action.



"What’s happened to Taylor Swift is nothing new," Rep. Yvette Clarke wrote in a post on X. "For yrs, women have been targets of deepfakes w/o their consent. And w/ advancements in AI, creating deepfakes is easier & cheaper. This is an issue both sides of the aisle & even Swifties should be able to come together to solve."

As the BBC reports, Rep. Tom Kean Jr. agreed with his colleague, adding, "Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend."
