X Takes Action to Protect Taylor Swift After AI-Generated Explicit Images Circulate
Social media platform X has taken decisive action to block searches for Taylor Swift following the circulation of explicit AI-generated images of the singer on the site.
In a statement to the BBC, X’s head of business operations, Joe Benarroch, described the move as a “temporary action” to prioritise user safety.
Users searching for Swift on the platform receive a message: “Something went wrong. Try reloading.”
The graphic fake images of the singer emerged earlier this week and quickly went viral, amassing millions of views and provoking alarm among US officials and Swift's fanbase alike.
Fans swiftly took action, flagging posts and accounts sharing the fake images and flooding the platform with authentic photos and videos of Swift, accompanied by the hashtag "#ProtectTaylorSwift."
In response to the controversy, X, formerly known as Twitter, issued a statement on Friday reiterating its stance against posting non-consensual nudity, stating that such content is “strictly prohibited.”
The statement emphasised, “We have a zero-tolerance policy towards such content,” adding that X’s teams are actively removing identified images and taking appropriate action against the responsible accounts.
While it remains unclear when X implemented the block on searches for Swift or if similar measures have been taken for other public figures or terms in the past, Mr Benarroch emphasised in his email to the BBC that the action was taken “with an abundance of caution” to prioritise user safety.
The issue has garnered attention at the highest levels of government, with the White House weighing in on Friday. White House press secretary Karine Jean-Pierre described the spread of AI-generated photos as "alarming" and stressed the need for legislative measures to address the misuse of AI technology on social media platforms.
"We believe they [social media companies] have an important role in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people," Ms Jean-Pierre said.
In the United States, politicians have called for new legislation to criminalise the creation of deepfake images. Deepfakes, which use artificial intelligence to manipulate footage of real people, have risen sharply in number since 2019, as highlighted in a 2023 study.
While no federal laws explicitly address the creation or sharing of deepfake images, some states have taken steps to tackle the issue. In the UK, sharing non-consensual deepfake intimate images was made illegal under the Online Safety Act, passed in 2023.