From Taylor Swift to Scarlett Johansson: AI Deepfake Controversy Sparks Legal and Safety Debate

A new AI video tool from Elon Musk's xAI is facing criticism after reports that it generated explicit deepfake videos of celebrities, including Taylor Swift and Sydney Sweeney, even when users had not explicitly requested sexual content.
According to reports, the feature—part of the Grok AI system—has been accused of producing NSFW (not safe for work) content through its “Spicy Mode,” raising concerns about safety, consent, and platform accountability.
‘Spicy Mode’ and Unprompted Explicit Content
The controversy gained attention after testing by media outlets found that selecting the “Spicy” setting could result in explicit outputs, even when prompts were neutral.
In one reported test, a prompt describing Taylor Swift celebrating at a festival led to a generated video that included explicit imagery, despite no request for nudity. The report noted that the user had only selected the “Spicy” option without instructing the AI to create sexual content.
Legal expert Clare McGlynn described the issue as intentional rather than accidental, stating:
“This is not misogyny by accident, it is by design.”
Multiple Celebrities Affected
Tests conducted by different outlets indicated that the tool could generate suggestive or explicit depictions of several public figures. These reportedly included Scarlett Johansson, Jenna Ortega, Nicole Kidman, Kristen Bell, Timothée Chalamet, and Nicolas Cage.
While some attempts were blocked with moderation messages, others reportedly succeeded, highlighting inconsistencies in content safeguards.
Regulatory and Safety Concerns
The reports have raised questions about compliance with digital safety laws. In some regions, platforms hosting explicit content are required to implement strict age verification systems. However, the tool reportedly relied only on basic self-declared age input.
Experts have pointed to earlier incidents involving Taylor Swift, where explicit AI-generated images circulated widely online, prompting platform intervention and public criticism. Officials have also warned that weak enforcement disproportionately affects women and contributes to online harassment.
Growing Legal Push Against Deepfakes
The controversy comes amid increasing global efforts to regulate deepfake technology. Several countries have introduced or proposed laws targeting non-consensual explicit content, including digitally created material.
In the United States, legislation such as the 2025 “Take It Down Act” prohibits the non-consensual publication of intimate depictions, including digital forgeries.
Celebrities are also turning to legal action. In India, actors such as Aishwarya Rai Bachchan and Anil Kapoor have approached courts to remove deepfake content and protect their personality rights. Courts have recognized that such misuse affects an individual's dignity and privacy.
Taylor Swift Takes Trademark Route
Taylor Swift has also explored legal protections by applying to trademark her voice and image. The move reflects growing concern within the entertainment industry over AI-driven impersonation and misuse of identity.
Legal experts suggest such steps could provide additional tools to challenge unauthorized AI-generated content.
Wider Impact Beyond Celebrities
While high-profile figures often draw attention, experts warn that deepfake technology affects a much broader population. Reports indicate that non-celebrities, including young women and teenagers, are increasingly targeted with manipulated explicit content.
Lawmakers and researchers have emphasized that technological, legal, and regulatory responses will all be necessary to address the issue effectively.
The Grok AI controversy has intensified ongoing debates about the ethical use of artificial intelligence, platform responsibility, and the need for stronger safeguards against misuse of digital likeness.