Taking a bold stand that is making waves across the entertainment, tech, and legal worlds, Janhvi Kapoor has spoken out against the misuse of AI-generated images. At the trailer launch of her latest film, Sunny Sanskari Ki Tulsi Kumari, she described how many fake photos of her circulate on social media platforms, images she never posted and that often distort reality. Her message was stark and clear: identity, consent, and authenticity need protecting, all the more so in the age of AI.

What Janhvi Said & When
At the trailer launch of Sunny Sanskari Ki Tulsi Kumari, Janhvi admitted, "When I log onto social media, I see so many AI images being put up against my wishes." She pointed out that while she can tell when a photo is AI-generated, a layperson might believe it is real, and that gap, she suggested, is where the harm lies. She also described herself as "old-school," believing in respecting human work and originality, and worried about how quickly AI is advancing and being used in contexts that blur the line between the real and the fake.
Variants of Misuse: AI Trends & Viral Images
The issue is not hypothetical. Janhvi's concerns sit against broader trends: AI-edited looks such as the "Nano Banana" trend, surreal edits, and photoshopped or stylised images are all over Instagram, X, and other platforms. There have also been reports of spoofed or morphed images showing her in outfits or situations she was never in, causing confusion among the public. She also noted that news websites publish such images without proper verification, which spreads the misinformation further.
Legal, Ethical & Emotional Implications
Janhvi's public comments extend beyond image abuse to larger questions of personality rights, consent, and the legal protection of one's image or likeness. She spoke of the need to regulate the technology. Her co-star Varun Dhawan shared her view: technology has its positives, but it also brings the potential for abuse. Laws and regulations to protect artists, she felt, need to be part of the picture.
At an emotional level, Janhvi's statement captures the hurt celebrities feel when fake content spreads online: images or graphics that deceive, misrepresent, or intrude on their privacy. There is a reputational hazard as well. Even when the subject knows a photo is fake, others may believe it is real, and that shapes opinion.
The Incident with Varun Dhawan: Timing & Backlash
During the same interaction, Varun Dhawan interrupted Janhvi with a joke, "There is no AI na in this movie, Shashank Khaitan?", while she was discussing the misuse of AI. What may have been intended as a lighthearted change of subject struck many as dismissive. Netizens criticised the interruption as disrespectful, given that Janhvi was highlighting a serious issue. The moment shows how sensitive the subject of AI misuse is, and how much timing and respect matter when it comes up.
Responses & Support: What Others Are Saying
Reactions to Janhvi's concerns were mixed but largely positive. Many on Twitter praised her for speaking up, affirming how much authenticity and artistry are valued. Others saw the moment as an opening for more concrete AI policies, especially in entertainment and media. Some pointed to recent court actions, such as Aishwarya Rai Bachchan's case in the Delhi High Court against the misuse of her image, as a sign that the debate is not merely about celebrity discomfort but is setting precedent.
There is also growing discussion of concrete measures: authentication labels for AI-edited content, stricter content policies on social media platforms, and clearer personality-rights laws. Janhvi and Varun are among those calling for such changes.
The Technology & How It Works: AI, Deepfakes & Identity
Understanding the technology helps explain why this matters. Current AI systems can generate or alter images by training on large datasets, often using generative adversarial networks (GANs) and similar generative models, to produce natural-looking fakes: deepfakes, morphs, AI paintings, or stylised edits. Some are harmless or artistic, but others alter a person's appearance or imply behaviour and contexts they never consented to. The harm, as Janhvi noted, comes when audiences take such images as real.
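For readers curious about what "a GAN" actually means, here is a minimal, purely illustrative sketch in Python (assuming PyTorch is installed). The network sizes, the random stand-in "real" images, and the single training step are all hypothetical simplifications for explanation; this is not the pipeline behind any specific viral edit.

```python
# Toy GAN sketch: a generator learns to produce images that a discriminator
# cannot distinguish from real ones. Everything here is deliberately tiny.
import torch
import torch.nn as nn

LATENT_DIM = 64        # size of the random noise vector the generator starts from
IMG_PIXELS = 28 * 28   # toy image size (flattened 28x28 grayscale)

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMG_PIXELS),
    nn.Tanh(),         # pixel values in [-1, 1]
)

# Discriminator: tries to tell real images from generated ones.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1), # raw score; the loss applies the sigmoid
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

batch = 16
real_images = torch.rand(batch, IMG_PIXELS) * 2 - 1   # stand-in for real photos

# One discriminator step: score real images high, generated ones low.
noise = torch.randn(batch, LATENT_DIM)
fake_images = generator(noise)
d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
          loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# One generator step: adjust the generator so its fakes fool the discriminator.
g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

Real systems repeat this tug-of-war over millions of photographs with far larger networks, which is why the resulting images can look convincingly real, and why viewers, as Janhvi pointed out, can struggle to tell them apart from genuine photos.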
What amplifies the risk is how quickly content spreads: platforms with little verification, sharing without credit, and the near impossibility of taking content down once it goes viral. Janhvi's questions point to these gaps. What recourse is there against misuse? What laws protect her and others like her?