Aespa’s Karina was targeted by a sexualized AI deepfake on January 14, 2026, and the clip reportedly pulled in around 2.3 million views before it was removed. This video was edited to make her outfit appear far more revealing, turning an ordinary clip into a case of non-consensual digital sexual harassment.
The video spread quickly across social media, and fans reacted with immediate disgust. Many called it ‘repulsive’ and ‘illegal,’ while others reposted the original footage to show exactly how it had been edited.
Fans criticized the video as degrading and criminal, with many urging SM Entertainment to move quickly and take firm legal action.
SM Entertainment Is Pursuing Legal Action
In an April 2026 legal update, SM Entertainment said it had been collecting evidence of sexual harassment and deepfake material targeting its artists. The agency also confirmed that authorities had identified and sentenced 12 people tied to deepfake-related crimes, with several receiving prison terms.
SM further warned that creating, distributing, or possessing explicit manipulated content can carry criminal punishment.
Karina’s Case Is Being Compared To Chaewon’s
Online, many immediately drew comparisons to the recent LE SSERAFIM Chaewon deepfake controversy, where sexualized AI-manipulated images also circulated widely. The similarities between the two cases have fueled anger over how often people target female idols through AI-generated content.