Why Taylor Swift AI-Generated Deepfake Images Raise Wider Worries

The world is awash in deepfakes — video, audio or images that make people appear to do or say things they didn’t, or be somewhere they weren’t. Many are devised to give credibility to falsehoods and damage the reputations of politicians and other people in the public eye. But most deepfakes are explicit videos and pictures concocted by mapping the face of a celebrity onto the body of someone else. That’s what happened in late January, when fake explicit images of pop star Taylor Swift cascaded across social media. Now that artificial intelligence allows almost anyone to conjure up lifelike images and sound with a few taps on a keyboard, it’s getting harder to tell if what you see and hear online is real.

The phony images of Swift were widely shared on social media sites, drawing the ire of her legions of fans. One image shared on X, the site formerly known as Twitter, was viewed 47 million times before the account was suspended, the New York Times reported. X said it was working to remove “all identified images” and would take “appropriate action” against those who posted them. Swift was also among the celebrities whose voices and images were manipulated into appearing to endorse commercial products — a popular brand of cookware, in Swift’s case.


