Well, tech experts are sounding alarm bells over the rapid spread of AI-generated explicit images of women online. Concerns grew this week after sexually explicit deepfake photos of Taylor Swift went viral on social media. A slew of fake pornographic images of the singer made the rounds on X, formerly Twitter, all without her consent or knowledge, and they were seen tens of millions of times before being taken down.
Swifties drowned out the posts by flooding timelines with the phrase "Protect Taylor Swift." Most of the instigators have been suspended by X, and the site says it's taking action on any more images it finds. For more on this, we're joined now by Ben Coleman, co-founder and CEO of Reality Defender, a deepfake detection firm.
Ben, I want to ask: when you saw these, how did you know they were fake? How do you detect that?
Well, unfortunately, we've seen these for many months. I think the only reason they're becoming topical is because they went viral on X, but these have been moving around the dark web and public web for close to five months now.
And what our platform, Reality Defender, does is detect AI-generated or manipulated media, whether it's audio, video, images, or text. And this one was quite simple: we are over 80% confident that this was a diffusion-based deepfake.
And diffusion is one of the more popular generative models. Platforms like Stable Diffusion and Midjourney allow anybody with an internet connection, anyone with a Google search, to create quite dangerous media for free. Dangerous. And of course, this was brought to light because it happened to one of the biggest celebrities right now, Taylor Swift. But this could happen to anyone.
How vulnerable are people right now with this AI technology advancing so quickly?
That's what makes it incredibly scary, you know, for our parents, for our children. You know, you can only use your imagination to think about how dangerous a potential piece of media can be.
You know, with only a few seconds of your voice or a picture of your face from social media or LinkedIn, you can put a person into any kind of compromising position. And given the technology is available to anybody without having to have any expertise in technology at all, just Google it.
There are over 10,000 platforms that do this. And we're hearing reports from police agencies as well that there are students in schools being victimized by this, and that there's child pornography being made using this technology.

There are also calls for governments and lawmakers to do more to crack down on this. How difficult is that?

So, using the software is not difficult.
You know, platforms can use our software today to identify a generated image upon upload, before it goes viral. The problem is that government regulations just don't exist yet.
Things are moving way too quickly. And the requirement should be on the platforms to flag deepfakes, not on the consumer. People on our team with PhDs can't tell real from fake.
Average people don't stand a chance. So we absolutely need regulation to require the platforms to protect consumers.

And what kind of regulations would that include? Because people are vulnerable online. We're hearing about people being bullied and things like that, and these images can spread so quickly. So when you talk about regulations, what does that look like?

You know, taking a step back quickly: we think AI has a lot of great abilities, and things can be improved for everybody with AI. In this space, unfortunately, it's very dangerous. And what we're looking at in other countries (the European Commission, the UK, Taiwan, Singapore) is that they have brought in regulations that at minimum just require the flagging of AI-generated media. They're not saying it's good or bad, truthful or untruthful, but just saying that this image or this video has indicators of AI, to give viewers, the consumers, the ability not to see it, or so that perhaps it won't go viral.
But again, in the US right now, we don't have any regulations. We're hoping that in the next few months, our current legislators will push forward with some minimum regulations.
And in the next few months, of course, we could see major advances in this technology as well.
Swift's Reaction
Taylor Swift responded assertively to the unauthorized circulation of the fake images. Her stance emphasized the importance of respecting an individual's right to control their own image and its usage. The dissemination of these images had a notable impact on Taylor Swift's public image, prompting her legal team to take action against those responsible for sharing the unauthorized content.
Swift leveraged social media platforms and public statements to address the situation directly, expressing her disapproval of the release of these altered images without her consent.
Taylor Swift's AI Pictures
The circulation of AI-generated images of Taylor Swift has had a massive impact on her reputation and privacy. When unauthorized images spread widely, they can intrude on a celebrity's personal life and affect their career. The media coverage and public attention generated by the spread of such images can lead to increased scrutiny of the star's every move, potentially causing distress.

This widespread circulation also has implications for future interactions between celebrities, fans, and the media. It may necessitate more stringent measures to protect celebrities' privacy rights while they navigate public visibility. Managing the reputational damage caused by such widespread circulation becomes essential to maintaining a positive image amidst heightened scrutiny.
"The images may be fake, but their impacts are very real," Morelle said in a statement. "Deepfakes are happening every day to women everywhere in our increasingly digital world, and it's time to put a stop to them."