This article is real — but AI-generated deepfakes look damn close and are scamming people

How amazing is it that Canadian celebrities like TV chef Mary Berg, crooner Michael Bublé, comedian Rick Mercer and hockey megastar Sidney Crosby are finally revealing their secrets to financial success? That is, until the Bank of Canada tried to stop them.

None of that is true, of course, but it was the figurative bag of magic beans apparent scammers on social media tried to sell to people, enticing users to click on sensational posts — Berg under arrest, Bublé being dragged away — and leading them to what looks like, at first glance, a legitimate news story on CTV News’s website. 

If you’re further intrigued by what appears to be an AI-generated article, you’ll have ample opportunity to click on the many links — about 225 on a single page — that direct you to sign up and hand over your first investment of $350, which will purportedly increase more than 10-fold in just seven days.

These were just the latest in a batch of deepfake ads, articles and videos exploiting the names, images, footage and even voices of prominent Canadians to promote investment or cryptocurrency schemes. 

Lawyers with expertise in deepfake and AI-generated content warn they currently have little legal recourse, and that Canadian laws haven’t advanced nearly as rapidly as the technology itself. 

Financial scams and schemes appropriating the likenesses of famous people are nothing new, but the use of rapidly advancing generative AI technology puts “a new twist on a pretty old concept,” said lawyer Molly Reynolds, a partner at Torys LLP in Toronto. 

And it’s going to get worse before it gets better. Developing the tools and laws to prevent it from happening is a game of catch-up that we’re already losing, she said.

LISTEN | What you should know about deepfake ads on social media:

Information Morning – NS | 7:01 | Implications of deep-fake scam ads

You’ve likely seen them if you spend any time online. Ads that show a CBC host, or a personality such as Elon Musk, shilling for some sort of get-rich-quick scheme. They fall into the category of “deep fakes”, or AI-generated video. Our tech columnist Nur Zincir-Heywood looks at this.

Detecting deepfakes

While there is a lot of content on the internet that has obvious signs of being AI-generated, University of Ottawa computer science professor WonSook Lee said some of it is so good now that it’s getting much harder to discern what’s real.

She said even a couple of years ago she could immediately detect an AI-generated image or deepfake video of a person just by glancing at it and noticing differences in pixelation or composition. But some programs can now create near-perfect photos and videos. 

What isn’t perfectly generated can be further altered with photo and video editing software, she added. 

While we are learning about AI, it’s getting smarter, too. 

“If we find a method to detect deepfakes, we are helping the deepfakes to improve,” she said.

WATCH | The National’s Ian Hanomansing addresses deepfakes of himself:

Anyone can be deep-faked in a scam ad. Even Ian Hanomansing

Scammers are turning to deep fakes of trusted public figures to take your money through bogus online ads. The National’s Ian Hanomansing is among them. He found out what the law says and what social media companies are doing about it. 

Star power

It seems X has curtailed the swarm of Canadian celebrity scam ads, to some extent, and suspended some — but not all — of the accounts sharing them. CBC News attempted to contact a spokesperson for X Corp., the social media platform’s parent company, but only received an automated response. 

X and other social media and website-hosting companies may have policies aimed at preventing spam and financial scams on their platforms. But Reynolds said they face a “question of moral obligations versus legal obligations.” 

That’s because there aren’t many legal obligations spurring platforms to remove fraudulent materials, she said. 

“There are individuals who are deeply impacted with no legal recourse, with no help from the technological companies and maybe without a big, you know, social network to rely on the way that Taylor Swift has,” Reynolds said.

After all, prominent Canadians don’t wield nearly as much influence as Taylor Swift. If they did, perhaps the story would play out differently.

The rapid spread of sexualized AI-generated images of the pop music superstar last month prompted social media companies to take near-immediate action. Even the White House weighed in.

X promptly removed the images and blocked searches for Swift’s name. Within days, U.S. lawmakers tabled a bill to combat such deepfake pornography.

WATCH | Role of social media companies in stopping spread of sexualized deepfakes:

White House ‘alarmed’ by AI-generated explicit images of Taylor Swift on social media

U.S. White House spokesperson Karine Jean-Pierre responded to a question from a reporter about fake, explicit images of Taylor Swift generated by artificial intelligence being spread on social media, saying social media companies have a clear role in enforcing policies to prevent that kind of material from being distributed across their platforms.

But Reynolds said it’s not just situations involving non-consensual, sexualized imagery that can cause harm — especially when it comes to people whose names and faces are their brands.

CBC News requested interviews with Berg and Mercer to ask if either had taken any action in response to the fake ads appropriating their likenesses. Mercer declined to be interviewed for this story. Berg’s publicist forwarded the request to CTV parent company Bell Media, which turned it down. 

LISTEN | What impact will the Taylor Swift deepfakes have on AI laws: 

13:52 | Will the Taylor Swift AI deepfakes finally make governments take action?

Last week, AI-generated explicit images of Taylor Swift’s likeness were shared on X, previously known as Twitter, without her consent. These photos racked up millions of views before being taken down. Reporters Sam Cole and Melissa Heikkilä — who have been tracking the rise of deepfakes for years — talk about why this story has hit a nerve with Hollywood and Washington.

New legal landscape

Whether someone is famous is irrelevant if their image is being used in a way they haven’t consented to, said Pablo Tseng, a Vancouver-based lawyer specializing in intellectual property at McMillan LLP. 

“You are in control of how you should be presented,” Tseng said. “The law will still see this as a wrong that’s been committed against you. Of course, the question is: Do you think it’s worth your while to pursue this in court?”

Canada hasn’t followed the U.S.’s lead on new legislation for deepfakes, but there are some existing torts — laws mostly established by judges in an effort to provide damages to people who have suffered wrongdoing — that could possibly be applied in a lawsuit involving AI-generated deepfakes, according to Tseng.

The tort of misappropriation of personality, he said, could apply because it’s often a case of someone’s image being digitally manipulated or grafted onto another image. 

The tort of false light, which pertains to misrepresenting a person publicly, is a more recent option based on U.S. law; it was first recognized in Canada by a Superior Court in 2019. So far, it has been recognized in only two provinces (British Columbia is the other). 

WATCH | The consequences of faking photos of high-profile people: 

When AI fakery fooled us | About That

Andrew Chang breaks down the consequences of faking high-profile photos after a couple of recent AI images went viral: the Pope in a puffer coat and former U.S. president Donald Trump getting arrested.

Playing the long game

Anyone who wants to pursue any sort of legal action over the production and distribution of deepfakes will have to be in it for the long haul, said Reynolds. Any case would take time to work through the court system — and it’s likely to be expensive.

The fight can, however, pay off.

Reynolds pointed to a recent class-action lawsuit against Meta over “Sponsored Stories” advertisements on Facebook, between 2011 and 2014, that generated endorsements using names and profile photos of users to promote products without their consent. 

Meta proposed a $51-million settlement to users in Canada last month. Lawyers estimate 4.3 million people who had their real name or photo used in a sponsored story could qualify.

“It’s not a particularly quick avenue for individuals, but it can be more cost effective when you have a class action,” said Reynolds. 

But the pursuit of justice or damages also requires knowing who bears responsibility for these deepfake scams. The University of Ottawa’s Lee said identifying those responsible is already challenging, and it will become nearly impossible as generative AI technology advances further. 

Much of the research that has been published on artificial intelligence includes source code that is openly accessible, she explained, meaning anyone with the know-how can create their own program without any sort of traceable markers.

WATCH | What happens when deepfakes are used to interfere in elections:

Can you spot the deepfake? How AI is threatening elections

AI-generated fake videos are being used for scams and internet gags, but what happens when they’re created to interfere in elections? CBC’s Catharine Tunney breaks down how the technology can be weaponized and looks at whether Canada is ready for a deepfake election.
