Grieving man, 33, uses AI chatbot to bring girlfriend ‘back from the dead’

A man used an AI chatbot to bring his fiancée ‘back from the dead’ eight years after she passed away – as the software’s own creators warned about its dangerous potential to spread disinformation by imitating human speech.

Freelance writer Joshua Barbeau, 33, from Bradford in Canada, lost Jessica Pereira in 2012 when she succumbed to a rare liver disease.

Still grieving, Barbeau last year came across a website called Project December and, after paying $5 for an account, fed information into its service to create a new bot named ‘Jessica Courtney Pereira’, which he then started communicating with.

All Barbeau had to do was input Pereira’s old Facebook and text messages and provide some background information for the software to mimic her messages with stunning accuracy, the San Francisco Chronicle reported. 

Some of the example conversations that Barbeau had with the bot he helped create

The story has drawn comparisons to Black Mirror, the British TV series where characters use a new service to stay in touch with their deceased loved ones.

Project December is powered by GPT-3, an AI model designed by OpenAI, a research group co-founded by Elon Musk.

The software works by consuming vast amounts of human-created text, such as Reddit threads, to allow it to imitate human writing ranging from academic texts to love letters.

Experts have warned the technology could be dangerous, with OpenAI admitting when it released GPT-3’s predecessor GPT-2 that it could be used in ‘malicious ways’, including to produce abusive content on social media, ‘generate misleading news articles’ and ‘impersonate others online’.

The company issued GPT-2 as a staggered release, and is restricting access to the newer version to ‘give people time’ to understand the ‘societal implications’ of the technology.

There is already concern about the potential of AI to fuel misinformation, with the director of a new Anthony Bourdain documentary earlier this month admitting to using it to get the late food personality to utter things he never said on the record.

Bourdain, who killed himself in a hotel suite in France in June 2018, is the subject of the new documentary, Roadrunner: A Film About Anthony Bourdain.

It features the prolific author, chef and TV host in his own words—taken from television and radio appearances, podcasts, and audiobooks.

But, in a few instances, filmmaker Morgan Neville says he used some technological tricks to put words in Bourdain’s mouth.

As The New Yorker’s Helen Rosner reported, in the second half of the film, L.A. artist David Choe reads from an email Bourdain sent him: ‘Dude, this is a crazy thing to ask, but I’m curious…’

Then the voice reciting the email shifts—suddenly it’s Bourdain’s, declaring, ‘…and my life is sort of s**t now. You are successful, and I am successful, and I’m wondering: Are you happy?’

Rosner asked Neville, who also directed the 2018 Mr. Rogers documentary, Won’t You Be My Neighbor?, how he possibly found audio of Bourdain reading an email he sent someone else.

It turns out, he didn’t.

‘There were three quotes there I wanted his voice for that there were no recordings of,’ Neville said.

So he gave a software company dozens of hours of audio recordings of Bourdain and they developed, according to Neville, an ‘A.I. model of his voice.’

The term ‘deepfake’ – a portmanteau of ‘deep learning’ and ‘fake’ – emerged in 2017, but the underlying technique traces back to generative adversarial networks, pioneered in 2014 by Ian Goodfellow, now director of machine learning at Apple’s Special Projects Group.

It’s a video, audio or photo that appears authentic but is really the result of artificial-intelligence manipulation.

A system studies input of a target from multiple angles – photographs, videos and sound clips – and develops an algorithm to mimic their behavior, movements and speech patterns.

Rosner was able to detect only the one scene where the deepfake audio was used, but Neville admits there were more.

A doctored video of Speaker Nancy Pelosi, slowed down to make her appear to slur her words, helped spur Facebook’s decision to ban manufactured clips in January 2020, ahead of the presidential election later that year.

In a blog post, Facebook said it would remove misleading manipulated media edited in ways that ‘aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.’

It’s not clear if the Bourdain lines, which he wrote but never uttered, would be banned from the platform.

After deepfake videos of Tom Cruise went viral on TikTok earlier this year, Rachel Tobac, CEO of online security company SocialProof, tweeted that we had reached a stage of almost ‘undetectable deepfakes.’

‘Deepfakes will impact public trust, provide cover & plausible deniability for criminals/abusers caught on video or audio, and will be (and are) used to manipulate, humiliate, & hurt people,’ Tobac wrote.

‘If you’re building manipulated/synthetic media detection technology, get it moving.’

Read more at DailyMail.co.uk