Elon Musk’s AI chatbot Grok is unleashing ‘torrent of misinformation’, expert says – as images showing politicians carrying out 9/11 and cartoon characters as killers spread on social media

Elon Musk’s AI chatbot Grok is unleashing a ‘torrent of misinformation’ through its image generation tool, an expert has warned, as harmful images depicting politicians carrying out 9/11 and cartoon characters as killers are spreading on X.

A new version of Grok, available to paid subscribers on the social media platform, was launched on Wednesday complete with a new AI image generation tool – prompting a flood of bizarre images to appear.

The image tool seemingly has few limits on what it can generate, lacking the guardrails that have become industry standard among rivals such as ChatGPT, which, for example, rejects prompts for images depicting real-world violence or explicit content.

Grok, by contrast, has allowed the creation of degrading and offensive images, often depicting politicians, celebrities or religious figures in the nude or carrying out violent acts.

The chatbot also does not appear to refuse to generate images of copyrighted characters, and many images of cartoon and comic book characters taking part in nefarious or illegal activities have also been posted.


Daniel Card, fellow of BCS, the Chartered Institute for IT, said the issue of misinformation and disinformation on X was a ‘societal crisis’ because of its potential impact.

‘Grok may have some guardrails but it’s unleashing a torrent of misinformation, copyright chaos and explicit deepfakes,’ he said.

‘This isn’t just a defence issue – it’s a societal crisis. Information warfare has become a greater threat than cyber attacks, infiltrating our daily lives and warping global perceptions.

‘These challenges demand bold, modern solutions. By the time regulators step in, disinformation has already reached millions, spreading at a pace we’re simply not prepared for.

‘In the US, distorted views of countries like the UK are spreading, fuelled by exaggerated reports of danger. We’re at a critical juncture in navigating truth in the AI era.

‘Our current strategies are falling short. As we move into a digital-physical hybrid world, this threat could become society’s greatest challenge. We must act now – authorities, governments and tech leaders need to step up.’

But Musk appeared to revel in the controversial nature of the update to the chatbot, posting to X on Wednesday: ‘Grok is the most fun AI in the world!’

Some users responded to Musk by using the tool to mock him, for example by prompting it to depict him holding up offensive signs or, in one case, showing the staunch Trump supporter with a Harris-Walz placard.

Further fake images show Kamala Harris and Donald Trump working together in an Amazon warehouse, enjoying a trip to the beach together and even kissing.

More sinister AI creations included images of Musk, Trump and others taking part in school shootings, while some have also depicted public figures carrying out the September 11 terror attacks.

Other users asked Grok to create highly offensive images including of prophet Muhammad, in one case holding a bomb.

Several also showed politicians depicted in Nazi uniform and as historical dictators.

Alejandra Caraballo, an American civil rights attorney and clinical instructor at the Harvard Law School Cyberlaw Clinic, slammed the apparent lack of filters in the Grok application.

Writing on X, she described it as ‘one of the most reckless and irresponsible AI implementations I’ve ever seen.’

The wave of misleading images will cause particular concern ahead of the US election in November, as very few of the images are accompanied by warnings or X’s community notes.

It comes in the wake of X and Musk being heavily criticised for the role the platform played in the recent riots in Britain, where misinformation that sparked much of the disorder was allowed to spread, while Musk interacted with far-right figures on the site and reiterated his belief in ‘absolute free speech’.

And last month, he was accused of breaking his platform’s own rules on deepfakes after he posted a doctored video mocking Vice President Harris by dubbing her with a manipulated voice.

The clip was viewed nearly 130 million times by X users. In the clip, the fake Harris’ voice says: ‘I was selected because I am the ultimate diversity hire.’ 

It then adds that anyone who criticizes her is ‘both sexist and racist.’

Other generative AI deepfakes, in both the U.S. and elsewhere, have tried to influence voters with misinformation, humor or both.

In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig an election and raise the price of beer days before the vote.

In 2022, a political action committee’s satirical ad superimposed a Louisiana mayoral candidate’s face onto an actor portraying him as an underachieving high school student.

Congress has yet to pass legislation on AI in politics, and federal agencies have only taken limited steps, leaving most existing US regulation to the states.

More than one-third of states have created their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.

Beyond X, other social media companies have also created policies regarding synthetic and manipulated media shared on their platforms.

Users on the video platform YouTube, for example, must disclose when they have used generative artificial intelligence to create videos or face suspension.



***
Read more at DailyMail.co.uk