Elon Musk AI creates news generator that’s ‘too dangerous’ to release!

The project, backed by the Tesla billionaire, has created a model that can generate fake news but won’t publish its research for fear of potential misuse

  • OpenAI is a research group founded by Elon Musk and US entrepreneur Sam Altman 
  • It has created a model that can generate stories from little more than headlines
  • It does so using AI language models that can already translate, read and write
  • The new model won’t be released yet due to ‘danger’ of misuse and fake news

Elon Musk’s AI research group announced in a paper on Thursday that it has developed a model that can generate realistic news stories from little more than a headline. 

The group, known as OpenAI, is however holding back the details of the advance, which it says could be misused in the wrong hands, in much the same way as nuclear energy. 

Scientists suggested that the technology will advance rapidly in the coming years and will eventually be publicly released in a safe and updated form. 

 


OpenAI is a group founded by Musk and backed by Silicon Valley heavyweights such as LinkedIn’s Reid Hoffman. 

It aims to develop increasingly powerful artificial intelligence tools in a safer way.

The paper showed that the new model is able to ‘write news articles about scientists discovering talking unicorns’.

The model has been developed using technology that already allows computers to write short news reports using press releases.   

So-called language models that let computers read and write are usually ‘trained’ for specific tasks such as translating languages, answering questions or summarising text. 

Researchers have, however, found that such models can read and write longer passages more easily than previously thought, and with little human intervention.   

These general-purpose language models can write longer blocks of text by drawing on material openly available on the internet. 
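The core idea described above can be sketched in miniature. OpenAI’s model is a large neural network trained on internet text, but the basic principle of a language model, predicting each next word from the words before it, can be illustrated with a toy bigram (word-pair) model; the code below is purely illustrative and is not OpenAI’s method:

```python
import random

def train_bigram_model(text):
    """Record, for each word, which words follow it in the training text."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # no known continuation: stop early
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny made-up training corpus echoing the paper's unicorn example
corpus = "scientists discover talking unicorns and scientists study talking unicorns"
model = train_bigram_model(corpus)
print(generate(model, "scientists", length=6))
```

A real system like the one in the paper replaces the word-pair counts with a neural network conditioned on far more context, which is what lets it continue a whole headline rather than a single word.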

It will take a few years, however, until the model can be used reliably, and the process will require costly cloud computing, although such costs could come down rapidly.

Sam Bowman, an assistant professor at New York University who has reviewed the research, said: ‘We’re within a couple of years of this being something that an enthusiastic hobbyist could do at home reasonably easily.’  

‘It’s already something that a well-funded hobbyist with an advanced degree could put together with a lot of work.’

CEO Elon Musk (above) has been under intense pressure to deliver on his promise of stabilizing production for the company’s Model 3. Tesla has cut headcount to trim the price of the car

While OpenAI is describing its work in the paper, it is not releasing the model itself out of concern it could be misused.

The researchers also warned of the negative consequences of the technology in the wrong hands and want fellow AI scientists to help address this. 

This was compared to how nuclear physicists and geneticists have to ensure their work isn’t easily misused before making it public. 

Alec Radford, one of the paper’s co-authors, said: ‘It seems like there is a likely scenario where there would be steady progress.’

‘We should be having the discussion around, if this does continue to improve, what are the things we should consider?’ 

Dario Amodei, OpenAI’s research director, said: ‘We’re not at a stage yet where we’re saying this is a danger. We’re trying to make people aware of these issues and start a conversation.’   

HALF OF CURRENT JOBS WILL BE LOST TO AI WITHIN 15 YEARS 


Half of current jobs will be taken over by AI within 15 years, one of China’s leading AI experts has warned.

Kai-Fu Lee, the author of the bestselling book AI Superpowers: China, Silicon Valley, and the New World Order, told Dailymail.com the world of employment was facing a crisis ‘akin to that faced by farmers during the industrial revolution.’

‘People aren’t really fully aware of the effect AI will have on their jobs,’ he said.

Lee, a venture capitalist in China who once headed up Google in the region, has over 30 years of experience in AI.

He is set to reiterate his views in a Scott Pelley report about AI on the next edition of 60 Minutes, Sunday, Jan. 13 at 7 p.m., ET/PT on CBS. 

He believes it is imperative to ‘warn people there is displacement coming, and to tell them how they can start retraining.’

Luckily, he said, all is not lost for humanity.

‘AI is powerful and adaptable, but it can’t do everything that humans do.’ 

Lee believes AI cannot create, conceptualize or do complex strategic planning, or undertake complex work that requires precise hand-eye coordination.

He also says it is poor at dealing with unknown and unstructured spaces.

Crucially, he says AI cannot interact with humans ‘exactly like humans’, with empathy, human-human connection, and compassion.


Read more at DailyMail.co.uk