Google CEO calls for regulation of AI to protect against deepfakes and facial recognition

The chief executive of Google has called for international cooperation on regulating artificial intelligence technology to ensure it is ‘harnessed for good’.

Sundar Pichai said that while regulation by individual governments and existing rules such as GDPR can provide a ‘strong foundation’ for the regulation of AI, a more coordinated international effort is ‘critical’ to making global standards work. 

The CEO said that history is full of examples of how ‘technology’s virtues aren’t guaranteed’ and that with technological innovations come side effects.

These range from internal combustion engines, which allowed people to travel beyond their own areas but also caused more accidents, to the internet, which helped people connect but also made it easier for misinformation to spread.

These lessons teach us ‘we need to be clear-eyed about what could go wrong’ in the development of AI-based technologies, he said. 

He referenced nefarious uses of facial recognition and the proliferation of misinformation on the internet in the form of deepfakes as examples of the potential negative consequences of AI. 

Google CEO Sundar Pichai (pictured) has urged governments to step up how they regulate AI

‘Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,’ he said, writing in the Financial Times.

‘It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.

‘Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.’  

Pichai pointed to Google’s AI Principles, a framework by which the company evaluates its own research and application of technologies.

The list of seven principles helps Google avoid bias, test for safety and make the technology accountable to people, such as consumers.

It also vows not to design or deploy technologies that cause harm – such as autonomous weapons or surveillance that violates internationally accepted norms.

To enforce these principles, the company is testing AI decisions for fairness and conducting independent human rights assessments of new products. 

Last year Google announced a large dataset of deepfakes to help researchers create detection methods

Mr Pichai, who was also made CEO of Google’s parent company Alphabet last month, said that international alignment will be critical to ensure the safety of humanity in the face of AI.

‘We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs.

WHAT IS A DEEPFAKE? 

Deepfakes are so named because they are made using deep learning, a form of artificial intelligence, to create fake videos of a target individual.

They are made by feeding a computer an algorithm, or set of instructions, as well as lots of images and audio of the target person.

The computer program then learns how to mimic the person’s facial expressions, mannerisms, voice and inflections.

With enough video and audio of someone, a fabricated video of that person can be combined with fabricated audio to make them appear to say anything.
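To make the description above concrete, here is a minimal illustrative sketch – not Google’s or any production system’s code – of the shared-encoder, per-person-decoder set-up used in many face-swap deepfakes. It assumes the PyTorch library; the layer sizes, 64x64 image resolution and random stand-in data are all assumptions chosen purely for demonstration.

    # Toy sketch of a face-swap deepfake architecture: one shared encoder,
    # one decoder per person. All names and sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Compresses any face image into a small code capturing pose/expression."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
                nn.Flatten(), nn.Linear(64 * 16 * 16, 256),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        """Reconstructs a face from the code; one decoder is trained per person."""
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(256, 64 * 16 * 16)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
            )
        def forward(self, z):
            return self.net(self.fc(z).view(-1, 64, 16, 16))

    encoder = Encoder()
    decoder_a, decoder_b = Decoder(), Decoder()   # person A and person B
    params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
    optimiser = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.L1Loss()

    faces_a = torch.rand(8, 3, 64, 64)   # stand-ins for real aligned face crops of person A
    faces_b = torch.rand(8, 3, 64, 64)   # ...and of person B

    for step in range(100):              # each decoder learns to reconstruct its own person
        loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
             + loss_fn(decoder_b(encoder(faces_b)), faces_b)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    # The "swap": encode person A's expression, decode with person B's decoder,
    # producing an image of person B performing person A's expression.
    fake_b = decoder_b(encoder(faces_a))

In real systems the random tensors are replaced with thousands of aligned face crops, and much larger networks, longer training and audio synthesis are needed before the swapped output becomes convincing.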

‘While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone,’ wrote Pichai.

‘We offer our expertise, experience and tools as we navigate these issues together.’

Existing rules such as the General Data Protection Regulation can also serve as a strong foundation for individual governments to enforce regulation of technologies, he said.

However, Pichai’s company does not have an entirely clean record in this regard and the first step for Google will be heeding its own advice.

In the last year, French data regulator CNIL imposed a record €50 million fine on Google for breaching the GDPR.

The company also had to suspend its own facial recognition research programme after reports emerged that its workers had been taking pictures of homeless black people to build up its image database. 

The burden of responsibility ultimately lies with companies such as Google, and with how far they are willing to go to make sure AI technologies do not breach privacy laws, spread misinformation or lead to an era of independently thinking killer robots.

Google’s search engine uses AI and machine learning technologies to return search results 

Last year, a Google software engineer expressed fears about a new generation of robots that could carry out ‘atrocities and unlawful killings’.

Laura Nolan, who previously worked on the tech giant’s military drone initiative, Project Maven, called for a ban on all autonomous war drones, arguing that these machines do not have the same common sense or discernment as humans.

‘What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed,’ said Nolan, who is now a member of the International Committee for Robot Arms Control.

‘There could be large-scale accidents because these things will start to behave in unexpected ways,’ she explained to the Guardian. 

While many of today’s drones, missiles, tanks and submarines are semi-autonomous – and have been for decades – they all have human supervision.

Former Google engineer Laura Nolan expressed fears about a new generation of robots that could carry out ‘atrocities and unlawful killings’. Pictured is the MQ-9 Reaper, a war drone capable of remotely controlled or autonomous flight operations for the US Air Force

However, a new crop of weapons being developed by nations including the US, Russia and Israel, called lethal autonomous weapons systems (LAWS), can identify, target and kill a person entirely on their own, even though there are no international laws governing their use. 

Consumers, businesses and independent groups alike all fear the point where artificial intelligence becomes so sophisticated that it can outwit or be physically dangerous to humanity – whether it’s programmed to or not.  

National and global AI regulations have been piecemeal and slow to enter into law, although some advances are being made. 

Last May, 42 countries, including Organisation for Economic Co-operation and Development (OECD) members the UK, the US, Australia, Japan and Korea, adopted the first set of intergovernmental policy guidelines on AI.

The OECD Principles on Artificial Intelligence comprise five principles for the ‘responsible deployment of trustworthy AI’ and recommendations for public policy and international co-operation.

But the Principles do not have the force of law, and the UK is yet to put in place a concrete legal regime regulating the use of AI.

A report from Drone Wars UK also claims that the Ministry of Defence is funding multiple AI weapon systems, despite not developing them itself. 

As for the US, the Pentagon released a set of recommendations on the ethical use of AI by the Department of Defense last November.

However, both the UK and the US were reportedly among a group of states – also including Australia, Israel and Russia – that spoke against legal regulation of killer robots at the UN last March. 

WILL ROBOTS ONE DAY GET AWAY WITH WAR CRIMES?

If a robot unlawfully kills someone in the heat of battle, who is liable for the death?

In a 2017 report, Human Rights Watch highlighted the rather disturbing answer: no one.

The organisation says that something must be done about this lack of accountability – and it is calling for a ban on the development and use of ‘killer robots’.

Called ‘Mind the Gap: The Lack of Accountability for Killer Robots’, the report details the hurdles to accountability when robots kill without being controlled by humans.

‘No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party,’ said Bonnie Docherty, senior Arms Division researcher at Human Rights Watch and the report’s lead author. 
