Evidence that robots are acquiring racial and class prejudices

Recently, my application for insurance for a classic car I’d bought was refused. It was a first for me and when I inquired why, I was told that the insurance company was concerned that I associate with ‘high-value individuals’.

I don’t, but even if I did, how could this possibly impact my access to insurance? The broker kindly investigated on my behalf and discovered that a robot — or more accurately an ‘automated decision-making machine’ — used by the insurance company had scoured the internet and discovered that in the distant past I’d been the motoring editor of a national newspaper.

I was no wiser as to why this might suddenly have made me a liability. ‘You might give Jeremy Clarkson a lift, have an accident in which he would be injured, and it could cost millions,’ my broker explained. It was an utterly bizarre line of reasoning. I’m no friend of Clarkson, I complained. To no avail.

The robot knew better, its logic deciding that my time as a motoring editor obviously connected me to Jeremy Clarkson, the multi-millionaire former presenter of Top Gear and the nation’s best-known petrolhead, who writes newspaper columns and whose name is all over the internet.

Now, not only am I designated as a risk to insure, I have ‘refused insurance’ against my name. (And I’m a ‘friend’ of Clarkson, to boot.) It’s an amusing story to dine out on, but also a sinister indication of what is happening as similar robots rapidly assume control over important aspects of our lives.

Robots are making crucial decisions when it comes to giving out loans

Take my friend who applied to NatWest, where he had banked for 30 years, for a small loan.

The financial adviser was apologetic: my friend would have to pay a huge rate of interest, because he had been categorised as ‘high risk’.

But why? The bank knew he always kept a healthy balance in his account and had been mortgage-free for years.

That’s the trouble, said the adviser: because my friend didn’t have a recent credit history, the decision-making robot had concluded that the only possible reason was that he was too dodgy to be trusted.

My friend said he’d try elsewhere. No point, said the adviser. Everyone uses decision-making robots now. They will all say the same thing.

And he was right. Human bank employees are no longer allowed to use common sense to overrule the robots.

Not only are they being used to decide our access to insurance, loans and whether or not we are a decent credit risk, but they may also be influencing our employment prospects, medical treatment and tax affairs.

Even more ominously, as I shall explain, such robots may soon be ruling on whether police patrol your area, and if you are a potential criminal.

So what exactly are these robots and how do they work?

Automated decision-making engines are super-sophisticated computer programs that run on algorithms — vastly complex sets of mathematical rules — in ways that humans cannot control, let alone understand.

Once they have been programmed for a particular use — in banks, employment agencies, insurance firms, hospitals or marketing companies, for example — the robots become self-teaching, using swathes of accumulated data to make their own rulings on the lives of individuals.
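
To see the pattern in miniature, here is a minimal sketch in Python using scikit-learn, a freely available machine-learning library. It is not how any particular bank’s system is built; the feature names and every number are invented for illustration. The program is shown a table of past customers and the decisions made about them, works out its own rules, then rules on an applicant it has never seen.

```python
# A toy 'decision-making robot': learn from past loan decisions, then rule on a new applicant.
# All feature names and figures are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Historical data the robot 'learns' from: [age, years at the bank, recent credit accounts]
past_customers = [
    [25, 1, 0],
    [40, 15, 3],
    [33, 8, 2],
    [52, 30, 0],   # long-standing customer, but no recent credit history
    [29, 3, 1],
    [61, 25, 4],
]
past_decisions = [0, 1, 1, 0, 0, 1]   # 1 = low risk (approved), 0 = high risk (refused)

model = LogisticRegression().fit(past_customers, past_decisions)

# A new applicant: 55 years old, 30 years at the bank, no recent credit accounts.
new_applicant = [[55, 30, 0]]
print(model.predict(new_applicant))        # the robot's ruling: 0 (high risk) or 1 (low risk)
print(model.predict_proba(new_applicant))  # and how strongly it believes it
```

Notice that nobody writes a rule saying ‘no recent credit history means high risk’. The program infers whatever patterns happen to sit in the historical data, sensible or not, and applies them to the next person through the door.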

Even experts in the field admit they don’t know how these ‘deep-learning’ robots reach their conclusions.

But there is already evidence to suggest that the robots can rapidly become ‘bigoted’ or ‘irrational’, making decisions that may blight people for decades.

In May, new laws come into force across Europe to try to rein back the march of the machines by making them — or rather the firms that use them — more accountable: we will be able to ask how and why robots have reached particular decisions about our lives.

But legal and scientific experts warn that this legislation will prove ineffective and inadequate — which is why I find it even more worrying that British police forces are also starting to rely on this sort of technology.

Last May, Durham Police started using a robotic system called the Harm Assessment Risk Tool (Hart) that decides whether people taken into custody are at high, medium or low risk of re-offending.

Hart’s use is currently restricted to ‘advising’ the custody officers’ decision-making for a programme called Checkpoint which puts offenders on behaviour-improvement courses, rather than subjecting them to criminal charges.

It gives poor people harsher punishments

Only those labelled as being ‘medium risk’ for re-offending are put on Checkpoint courses as there is good evidence that individuals in this category have the best chance of being educated out of their criminal tendencies.

The Hart algorithm bases its prediction on 34 pieces of information about the offender, including their age and gender, their postcode, their age of first offence, the type of offence and their criminal history.
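
Hart is reported to be built on a ‘random forest’, a model assembled from many decision trees. The sketch below, in Python with scikit-learn, uses the same family of model to sort offenders into low, medium or high risk; the handful of features and every figure are invented stand-ins for the real 34 variables and training records.

```python
# Toy sketch of a Hart-style risk categoriser (invented data and features).
from sklearn.ensemble import RandomForestClassifier

# Features: [age, age at first offence, number of prior offences, postcode area code]
training_offenders = [
    [19, 16, 4, 3],
    [45, 40, 1, 1],
    [23, 18, 2, 3],
    [37, 22, 6, 2],
    [52, 50, 0, 1],
    [28, 17, 3, 2],
]
training_labels = [2, 0, 1, 2, 0, 1]   # 0 = low, 1 = medium, 2 = high risk of re-offending

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(training_offenders, training_labels)

# Only an offender scored 'medium' would be considered for a Checkpoint course.
new_offender = [[26, 18, 2, 3]]
risk = int(model.predict(new_offender)[0])
print({0: 'low', 1: 'medium', 2: 'high'}[risk])
```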

But even with this apparently benign system, alarm bells are ringing among British and American lawyers about the risk of the algorithms becoming biased on the basis of race or class.

Critics claim that robotic racial bias has already been seen in the U.S. state of Wisconsin, where an automated decision-making robot called COMPAS is used to warn judges if it thinks parole-seekers are at high risk of re-offending and should be kept in jail.

An expert analysis of Wisconsin defendants who did not subsequently re-offend discovered that black people were twice as likely as whites to have been flagged by the robot as ‘high risk’, making them far more likely to be kept in jail without good reason.

This is because there are more black offenders than white in the parole population — which civil liberties campaigners argue is due to historical racism by humans rather than by computers — but the robot can nevertheless perpetuate and worsen this bias.

The robot ‘learns’ from the accumulated data that black people are more likely to offend and then uses that information to keep on selecting black people for incarceration in a jail system that — for multiple reasons such as exposure to other prisoners — makes them more likely to re-offend when released.

Thus the machine creates a spiral of self-fulfilling prediction. Critics say the American system in effect takes a defendant’s race into account through such correlated data. In the British trial in Durham, the robot does not know the offender’s ethnicity, which is treated as a ‘protected attribute’.

But Marion Oswald, senior fellow in law at the University of Winchester, who has studied the Durham system, warns that it carries a real danger of bias because it takes into account an offender’s postcode.

The robot can ‘learn’ to use this as an alternative marker for race or poverty.

The risk here is that the robot keeps identifying as ‘high risk’ black or white people from poor areas, and directing them into harsher punishments.

Humanoid robot Sophia was created by Hanson Robotics in collaboration with AI developers, including Google’s parent company Alphabet Inc

This information — that black/poor people get jailed more — is used to reinforce this bias in what experts call a ‘runaway feedback loop’.
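
How that leakage works can be shown with a few lines of entirely synthetic data. In the sketch below, postcode area 1 stands for a poorer, more heavily policed district, so its residents carry more recorded arrests and more ‘high risk’ labels from the past. Ethnicity and income appear nowhere in the data, yet two people who look identical to the model apart from their postcode come out with different risk scores.

```python
# Sketch: a 'protected attribute' leaking back in through a proxy such as postcode.
# Everything here is invented purely to show the mechanism.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
rows, labels = [], []
for _ in range(1000):
    postcode_area = random.choice([0, 1])
    # Area 1 has historically been policed more heavily, so more arrests sit on record.
    prior_arrests = random.randint(0, 2) + (2 if postcode_area == 1 else 0)
    rows.append([postcode_area, prior_arrests])
    labels.append(1 if prior_arrests >= 2 else 0)   # 'high risk' labels inherited from old records

model = LogisticRegression(max_iter=1000).fit(rows, labels)

# Two people identical in everything the model sees, except their postcode:
for area, name in [(0, 'affluent postcode'), (1, 'poorer postcode')]:
    p = model.predict_proba([[area, 1]])[0][1]
    print(f"{name}: predicted probability of 'high risk' = {p:.2f}")
```

The model has never been told anyone’s race or income; the postcode carries that information in through the back door.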

Ms Oswald says that predictive robots may be useful if they prove capable of providing more consistent judgments than individual police officers, but they must be scrutinised for emerging biases.

‘I would like to see proper regulatory oversight for experimental projects like this,’ she told the Mail. ‘All of this information should be made public, so that we can see what the results are.’

Detective Inspector Andy Crowe, Checkpoint co-ordinator at Durham Constabulary, says that the use of the Hart system is experimental, and a full assessment will be made in October next year, at the earliest.

Experts warn that a similar ‘runaway feedback loop’ can occur with another American crime-prediction robot called PredPol (Predictive Policing Software), now being used by Kent Police.

The algorithm uses previous crime reports to predict where ‘hotspots’ will occur in towns, and directs officers to patrol them. Kent Police say that since they began rolling out the system in 2013, low-level street crime such as antisocial behaviour has dropped by 7 per cent, though the force acknowledges that this may be down to numerous factors, such as improvements in local employment levels.

A spokeswoman for Kent Police says the robot uses data on burglary, criminal damage, antisocial behaviour and theft to identify areas at higher risk of crime, which are designated ‘PredPol boxes’.
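
PredPol’s own mathematics is far more elaborate, but stripped to its essence the idea can be sketched in a few lines of Python: tally recent reports in each cell of a map grid and send patrols to the busiest cells. The coordinates below are invented.

```python
# Toy hotspot 'boxes': count recent crime reports per map cell, patrol the busiest cells.
from collections import Counter

# Each report is tagged with the grid cell (x, y) it fell inside (invented coordinates).
recent_reports = [(2, 3), (2, 3), (5, 1), (2, 3), (0, 0), (5, 1), (4, 4), (2, 3)]

counts = Counter(recent_reports)
boxes = [cell for cell, n in counts.most_common(2)]   # the two busiest cells become 'boxes'
print(boxes)   # officers are directed to patrol these cells on their next shift
```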

‘It is not possible to pin down any reductions in crime or antisocial behaviour to PredPol alone as there are lots of other factors to consider,’ Kent Police say.

‘However, we have anecdotal evidence of the scheme’s success. For example, an officer in Gravesend, who was travelling to speak to someone whose bike had been stolen, discovered the bicycle while patrolling a PredPol box en route.

‘On another occasion a Medway officer patrolled a PredPol box and saw a man acting suspiciously with a computer under his arm. It was established he had just committed a burglary.’ It is hardly convincing evidence for PredPol’s supposed benefits.

More illuminating is the fact that police in the cities of Richmond and Burbank in California have stopped using the software.

In Richmond, the force says that crime in some areas increased by more than 10 per cent after it started using PredPol, and researchers at the University of Utah have published a mathematical analysis which may explain why.

When the robot keeps sending police to the same area, officers are bound to pick up more crime. The robot then uses this to reinforce its belief that crime in that place is rife. On the other hand, says the University of Utah report: ‘If police don’t see crime in a neighbourhood because the robot tells them not to go there, this can cause the opposite feedback loop.’
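
The researchers’ point can be reproduced with a toy simulation. Assume, purely for illustration, that two neighbourhoods have exactly the same underlying crime rate, that the software sends each day’s patrols to whichever area has the most recorded crime, and that crime is only recorded where officers are present. A real product allocates patrols less crudely, but the feedback is the same in kind.

```python
# Toy 'runaway feedback loop': identical neighbourhoods, but records drive patrols
# and patrols drive records. All numbers are invented.
import random

random.seed(1)
true_rate = 0.3                # the chance a patrol witnesses an offence, the same in both areas
recorded = {'A': 5, 'B': 4}    # a tiny, essentially meaningless head start for area A

for day in range(100):
    target = max(recorded, key=recorded.get)   # patrol wherever most crime is on record
    discovered = sum(random.random() < true_rate for _ in range(10))   # ten patrols, all sent there
    recorded[target] += discovered             # every discovery feeds straight back into the records
    # The other neighbourhood gets no patrols, so none of its (equally common) crime is recorded.

print(recorded)   # A's recorded crime soars while B's never changes, despite identical behaviour
```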

That’s bad news for residents in both neighbourhoods.

People in the neighbourhood classified as ‘bad’ by the robot are more likely to be over-policed and flagged ever more strongly as living in a high-crime area (which in turn may be picked up by home-insurance algorithms and raise premiums).

Meanwhile, people living in ‘low crime’ neighbourhoods may see steadily fewer police patrols, which leaves minor crime and antisocial behaviour to thrive unchallenged.

The system also demeans police officers’ hard-earned skills. Burbank police stopped relying on PredPol after a survey found that 75 per cent of officers had ‘low or extremely low’ morale.

One officer said: ‘It’s like a robot telling a fisherman with 20 years’ experience that we’re going to tell you how to fish.’

In the UK, similar ‘deep-learning’ robots are at work at HM Revenue & Customs (formerly the Inland Revenue) in search of tax fraud. A £100 million computer with algorithmic decision-making abilities co-ordinates information from multiple Government and commercial databases, including an individual’s credit card transactions and internet usage.

It then estimates probable earnings and compares this with their tax declarations to identify whether they may be hiding income.
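
The comparison itself is not mysterious, even if the model behind the estimate is. Below is a heavily simplified sketch in Python; the formula, the 25 per cent threshold and every figure are invented stand-ins for whatever the real system computes.

```python
# Toy income check: estimate probable earnings from spending signals, compare the estimate
# with the declared figure and flag large gaps. The formula and numbers are invented.

def estimate_earnings(card_spend, online_sales, property_count):
    # A crude stand-in for the real model, which draws on many more data sources.
    return card_spend * 1.4 + online_sales + property_count * 8000

def flag_for_review(declared_income, **signals):
    estimated = estimate_earnings(**signals)
    # Flag if the estimate exceeds the declaration by more than 25 per cent.
    return estimated > declared_income * 1.25, estimated

flagged, estimate = flag_for_review(
    declared_income=22000,
    card_spend=30000, online_sales=6000, property_count=1,
)
print(estimate, flagged)   # an estimated 56,000 against a declared 22,000, so the case is flagged
```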

Machine-learning algorithms are used by global recruitment companies such as Predictive Hire to identify and analyse potential job candidates based on information found on the internet, such as their work-related discussions on online forums with fellow professionals.

Decisions bear no relation to common sense 

Now numerous employers such as Tesco and Amazon are using robots to track workers’ productivity, while motor insurers have an army of robots to predict your risk as a driver.

And when it comes to insurance, while robots can determine the likelihood of you making a claim based on the information that you provide, they can also come up with statistical quirks that cost customers money or deny them insurance — as I discovered with my ‘Clarkson’ experience.

Other examples include drivers whose parked cars have been crashed into — through no fault of their own — and who face a rise in premiums as a result.

The Association of British Insurers explains that if your car is involved in an accident that wasn’t your fault, then, however absurd or unfair it seems, there is a greater statistical likelihood of your being involved in a future accident that is your fault.
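
What that claim amounts to is a comparison of rates in the claims history, along the lines of the sketch below. The figures are invented purely to illustrate the logic; they are not the ABI’s numbers.

```python
# Invented claims history, purely to illustrate what 'a greater statistical likelihood' means.
history = {
    'previously hit while parked (not at fault)': {'drivers': 1000, 'later_at_fault': 90},
    'no previous claim':                          {'drivers': 1000, 'later_at_fault': 60},
}

for group, figures in history.items():
    rate = figures['later_at_fault'] / figures['drivers']
    print(f"{group}: {rate:.0%} went on to make an at-fault claim")

# If the first rate is consistently higher in the real data, the pricing robot raises
# premiums for that group, even though the original claim was not the driver's fault.
```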

In NHS hospitals, deep-learning machines are being trialled to assess scans and so target radiotherapy treatment in patients with cancers of the head and neck.

However, some medical and legal experts fear that their wider use could lead to robots being used in ‘life and death’ decision-making in the future in a health service where treatments are rationed because of budget restraints.

With the proliferation of robots swapping ill-judged and often highly inaccurate information about our lives, how worried should we be?

Well, the data privacy watchdog, the Information Commissioner’s Office, believes there is genuine cause for concern and issued this warning last year: ‘The fuel powering the algorithms is personal data.

‘The increasing variety of sources from which personal data can be obtained, observed, inferred and derived will only lead to the use of algorithms in decision-making in more and more contexts.’ It added: ‘There is a risk that algorithmic decisions may be intrusive and unjustified where they have discriminatory effects on people.’

Dr Brent Mittelstadt, a researcher in data ethics at the Oxford Internet Institute, has been studying the rise of super-powered robot decision makers (or ‘neural nets’ as he calls them).

He warns that once they start swapping inaccurate and prejudicial information about you, it will be extremely hard to erase.

‘One neural net might talk to another neural net, and the story will get out and be passed between different players and networks,’ he says.

‘Even if the processes behind this were transparent, it would be very difficult for you to follow all that.’

The processes involved are, however, anything but transparent. It’s called the ‘black box’ problem.

The robots are self-teaching. The more data they acquire, the more they teach themselves.

But their methods of learning are beyond human comprehension. We can’t look inside the ‘black box’. Nor can we as individuals even know when the robots are at work. ‘I am not even sure how much neural nets are being used at the moment,’ admits Dr Mittelstadt.

‘We are not always aware that it is going on.

‘Companies can say they are not required to reveal this because it is “proprietary information”.’

In other words, it’s a commercial secret. But we know that the robots are beavering away.

‘If my news feed in Facebook or other apps is being personalised, then that definitely is the product of algorithmic decision-making,’ warns Dr Mittelstadt.

So the robots even have control of the ‘facts’ you are fed each day.

In an attempt to exert some control, the European Union is introducing the General Data Protection Regulation.

This aims to give us all the right to know when and how robots have made decisions about us.

Dr Mittelstadt is not optimistic. ‘The legislation lacks well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless,’ he says.

Indeed, how can you demand an explanation from a robot that thinks in a way no one can comprehend?

If you question a robot’s judgment, you may receive only a general explanation of how it works, rather than why it made a specific decision about your life, says Dr Mittelstadt.

As for knowing where that information has gone, or what other robots may be using it — well, good luck with that! 


