ROSS CLARK: The exams fiasco is proof that we can’t trust algorithms to rule our lives

You could be forgiven if, until this week, you’d never heard the word ‘algorithm’.

As some wag suggested, anyone who can both spell the word and come up with a concise definition deserves an A grade in mathematics for that alone.

But now we all have a good idea of what algorithms are: computer programs which process data by subjecting it to a pre-set series of calculations. What is more, we know their shortcomings.
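
For the uninitiated, an algorithm can be pictured in a few lines of code. Here is a toy example, with rules and numbers invented purely for illustration, of the kind of fixed recipe an insurer’s computer might apply to data:

```python
# An algorithm in miniature: a fixed recipe of steps applied to data.
# The rules and figures below are invented purely for illustration.

def premium(age, accidents):
    price = 300                      # start from a flat base price
    price += max(0, 25 - age) * 20   # younger drivers pay a surcharge
    price += accidents * 150         # each past accident adds a penalty
    return price

print(premium(age=21, accidents=1))  # 530
```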

The Government’s decision to abandon the algorithm devised by exams regulator Ofqual and instead allow pupils to keep the A-level grades predicted by their teachers has exposed everything that is wrong with our growing reliance on computers making decisions for us.

Because it isn’t just A-levels. Our lives are now governed by algorithms. When we shop online, it is an algorithm that studies our browsing habits and determines the goods we are offered, the prices we pay and the advertisements foisted on us.

When we apply for a mortgage, it’s unlikely to be a flesh-and-blood manager who judges whether or not we can be trusted to repay the loan — it will almost certainly be a computer, analysing our past financial behaviour and current circumstances.

If you’re after insurance, an algorithm will calculate your premium based on risk assessment data.

Yes, it’s all very clever, until it goes wrong or unforeseen circumstances intervene.

Algorithms didn’t stop U.S. banks advancing cash — sub-prime loans — to high-risk individuals unlikely ever to pay it back. That triggered the 2008/09 global financial crisis.

But for most of us, it is the coronavirus pandemic rather than the 2008 crash that is acquainting us with the curse of the algorithm.

After all, it was an algorithm — and a pretty faulty one at that — which put the nation into lockdown in March, the horrendous consequences of which we are seeing play out.

At its best, an algorithm can save an enormous amount of time, by enabling a computer to process a vast amount of data which might take a human being hundreds of years. It is easy, then, to think of an algorithm as cleverer than the human brain. It is certainly faster, but ultimately it is only as good as the assumptions on which it is based and on the data fed into it.

Take those algorithms which steer us towards certain goods when we shop online. A computer has no real idea why we are looking for something.

I might Google ‘A-level results’ because I am researching this article. It doesn’t mean I’m an eager student looking for a university place, so bombarding me with advertisements for courses is a waste of time. But the algorithm doesn’t know that.

That’s a trivial example. The A-levels fiasco is much more serious. Ofqual designed its algorithm to solve a genuine problem. Many teachers, ambitious for students who haven’t been able to sit exams because of coronavirus, have sought to exaggerate the grades they might have achieved.

But the assumptions that the regulator used were plain daft. The algorithm sought to realign students’ predicted grades with the history of grades at their school.

You don’t have to be a mathematical genius to have realised that this would discriminate against bright pupils in otherwise low-achieving schools.
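
To see why, consider a minimal sketch of grade realignment. It is not Ofqual’s actual formula, and the function and figures are invented for illustration, but the principle of forcing results to fit a school’s past distribution is the same:

```python
# A toy version of 'realignment' (not Ofqual's actual algorithm):
# students are ranked, then grades are reassigned so the school's
# results match its historical distribution.

def realign(ranked_predictions, historical_distribution):
    # Flatten the school's past results into a best-to-worst list of slots.
    slots = [g for g in ("A", "B", "C", "D", "E", "U")
             for _ in range(historical_distribution.get(g, 0))]
    # The i-th ranked student simply receives the i-th historical slot.
    return slots[:len(ranked_predictions)]

predicted = ["A", "C", "C", "D", "D"]          # teacher predictions, best first
history   = {"A": 0, "B": 1, "C": 2, "D": 2}   # school hasn't produced an A in years

print(realign(predicted, history))  # ['B', 'C', 'C', 'D', 'D']
# The star pupil loses the A purely because the school never awarded one.
```

In this sketch, however able the top pupil, the ceiling is whatever the school has achieved before.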

Why did no one at Ofqual spot that? Is it because many people have become so used to computers delivering answers that they have stopped thinking and questioning outcomes for themselves?

That loss of initiative, of logic and of intellectual engagement has never been more potently illustrated than in Matt Lucas and David Walliams’ Little Britain sketches, in which a bored office worker mindlessly repeats ‘Computer says no’ to every inquiry from customers.

If it was just receptionists who were bamboozled by technology, that would be one thing. It is quite another when our leaders allow computers to make decisions which affect the lives of millions.

That is exactly what has happened with Covid-19.

The defining moment in the Government’s response to the epidemic came on March 16 when Boris Johnson presented a paper published that day by Professor Neil Ferguson of Imperial College.

It claimed that Britain would suffer 250,000 deaths from the disease if ministers continued with their policy of keeping the economy open. Within a week, we were in total lockdown.

As we are learning daily, the damage to the global and domestic economy, to our health and relationships, and to our children’s futures is incalculable. Worse is likely to come in the months ahead. 

Did it ever occur to the Prime Minister, to Health Secretary Matt Hancock or to their scientific advisers that the Imperial model might be wrong, the algorithm flawed? That trying to balance the country’s economic needs with its medical needs might be more sensible?

If it did, there is little sign of it. Professor Ferguson’s modelling was treated as if it were scientific gospel. In fact, the model proved unsatisfactory in many ways, and plenty of experts warned about it. Scientists who ran the program found that it gave different results each time. Others criticised it as a model originally developed to analyse influenza, not coronavirus.
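
The non-determinism complaint is easy to picture. In the toy simulation below, which is invented for illustration and is not Ferguson’s code, the same model gives a different answer on every run unless its random numbers are pinned down with a seed:

```python
import random

# A toy stochastic epidemic (not Ferguson's model): each day's growth
# is drawn at random, so unseeded runs disagree with one another.

def toy_epidemic(seed=None, days=30, infected=100.0, r=1.1):
    rng = random.Random(seed)
    for _ in range(days):
        infected *= rng.uniform(r - 0.2, r + 0.2)  # noisy daily growth
    return round(infected)

print(toy_epidemic(), toy_epidemic())                # differs every run
print(toy_epidemic(seed=42), toy_epidemic(seed=42))  # identical: reproducible
```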

Certainly many of the assumptions on which it was built are questionable. The program assumed that 0.9 per cent of those infected with the Covid-19 virus would die of the disease. Imperial College has since revised its estimate down to 0.66 per cent.
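
That difference matters enormously, because projected deaths scale directly with the fatality rate. A back-of-envelope sketch makes the point; the attack rate below is a hypothetical, chosen only so that the 0.9 per cent assumption lands near 250,000, and this is not Imperial’s actual calculation:

```python
# Back-of-envelope sensitivity check (not Imperial's model): projected
# deaths = population x share infected x infection fatality rate (IFR).

uk_population = 66_000_000
attack_rate = 0.42  # hypothetical share infected, chosen only so the
                    # 0.9% IFR reproduces a figure near 250,000 deaths

for ifr in (0.009, 0.0066):  # original assumption vs revised estimate
    deaths = uk_population * attack_rate * ifr
    print(f"IFR {ifr:.2%}: ~{deaths:,.0f} deaths")

# IFR 0.90%: ~249,480 deaths
# IFR 0.66%: ~182,952 deaths
```

Shave a quarter off one assumption and tens of thousands of projected deaths vanish with it.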

Another assumption was that because this strain of the coronavirus was new, everyone was susceptible to it.

Yet real-world evidence soon established that some people may have a natural resistance gained through exposure to other coronaviruses such as those which cause the common cold. On the Diamond Princess cruise ship, for example, only 17 per cent of passengers and crew caught the virus despite it being in circulation undetected for a fortnight.
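
Pre-existing immunity changes the arithmetic dramatically. A standard textbook ‘final size’ calculation, run here with illustrative assumptions rather than Imperial’s, shows how a partly immune population caps an epidemic well below the everyone-is-susceptible projection:

```python
import math

# Textbook epidemic 'final size' relation (illustrative, not Imperial's
# model): iterate z = s0 * (1 - exp(-R0 * z)), where z is the fraction
# of the population eventually infected and s0 the susceptible share.

def final_attack_rate(r0=2.5, initially_immune=0.0, iterations=1000):
    s0 = 1.0 - initially_immune
    z = 0.5  # any starting guess in (0, 1) will do
    for _ in range(iterations):
        z = s0 * (1.0 - math.exp(-r0 * z))
    return z

print(f"{final_attack_rate(initially_immune=0.0):.0%}")  # ~89% infected
print(f"{final_attack_rate(initially_immune=0.5):.0%}")  # ~19% infected
```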

Most damning of all, perhaps, is the evidence from Sweden. In April, scientists at Uppsala University ran a version of Ferguson’s program which predicted that unless the government there introduced a lockdown, 90,000 Swedes would die by the end of May.

No lockdown followed. Today the death toll stands at 5,790. It’s not as if Ferguson didn’t have form in this respect, either. Surely our top scientists should have known that?

In 2002, he claimed that vCJD, the human form of mad cow disease, could kill between 50 and 50,000 Britons: a range so broad as to be meaningless, but one that caused panic all the same. So far, 178 people have died from vCJD.

Three years later, Ferguson was at it again, foretelling 200 million deaths globally from H5N1 avian flu. According to the World Health Organisation, it has killed just 455.

The grim truth is that throughout the Covid-19 crisis, ministers have hidden behind the modelling, calling it ‘the science’ and asserting it is beyond reproach.

The reality is different and we will be living with the consequences for years.

As for the future, one can only hope that ministers and advisers might want to rely on a machine far more impressive than any algorithm: the human brain.
