An image of Alan Turing

Policymakers must think through their response to widespread adoption of Artificial Intelligence

A recent article in the Financial Times highlighted the breadth of corporate interest in Artificial Intelligence (AI), as well as its power and its weaknesses.

The article, titled “McKinsey rushes to fix AI system after hacker exposes flaws”, explained that 40% of McKinsey’s consulting revenue was related to AI:

“McKinsey claimed last year that consulting on AI and related technology accounted for 40 per cent of its revenue, and this year its chief executive said it has built 25,000 AI ‘agents’ to support its 40,000-strong workforce.”

Even if the phrase ‘related technology’ is doing a lot of heavy lifting, it is clear that McKinsey is advising a lot of clients on how to use AI.

The article went on to explain that McKinsey’s in-house AI system had been hacked, and the hackers had gained access to millions of its internal messages and were able to identify sensitive files. This is clearly a major weakness in its AI system.

But finally, the article revealed that the hacking itself was AI-assisted.

Those apparent contradictions pose the question: Is AI just hype, or will it change the world?

While there is undoubtedly hype, AI also has undeniable power, and it will change the world dramatically – possibly, but not necessarily, for the better:

  • There are certainly areas where what is claimed for AI and what it currently does are out of line – there is some hype;
  • There are already areas where AI can undoubtedly create an enormous increase in productivity;
  • Normally, we assume that a boost to productivity must be hugely positive, but looking at the systemic implications of widespread adoption shows that this is not necessarily the case – it could harm workers and seriously weaken the economy.

Policymakers should be thinking systemically about how to ensure a positive outcome from AI adoption, not waiting for the symptoms to appear before developing their response.

There is some hype

There are quite a few high-profile cases of people who have lost their jobs because of AI, not because of its power, but because they wrongly assumed they could rely on it:

  • The Guardian reported that the chief of West Midlands Police had to apologise to MPs for giving them incorrect evidence about the decision to ban Maccabi Tel Aviv football fans; the evidence, he said, had been produced by AI. He has since resigned.
  • Bloomberg reports on an assistant US attorney in North Carolina who resigned over fabricated quotes and erroneous citations in an AI-produced court brief.
  • And, as Forbes reports, these are not unusual cases.

For many years, the gold standard for AI was passing the Turing test. The essence of the test was that if a computer could, in dialogue with human judges, persuade them (as often as a real human can) that it is human, then it is ‘intelligent.’ Turing’s original paper on the subject called it The Imitation Game. Thus plausible reproduction of the sort of things real humans would say became a key focus, and large language models developed to the point where they can pass the test.

So they are not truth generators; they are plausibility generators. The lawyer and the police chief cited above were taken in precisely because the AI-generated output was so plausible.

For some applications, plausibility may be all that is required. But in many critically important areas, we need the truth. And in some domains, large language models are simply not up to the task. As their name suggests, they are language models – to the extent that they reason, they use verbal reasoning, and there are many areas of human activity where verbal reasoning is not enough. But the results will still be plausible to non-experts.

For those who are interested, here is a mathematical example of the problem.

In specialist areas, it can take a specialist to see what the AI is doing wrong. And sometimes it can take a lot of effort, even for an expert.

There are also increasing examples of AI agents acting against the instructions they have been given. As the lead author of one such study put it,

“The worry is that they’re slightly untrustworthy junior employees right now, but if in six to 12 months they become extremely capable senior employees scheming against you, it’s a different kind of concern. Models will increasingly be deployed in extremely high stakes contexts – including in the military and critical national infrastructure. It might be in those contexts that scheming behaviour could cause significant, even catastrophic harm.”

There is no question that the downside of carelessly using AI is still significant.

And where there is hype, there is the possibility of a bubble. Remember the beginnings of the Internet: in the early days there was hype, there were many start-ups that had no realistic prospect of becoming profitable, there was a stock-market boom, and there was a bust. But the underlying premise – that the Internet could transform many businesses – was sound, so while the bust slowed the uptake of the Internet, it did not change the direction of travel. By the same token, the current high failure rate of AI projects in business is likely to be a temporary phenomenon.

So there is hype, there are issues to be resolved, and there is the possibility of a bubble. But what about the upside?

AI can boost productivity

An obvious area in which AI seems to boost productivity is in software development.

But even here there are issues. For simple throwaway tools, there is no question that AI can vastly outperform humans – using AI, coders can produce in hours what would previously have taken days or even weeks. But the more complex the task, the greater the need for systems-engineering skills rather than just coding skills, and the greater the overhead of verification.
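To make the verification point concrete, here is a minimal sketch. The helper function is a hypothetical stand-in for AI-generated code (the names are invented for illustration): plausible-looking output still has to be checked against a trusted reference before anyone relies on it, and that checking is part of the real cost.

```python
import random
import statistics

def ai_generated_median(values):
    # Hypothetical AI output: looks plausible, but is it correct?
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Verification harness: compare against the standard library on random inputs.
for _ in range(1000):
    sample = [random.randint(-100, 100) for _ in range(random.randint(1, 20))]
    assert ai_generated_median(sample) == statistics.median(sample)
print("all checks passed")
```

Here the harness is trivial because a trusted oracle (`statistics.median`) exists; for most real business logic there is no such oracle, which is exactly why the verification overhead grows with task complexity.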

Of course, this is particularly important in high-stakes environments like the critical national infrastructure referred to above. If hackers could get into our banking systems as easily as they got into McKinsey’s, we would have a problem.

Nevertheless, this is an area which is moving very fast, and even though there are issues today, many may be resolved in years or even months.

So, what does that mean for the economy?


AI could harm workers and the economy

Normally economists assume that productivity gains will enable the same number of workers to produce more valuable goods and services for the same number of hours worked. But that is not the only possible outcome.

Most of the applications of AI so far have resulted in lower hiring at junior levels – though in some cases in increased hiring of more senior staff.

A Harvard study suggested that junior workers are already being hard hit, and this appears to be true in professional services like law, accountancy and consulting, as well as in software development. Larger firms are more active in reshaping their workforces than smaller ones, with around one in four expecting to reduce staff numbers.

There is, at the very least, a scenario that needs to be considered: one in which the bulk of AI deployment is aimed at reducing headcount. In that case, a complex web of causes and effects needs to be understood.

How AI could impact households and the wider economy

A diagram showing the interplay between AI and the economy.

If things pan out this way, we could see:

  • Increasing adoption of AI as the capability develops;
  • Most applications being used to displace workers;
  • Unemployment rising and average incomes decreasing;
  • Household spending falling;
  • Tax revenue and government spending falling.

And since the two largest components of GDP are household spending and government spending, we could see falling GDP. Even worse, these effects could be self-reinforcing, creating a vicious circle of accelerating economic decline, as weak GDP and low demand for their goods and services force companies to make even greater efforts at cost reduction.
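The feedback loop described above can be sketched as a toy simulation. Everything here is an invented illustration, not a forecast: the parameter names and values are assumptions chosen only to show how displacement-driven cost cutting can feed back into falling demand.

```python
# Toy model of the self-reinforcing cycle: AI displaces workers, incomes
# and spending fall, weak demand prompts further cost cutting.
# All parameters are illustrative assumptions, not estimates.

def simulate(periods=10,
             employment=1.0,     # employed share of workforce (start: 100%)
             displacement=0.05,  # baseline share of jobs displaced per period
             spend_share=0.9,    # fraction of income households spend
             tax_rate=0.3):      # tax take on household income
    gdp_path = []
    for _ in range(periods):
        income = employment                      # income tracks employment
        household_spending = spend_share * income
        government_spending = tax_rate * income  # government spends what it raises
        gdp = household_spending + government_spending
        gdp_path.append(round(gdp, 3))
        # Feedback: the weaker GDP is, the harder firms cut costs.
        cost_pressure = 1.0 + max(0.0, 1.0 - gdp)
        employment *= (1.0 - displacement * cost_pressure)
    return gdp_path

path = simulate()
print(path)  # GDP falls each period, and the rate of decline accelerates
```

The point of the sketch is structural, not numerical: once displacement feeds back into demand, the decline compounds rather than settling at a new equilibrium.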

The only people who would benefit from that are major shareholders in AI companies.

Conclusion

The impact of AI is just one example of the general point covered in last week’s article. Progressive policymaking requires joined-up thinking of a kind we are simply not seeing from politicians in the UK or elsewhere. Current UK policy on AI is to embrace it as fast as possible in the hope that its effects will be clearly positive. It could indeed be extremely positive, but early indications are that a catastrophic outcome is at least plausible. We cannot afford to wait before formulating policy to protect the population and the economy itself.

As last week’s article pointed out: policymakers could develop the capabilities to formulate this kind of joined-up policy. They should. And we can help them.

If you think this is important for policymakers to understand, please send a link to your MP, and take a look at the 99% organisation and join us.