‘This is his climate change’: The experts helping Rishi Sunak seal his legacy
Rishi Sunak wants Britain to lead on AI safety - IAN VOGLER/POOL/AFP via Getty Images

It took just 23 words for the world to sit up and pay attention. In May, the Center for AI Safety, a US non-profit, published a one-sentence statement warning that the risk of extinction from artificial intelligence should be treated as a global priority alongside pandemics and nuclear war.

Those who endorsed the statement included Geoffrey Hinton, known as the Godfather of AI; Yoshua Bengio, whose work with Hinton won the Turing Award, computer science's most coveted prize; and Demis Hassabis, the head of the Google-owned British AI lab DeepMind.

The statement helped to transform the public's view of AI from handy office aide to a potential threat of the kind usually confined to dystopian science fiction.

The Center itself describes its mission as reducing the “societal-scale risks from AI”. It is now one of a handful of California-based organisations advising Rishi Sunak’s government on how to handle the rise of the technology.

In recent months, observers have detected an increasingly apocalyptic tone in Westminster. In March, the Government unveiled a white paper promising not to “stifle innovation” in the field. Yet just two months later, Sunak was talking about “putting guardrails in place” and pressing Joe Biden to embrace his plans for global AI rules.

Sunak’s legacy moment

An AI safety summit at Bletchley Park in November is expected to focus almost entirely on existential risks and how to mitigate them.

Despite myriad political challenges, Sunak is understood to be deeply involved in the AI debate. “He’s zeroed in on it as his legacy moment. This is his climate change,” says one former government adviser.

In November, Bletchley Park will host Prime Minister Rishi Sunak's AI Safety Summit - Simon Walker / No 10 Downing Street

Over the past year, Downing Street has assembled a tight-knit team of researchers to work on AI risk. Ian Hogarth, a tech investor and the founder of the concert-finding app Songkick, was enlisted to head the Foundation Model Taskforce after penning a viral Financial Times article warning of the "race to God-like AI".

This month, the body was renamed the "Frontier AI Taskforce" – a reference to the bleeding edge of the technology, where experts see the greatest risk. Possible misuses could include creating bioweapons, for example, or orchestrating mass disinformation campaigns.

Human-level AI systems ‘just a few years away’

Hogarth has assembled a heavyweight advisory board including Bengio, who has warned that human-level AI systems are just a few years away and pose catastrophic risks, and Anne Keast-Butler, the director of GCHQ. A small team is currently testing the most prominent AI systems such as ChatGPT, probing for weaknesses.

Hogarth recently told a House of Lords committee that the taskforce is dealing with matters that are "fundamentally matters of national security".