The profile of Father Paolo Benanti (“The monk helping the Vatican take on AI”, Magazine, April 9) covered some critically important issues concerning the ethics of artificial intelligence. But it left unmentioned AI-originated existential risk (AIXR), which is less tangible than bias or inequality but equally profound.

Some scientists and philosophers argue that ever-more-capable AI might enable scenarios that put humanity’s very existence at risk. This might sound at once alarmist and banal, evoking Hollywood’s Terminator franchise. But the HAL 9000 envisioned by Stanley Kubrick is perhaps a closer metaphor.

Briefly, the issue is that companies building AI systems in a competitive race for dominance may make design decisions that result in the machines developing their own world views and goals, with little regard for human welfare. Geopolitics may exacerbate the problem: imagine if the US or Chinese military decided to “improve” its nuclear command-and-control systems with AI.

FT readers will appreciate the concepts of risk, uncertainty and loss. Given the novelty and complexity of AIXR, there is little consensus on its probability or timeframe, though there is some convergence of views on the possible loss (unrecoverable dystopia or outright extinction).

In light of the stakes, one hopes this issue makes the agenda at Hiroshima, where Father Benanti says a meeting is planned among leaders of the Eastern religions, including Hinduism, Buddhism and Shinto.

Curious readers might also consult Professor Stuart Russell’s 2021 BBC Reith Lectures, Toby Ord’s The Precipice (2020) or Nick Bostrom’s Superintelligence (2014).

Kanad Chakrabarti
New York, NY, US