News
From 2022 to 2024, the Strategic Foresight Group (SFG) and the Geneva Centre for Security Policy (GCSP), with support from the Future of Life Institute (FLI) and Normandy for Peace, steered a dialogue process for thought leaders from the P5 countries (China, France, Russia, the UK and the USA) on global catastrophic risks, with a focus on the application of AI and other technologies in nuclear command, control and communications (NC3), including decision support infrastructure. The process involved four in-person roundtables, several online conferences, and bilateral consultations with the P5 Disarmament Ambassadors and other stakeholders.
Parallel to the SFG-GCSP process, the question of AI and nuclear convergence is also being discussed in inter-governmental fora. In November 2024, President Biden and President Xi Jinping agreed that there should be human control of AI in nuclear weapons. The dialogue process has since considered how to make this agreement work.
In light of these developments, and considering that AI systems are not predictable, reliable or verifiable, it is necessary to advocate for an international framework for the responsible use of AI in the nuclear domain, taking into account unpredictable future risks.
A high-level roundtable of P5 experts was hosted in Geneva from 4 to 6 December 2024 to discuss the concept of such a framework. The participants included political leaders, military officials, strategic experts and specialists in artificial intelligence from the P5 countries. The roundtable focused on the following principles for the AI-NC3 interface:
1. Human control - ensuring that human operators retain control over all nuclear decisions, including assessment of inputs from AI decision support systems. If probabilistic AI is used for threat detection or other decision support functions, there should be another source of input for comparative assessment.
2. Explainability - a clear understanding of how algorithms operate; decisions made by AI should be explainable to human operators.
3. Adherence to International Humanitarian Law
The principle of transparency is desirable, but it is difficult to implement. Meaningful transparency would require robust verification measures, which do not appear practical in the short term.
The roundtable also identified voluntary measures that countries should take within their national jurisdictions. It further proposed a number of collaborative measures for the P5 to agree on and implement, and suggested the political measures that are required.
This initiative falls within the framework of the Normandy Manifesto for World Peace, which calls for the phased elimination of all weapons of mass destruction in a time-bound manner. The nuclear risk reduction measures in the context of AI are only the first step in this direction.