AI is more than just a substitute for specific human functions. Its essence is shaped by the data it processes and the complexity of its algorithms. Unlike commercial applications, AI in warfare cannot readily amass extensive training data. The confidentiality of enemy information and the dynamic nature of warfare further complicate data acquisition.
Even the most advanced AI algorithms fall short of emulating the nuanced decision-making of an experienced general who bases decisions on a wealth of professional knowledge and real-life experience. This differs significantly from current discussions about the military application of AI, often encapsulated in the “human in the loop” concept.
Current discussions around AI governance focus on misinformation, intellectual property, privacy and virality. However, these challenges aren’t new. False information has existed in media such as newspapers, radio and television since their inception.
What’s more, technology isn’t the source of all evil, and neither should it be a scapegoat for flawed human decision-making. If we outsource moral and political choices to intelligent machines, we ignore the warning of cybernetics pioneer Norbert Wiener – that these responsibilities will ultimately return to us.
From an international governance perspective, if there’s anything to learn from nuclear arms control, it’s that humanity is capable of figuring out how to manage emerging, unpredictable technologies. Given the rapid evolution of AI and the diverse governance approaches worldwide, our focus should shift to adaptable cooperation.
Take nuclear energy, automobiles, aviation and pharmaceuticals. These are not mere triumphs of innovation but milestones on a long, arduous journey towards safety and civility. This complex endeavour requires flexible, sustained collaboration among diverse stakeholders. The recent US-China consensus and the declaration at the UK AI Safety Summit herald promising starts.
But can an entity surpassing human intelligence ever be brought fully under human control? The answer may lie not in a path-dependent model of technological governance but in a deeper reflection: our perception of technology as a tool of power has obscured its true nature.
Dong Ting is a resident research fellow at the Centre for International Security and Strategy, Tsinghua University