In a pivotal meeting in Geneva on May 14, Chinese and U.S. envoys convened for the first official bilateral dialogue on artificial intelligence, an outgrowth of the November 2023 Woodside Summit between Chinese President Xi Jinping and U.S. President Joe Biden in California. While full details of the closed-door talks remain undisclosed, initial statements suggest Beijing voiced frustration over the Biden administration’s export controls on advanced computer chips and semiconductors, which could hinder China’s progress in AI development. The U.S. side, for its part, reportedly expressed concerns about China’s potential misuse of AI and the need for safety measures – concerns Washington cites to partly justify its export restrictions.
These talks underscore the delicate balance China must strike between advancing its AI capabilities and addressing legitimate safety concerns raised by the international community.
China’s industry development typically follows a top-down model, in which the central government plays an active role in overseeing an emerging sector to ensure responsible development. However, policymakers are increasingly recognizing that over-regulating AI could impede the pace of innovation. Striking the right balance between nurturing AI capabilities and ensuring responsible development has thus become a delicate exercise for Chinese authorities.
Of all the major players involved in the AI race, China’s stance on addressing the risks of AI is the least clear.
Domestically, China has taken notable steps to address AI risks, implementing stringent regulations against deepfakes and harmful recommendation algorithms as early as 2018. The Cyber Security Association of China (CSAC) announced the formation of an AI safety and security governance expert committee last October, and municipal governments in major tech centers like Shanghai, Guangdong, and most prominently Beijing – which houses more than half of China’s large language models – have called for the development of benchmarks and assessments to evaluate the safety of AI systems.
Internationally, China played an active role at last year’s AI Safety Summit in the United Kingdom, co-signing the “Bletchley Declaration” alongside the United States, European Union, and 25 other countries to strengthen cooperation on frontier AI risks. Last fall, China unveiled its own “Global AI Governance Initiative” with the goal of making “AI technologies more secure, reliable, controllable, and equitable.” Chinese experts have also contributed to consensus papers and dialogues with their global counterparts, jointly calling for safety research and governance policies to mitigate AI risks.
China has also made strides in technical AI safety.
However, concerns persist regarding China’s commitment to addressing the real risks of its burgeoning AI sector. Although U.S. National Security Council spokesperson Adrienne Watson did not elaborate on the specific concerns raised at the Geneva meeting, similar points have been aired in other settings.
While China has cracked down on deepfakes and misinformation domestically, some U.S. lawmakers have raised alarms about the potential use of AI-generated deepfakes to spread political disinformation overseas. Additionally, the lack of knowledge regarding China’s AI landscape and military applications has fueled apprehensions. The United States, for example, has publicly pledged to maintain human control over nuclear weapons, while China has so far remained silent on the issue.
Regardless of China’s current stance on AI safety, one thing is abundantly clear – Washington expects more from Beijing in addressing the risks posed by these powerful technologies. This expectation, however, creates a conundrum: Safety measures are often seen as impediments to rapid AI progress, the very goal China is fervently pursuing in order to catch up to top U.S. labs like OpenAI, Google DeepMind, and Anthropic.
Last year’s open letter signed by tech luminaries like Elon Musk and Steve Wozniak epitomized this tension by calling for a sweeping six-month pause on all AI development to allow for rigorous safety evaluations. Less extreme proposals typically require AI labs to dedicate a significant share of their computing resources to safety research, which could slow efforts to push the cutting edge – a compromise China may be unwilling to make in the heated AI race.
Yet the Geneva talks hint at a potential pathway where safety and progress are not mutually exclusive. For years, Beijing has railed against U.S. export controls limiting access to the advanced chips underpinning AI breakthroughs. If China demonstrated a credible commitment to mitigating AI risks, it would address many of the rationales the United States has offered to justify those controls.
This hardware-centric approach to AI safety is one that has gained prominence in recent years. If executed successfully, it could allow Beijing to import the cutting-edge computing technology needed to grow its AI sector while assuaging international concerns – a scenario where prioritizing safety and accelerating development could go hand-in-hand.
But regulations from the United States are about more than just safety. On the same day as the AI talks in Geneva, Biden unveiled steep tariff increases on an array of Chinese imports, including electric vehicle (EV) batteries, solar cells, and medical products – highlighted by a quadrupling of EV duties to over 100 percent. China’s own recent major initiatives promoting sustainability suggest these duties are motivated by broader geopolitical considerations. Therefore, even if China addresses U.S. concerns over AI safety, Washington could still refuse to budge on its crackdown.
Still, a scenario in which China gains access to advanced computing hardware by demonstrating AI safety commitments could be mutually beneficial. By allocating substantial resources to safety initiatives like “red-teaming” inspections and dedicated alignment research, China would not be accelerating progress in the area of greatest U.S. concern: the capabilities of its cutting-edge AI models. China, meanwhile, could harness greater computing power for model testing and for better understanding its AI applications, while reassuring the U.S. on national security concerns.
Simultaneously, demonstrating responsible AI development could bolster China’s international reputation, a key objective of its public diplomacy. Such a dynamic highlights a potential path where innovative progress and mitigating risks are complementary rather than opposing goals.
The coming months will witness an unprecedented level of international dialogue on the issue of AI safety. From this week’s Global AI Safety Summit in South Korea to the United Nations’ Summit of the Future, and continued governmental dialogues between China and the U.S., the world’s powers are convening to chart a course for responsible AI development. As this global conversation unfolds, all eyes will be on China and how it demonstrates a willingness to collaborate and promote responsible AI practices on the world stage.
Beijing’s actions during this pivotal period may determine the fate of its AI dream.