Artificial intelligence is reshaping the global economy, security, and governance, with the United States and China holding dominant positions. Their rivalry reflects a conflict between market logic, which emphasizes efficiency and innovation, and security logic, which prioritizes risk management and strategic control. Both powers exert strong normative influence in multilateral forums, yet the domestic incorporation of those norms remains limited. Market rationality accelerates the diffusion of technologies such as Large Language Models but aggravates risks of inequality, whereas security rationality mitigates threats but constrains cooperation. Divergences appear in domains including quantum AI, data governance, and military–civil fusion, where strategic confrontation delays regulatory adaptation. Effective global governance requires balanced and inclusive frameworks, as highlighted by initiatives such as the Bletchley Declaration, with institutions like the United Nations playing a bridging role. Progress depends on cultivating reciprocal trust, avoiding zero-sum dynamics, and achieving mutually beneficial outcomes in AI governance. The persistence of these tensions illustrates the structural challenge of aligning technological development with coherent global regulatory mechanisms.