Global AI Rivalry: Security Risks and Strategic Threats in Artificial Intelligence
The growing global AI rivalry has raised serious security concerns, as American AI firm Anthropic accuses Chinese companies DeepSeek, Moonshot AI, and MiniMax of distilling advanced AI models. Simultaneously, the Pentagon labels Anthropic itself as a supply chain risk. This dispute highlights the dual-use nature of AI, its potential military applications, and the challenges of controlling powerful AI technologies. Understanding these strategic risks is critical for global innovation, governance, and ensuring responsible use of artificial intelligence in both civilian and military sectors.
Why in the News?
A major debate has emerged in the global Artificial Intelligence (AI) sector involving the American AI company Anthropic.
The company has asked that three Chinese AI firms — DeepSeek, Moonshot AI, and MiniMax — be treated as national security threats.
These firms are accused of “distilling” advanced AI models developed by American companies.
At the same time, the Pentagon has labelled Anthropic itself as a “supply chain risk”, raising concerns about how its AI technology is used in military operations.
The dispute highlights growing tensions over AI technology, military use of AI, and global competition in artificial intelligence.
What are the Key Highlights?
Accusations Against Chinese AI Labs
Anthropic has accused Chinese AI companies of copying knowledge from advanced American AI models.
The companies accused include:
DeepSeek
Moonshot AI
MiniMax
What is AI Distillation?
Distillation means training a smaller AI model on the answers produced by a more powerful model.
The smaller model sends a large number of queries to the stronger model.
It learns patterns from the responses and improves its own performance.
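The idea described above can be sketched in a toy example. This is a minimal illustration, not the method any of the named companies actually uses: the "teacher" here is a hidden linear function standing in for a powerful model that can only be queried, and the "student" is a smaller model trained by gradient descent to imitate the teacher's answers.

```python
import random

# Hypothetical "teacher": a strong model we can only query, not inspect.
# Here it is simply a hidden linear function of the input.
def teacher(x):
    return 2.0 * x + 1.0

# "Student": a smaller model with its own parameters, trained to
# imitate the teacher's answers rather than any original training data.
w, b = 0.0, 0.0
lr = 0.01

random.seed(0)
for step in range(5000):
    x = random.uniform(-1.0, 1.0)   # the student "asks a question"
    target = teacher(x)             # the teacher's answer
    pred = w * x + b                # the student's current answer
    err = pred - target
    # gradient descent on the squared error between student and teacher
    w -= lr * err * x
    b -= lr * err

# After many queries, w and b approach the teacher's hidden 2.0 and 1.0.
print(f"student learned w={w:.2f}, b={b:.2f}")
```

The student never sees how the teacher was built; it recovers the teacher's behaviour purely from query-and-response pairs, which is why large-scale querying of a commercial model is treated as a form of knowledge extraction.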
Large-Scale Distillation Allegation in Global AI Rivalry
According to Anthropic, the Chinese labs conducted distillation on a massive scale.
Around 16 million interactions were made with Anthropic’s AI model Claude.
These interactions were carried out using around 24,000 fake accounts.
Use of AI in Military Operations
AI systems developed by companies such as Anthropic, OpenAI, Google, and xAI have reportedly been used by the U.S. military.
These systems helped speed up the “kill chain”, which includes:
Identifying targets
Legal approval
Carrying out military strikes.
Pentagon’s Supply Chain Risk Label
The Pentagon labelled Anthropic as a supply chain risk.
This label is usually used for companies linked with foreign adversaries.
Anthropic has challenged this designation in court.
Debate on AI as a Strategic Technology
Some policymakers compare generative AI to nuclear weapons.
The argument is that AI should be tightly controlled to prevent dangerous proliferation.
However, AI is also a general-purpose technology, similar to semiconductors.
Difficulty of Controlling AI Technology
AI research mostly happens in the private sector.
Many researchers move between countries and companies.
Export controls on semiconductors have already been partially bypassed.
Limits of Restrictions
The success of DeepSeek in building strong models at lower cost shows that restrictions are not always effective.
This demonstrates how quickly AI technologies can spread.
What is the Significance?
Rising Global Competition in Artificial Intelligence
The dispute shows the intense competition between the United States and China in AI technology.
AI leadership is increasingly seen as a strategic advantage in global power politics.
AI as a Dual-Use Technology
AI has both civilian and military applications.
The same technology used for chatbots or research can also be used for:
Surveillance
Cyberwarfare
Autonomous weapons.
Expansion of AI in Military Systems
AI is already being used in military operations to speed up decision-making processes.
The use of AI in the “kill chain” shows how technology can change modern warfare.
Weakness of Corporate Guardrails
Companies sometimes create rules or restrictions on how their AI can be used.
However, governments or military organisations can pressure companies to change these rules.
This shows that corporate rules alone cannot control powerful technologies.
Increasing Market Power of Large AI Companies
A coordinated response among AI firms, cloud providers, and policymakers may strengthen the dominance of a few large companies.
This can reduce competition and innovation.
Debate Over Intellectual Property in AI
AI companies argue that distillation is intellectual property theft.
However, critics point out that AI models themselves are trained using large amounts of public internet content without consent.
This raises questions about fairness and ownership in AI development.
Impact on Global Innovation and Collaboration
Strict technology restrictions can reduce scientific collaboration and innovation.
They may also limit the spread of useful AI tools for economic development and research.
Challenges
Difficulty in Controlling AI Technology
AI models are mathematical systems that can easily be copied or replicated.
This makes it difficult to control their global spread.
Talent Mobility
Many AI researchers trained in the U.S. now work in companies across the world.
This makes it difficult for governments to restrict knowledge transfer.
Ineffective Export Controls
Restrictions on semiconductors and AI tools have been bypassed in many cases.
Companies and countries often find alternative solutions.
Risk of Military Misuse
AI can be used in autonomous weapons, surveillance, and cyberwarfare.
This raises serious ethical and security concerns.
Growing Corporate Dominance
A small number of companies control the most powerful AI systems.
This concentration of power can influence global technology governance.
Weak Global Governance Framework
There are no universal global rules for the military use of AI.
Different countries follow different standards.
Way Forward
Establish International AI Governance
Countries should create global agreements on responsible AI use.
These agreements should include:
Rules on military applications
Ethical guidelines for AI deployment.
Ensure Human Control in Military AI
AI systems used in warfare should operate under meaningful human control.
Humans must remain responsible for lethal decisions.
Prohibit Mass Civilian Surveillance
Governments should agree to limit large-scale surveillance using AI technologies.
This will protect civil liberties and human rights.
Develop Transparent Technical Standards
AI systems used in sensitive areas should follow auditable technical standards.
Independent audits can verify that systems operate safely.
Promote Responsible Innovation
Governments and companies should balance innovation with safety regulations.
Policies should encourage open research while preventing misuse.
Encourage Global Cooperation
Countries should cooperate through multilateral and plurilateral agreements.
These agreements must apply to all major powers to be effective.
Conclusion
The rapid growth of artificial intelligence is reshaping technological competition and security thinking across the world. Managing this transformation requires strong cooperation among governments, researchers, and industry so that the benefits of AI can be realised without creating new global risks.