AWS, Microsoft, and NVIDIA Partner With Pentagon on AI — Why Anthropic Was Left Out

The U.S. Department of Defense is rapidly expanding its use of artificial intelligence, and some of the biggest names in tech are now involved. In a major move announced on May 1, 2026, the Pentagon confirmed new agreements with Amazon Web Services (AWS), Microsoft, and NVIDIA, along with OpenAI, Google, SpaceX, and startup Reflection AI, allowing their AI technologies to operate inside classified military systems.

However, one major AI company was noticeably missing from the list: Anthropic.

So why was Anthropic excluded while other AI giants secured Pentagon partnerships? Here’s everything you need to know.

Pentagon Expands AI Partnerships With Big Tech

The Pentagon announced agreements with seven leading AI companies to deploy advanced artificial intelligence tools across its classified networks.

The approved companies include:

  • Amazon Web Services
  • Microsoft
  • NVIDIA
  • OpenAI
  • Google
  • SpaceX
  • Reflection AI

These companies will now help power AI tools within the Pentagon’s Impact Level 6 and Impact Level 7 networks, which are used for highly sensitive and classified operations.

According to the Department of Defense, this move is part of a broader effort to transform the U.S. military into an “AI-first fighting force.” The goal is to improve:

  • Military logistics
  • Intelligence gathering
  • Cybersecurity defense
  • Battlefield decision-making
  • Operational planning
  • Language translation
  • Data analysis

Officials also said they want to avoid relying too heavily on any single AI provider. By working with multiple companies, the Pentagon gains flexibility and access to a wider range of AI capabilities.

Why Was Anthropic Left Out?

Anthropic was previously involved in government AI initiatives and even had contracts linked to classified systems.

However, tensions reportedly grew between Anthropic and the Pentagon over how its AI tools could be used.

Reports suggest Anthropic pushed for restrictions that would prevent its AI systems from being used for:

  • Mass domestic surveillance
  • Fully autonomous weapons systems
  • Certain military targeting operations without human oversight

The Pentagon reportedly wanted broader access to AI tools for “all lawful use cases,” but Anthropic resisted those terms.

As a result:

  • The Pentagon labeled Anthropic a “supply-chain risk”
  • Government agencies began phasing out its AI tools
  • Anthropic filed legal challenges against the U.S. government

This disagreement appears to be the main reason Anthropic was excluded from the latest Pentagon AI agreements.

Why AWS, Microsoft, and NVIDIA Matter

Amazon Web Services

AWS already powers major government cloud infrastructure through contracts with defense and intelligence agencies. Its secure cloud environment makes it a natural fit for military AI deployment.

Microsoft

Microsoft has longstanding defense partnerships, including military cloud computing and cybersecurity services. Its AI tools and enterprise systems are already widely used across government agencies.

NVIDIA

NVIDIA plays a critical role because its GPUs power most advanced AI systems worldwide. Military AI models require massive computing power, making NVIDIA essential to defense AI expansion.

Growing Ethical Concerns Around Military AI

Not everyone supports deeper ties between Silicon Valley and the military.

Critics warn that AI could be used for:

  • Autonomous weapons
  • Mass surveillance
  • Facial recognition abuse
  • Faster military escalation
  • Reduced human oversight in warfare

Some employees at major tech firms have reportedly raised concerns about AI being used in military operations.

This debate mirrors previous controversies such as Google’s involvement in Project Maven, where workers protested the company’s military contracts.

What This Means for the Future of AI

The Pentagon’s latest agreements show that artificial intelligence is becoming central to modern warfare.

The U.S. government is racing to stay ahead of global competitors like China and Russia in AI development.

Meanwhile, companies must decide how far they are willing to let governments use their technologies.

Anthropic’s exclusion highlights an important divide in the AI industry:

  • Some companies are willing to work closely with defense agencies
  • Others want stricter ethical boundaries on how their AI systems are deployed

This debate will likely shape the future of AI regulation, military technology, and global security.

Final Thoughts

The Pentagon’s partnerships with Amazon Web Services, Microsoft, and NVIDIA signal a major shift toward AI-powered defense operations.

At the same time, Anthropic's exclusion shows that ethical disagreements over how AI can be used are becoming increasingly consequential.

As military AI expands, the biggest question remains:

How much control should governments have over powerful artificial intelligence tools?
