Dario Amodei, chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.
Ruhani Kaur | Bloomberg | Getty Images
Last August, Pentagon technology chief Emil Michael, a former Uber executive and lawyer, took on the added role of overseeing the Defense Department's artificial intelligence portfolio. A month earlier, Anthropic had been awarded a $200 million DOD contract that expanded its work with the agency.
"I said, 'I just want to see the contracts,'" Michael told the All-In Podcast on Friday, reflecting on his early days managing the AI portfolio. "You know, the old lawyer in me."
Michael's request kicked off a months-long review process that culminated in the Defense Department banning Anthropic's technology, leaving the military without its hand-picked AI models to operate in the most sensitive environments. In an unprecedented move, the DOD designated Anthropic a supply chain risk, a label that has historically only been applied to foreign adversaries. It will require defense vendors and contractors to certify that they don't use the company's models in their work with the Pentagon.
Anthropic sued the Trump administration on Monday, calling the government's actions "unprecedented and unlawful" and claiming that they are "harming Anthropic irreparably," putting hundreds of millions of dollars' worth of contracts in jeopardy.
The DOD's sudden reversal came as a shock to many officials in Washington who viewed Anthropic's models as superior (they were the first to be deployed on the agency's classified networks) and who championed the company's ability to integrate with existing defense contractors like Palantir. The decision was all the more puzzling given that the Trump administration had threatened during negotiations to invoke the Defense Production Act, which could have forced Anthropic to grant the military access to its technology.
"I don't know how these two things can both be true in reality," said Mark Dalton, a retired Navy rear admiral who now leads technology and cybersecurity policy at R Street, a think tank in Washington, D.C. "Something is so critical that you have to invoke the DPA, and so risky that you put a designation on it that's reserved for foreign adversaries."
Defense experts like Dalton expressed concern about the government's decision. Not only does it set a troubling precedent, they argue, but it also means the administration is banishing a key technology vendor that has been lauded for its diligence on AI safety, its tough rhetoric against China and its entrepreneurial chops, having become one of the fastest-growing tech startups in the U.S.
Former DOD official Brad Carson, who is now co-founder and president of the AI policy nonprofit Americans for Responsible Innovation, said the move is particularly troubling for military personnel, who have come to rely on Claude. An ex-Navy intelligence officer who served in Iraq, Carson said he has talked to a number of retired officers who told him that "warfighters are not happy about it."
"You're not so excited if you're in the military," said Carson, who worked in President Obama's Defense Department until 2016, was deployed to Iraq before that while in the Army, and also served two terms in Congress as a Democrat from Oklahoma. "They view Claude as being a better product, the most reliable, with the most user-friendly outputs they can assimilate into planning."
CNBC spoke to 17 AI policy experts, former Palantir and Anthropic employees, tech analysts and researchers about Anthropic's critical role in the Defense Department and what comes next. Several of the people asked not to be named because they weren't authorized to speak on the matter.
Anthropic declined to provide a comment for this story.
Anthropic CEO Dario Amodei founded the San Francisco-based company in 2021 alongside his sister, Daniela Amodei, and a handful of other researchers. The group had defected from OpenAI, before the launch of ChatGPT, over concerns about the company's direction and attitude toward safety. They spent years carefully building Anthropic's reputation as a firm more dedicated to responsible AI deployment.
Anthropic launched its family of AI models, known as Claude, in March 2023, a few months after ChatGPT hit the market and quickly went viral. In the three years since introducing Claude, Anthropic has raised billions of dollars in capital, en route to a $380 billion valuation.
The company is now under immense pressure to justify that price tag and has been forced to rapidly commercialize its technology in order to keep pace with OpenAI and other rivals like Google.
Partnering with AWS and Palantir
While OpenAI was enthralling consumers, Anthropic found quick success selling to large enterprises, including the DOD. It's an area Amodei began focusing on early, recognizing the business and societal importance of working closely with the government and military and helping to establish principles for safe uses of a technology with the power to lead to potential catastrophes, according to people with knowledge of the matter.
The company began building relationships and making inroads with officials in Washington, D.C., and Amodei was among the few AI industry executives invited to meet with then-Vice President Kamala Harris in May 2023.
Around that same time, Anthropic turned to a familiar tech partner that could help it prosper among D.C. technologists: Amazon Web Services.
Anthropic's Claude became available within AWS' Bedrock service that year, which helped it gain traction within the government tech community, several sources said. Federal agencies could begin experimenting with Anthropic's models because they were accessible within AWS' government-sanctioned environment.
Amazon has been one of Anthropic's largest financial backers since 2023, investing a total of $8 billion in the startup.
As pilot projects got underway, many federal employees found that Claude produced more compelling results than other models from companies like OpenAI and Meta, the sources said. Claude could provide step-by-step reasons for why it would derive an answer or complete a task, which was critical for federal agencies that require robust auditing and verification, sources said.
And because Anthropic had prioritized building for enterprise customers, the company's user experience was especially well suited to desktop computers, the people said. With a powerful AI model and an intuitive user interface, Anthropic began to earn credibility with federal workers, sowing the seeds for a major partnership with Palantir, a software and services provider that counts on government contracts for about 60% of its U.S. revenue.

Anthropic's push into the federal government was aided by its head of global affairs, Michael Sellitto, who led cybersecurity policy at the National Security Council from 2015 through 2018, and by former AWS executive Thiyagu Ramasamy, the head of the company's public sector business.
In a LinkedIn post last week, Ramasamy, who joined in early 2025, expressed his concern about the Pentagon's actions.
"Today, I mourn for the customers I have deep respect for," Ramasamy wrote. "They were moving at a pace I couldn't have imagined during my twenty years in this industry, and now it comes to a halt over a weekend. Down, but not out."
Sellitto and Ramasamy didn't respond to requests for comment.
In November 2024, shortly before Ramasamy's arrival, Anthropic and Palantir announced a partnership with AWS that would allow U.S. intelligence and defense agencies to access Claude. Some Anthropic staffers were upset about the deal when it was announced, a former employee said, adding that it prompted "many big Slack threads" and became a point of lingering tension within the company.
Having Palantir as a partner helped Anthropic build direct lines to the DOD and fast-tracked its integration into the highest-level, classified projects. The partnerships were a crucial reason Anthropic was able to be the first model company to officially deploy across classified networks, said Lauren Kahn, a senior research analyst at Georgetown's Center for Security and Emerging Technology.
"The fact that Anthropic is able to basically play nice with others like Palantir, AWS, Google, etc., especially Palantir," she said, "is extremely valuable."
In its July 2025 release announcing the $200 million defense contract, Anthropic said it had "accelerated mission impact across U.S. defense workflows with partners like Palantir." Anthropic said its technology helped the government's defense and intelligence organizations "rapidly process and analyze vast amounts of complex data."
A month after winning the Pentagon contract, Anthropic partnered with the U.S. General Services Administration to bring its AI models to other participating agencies for $1 a year.
But by that point, Anthropic's relationship with the federal government was beginning to sour.
President Trump had been sworn into office that January, and Amodei wasn't a fan, having once likened the commander in chief to a "feudal warlord" in a since-deleted Facebook post, according to Fortune.
Other industry executives, including OpenAI CEO Sam Altman, Apple CEO Tim Cook and Google CEO Sundar Pichai, had been photographed rubbing elbows with Trump at the White House, but Amodei was conspicuously absent. He didn't attend Trump's inauguration last year.
Amodei told staffers earlier this month that the administration doesn't like Anthropic because it hasn't donated or offered "dictator-style praise to Trump," according to a report from The Information.
He apologized for the tone of those remarks in a statement on Thursday, writing that they were written after a "difficult day for the company" and don't "reflect my careful or considered views."
Amodei has also drawn the ire of David Sacks, the venture capitalist serving as the White House AI and crypto czar, who has accused Anthropic of supporting "woke AI," largely for its positions on regulation.
"This feels to me like a dispute that's about politics and personalities," Michael Horowitz, a senior fellow for technology and innovation at the Council on Foreign Relations, said in an interview. "It's masquerading as a policy dispute."
By the time the Trump administration blacklisted Anthropic, the startup's tools had been widely adopted across government agencies. A transition is already underway, as groups including the U.S. Department of Health and Human Services, the Treasury Department and the State Department have confirmed they're moving off of Claude.
But that process is especially complicated within the DOD, partly because the U.S. is actively carrying out a military operation in Iran. Anthropic's models were used to support that operation even after the company was blacklisted, as CNBC previously reported.
Transitioning away from Anthropic to a new vendor will take the DOD time and comes at a significant cost in terms of efficiency, said Jacquelyn Schneider, a Hargrove Hoover fellow at Stanford University's Hoover Institution, in an interview.
"You're not going to walk away from technologies that are deeply embedded in your wartime processes right before you go to war," Schneider said.
WATCH: Why the U.S. Defense Department blacklist of Anthropic is so unprecedented