Sam Altman, chief executive officer of OpenAI Inc., at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.
Prakash Singh | Bloomberg | Getty Images
OpenAI CEO Sam Altman told staffers late Thursday that he would like the company to “try to help de-escalate things” between rival Anthropic and the Department of Defense.
“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions,” Altman wrote in a memo that was viewed by CNBC. “These are our main red lines.”
Anthropic has until 5:01 p.m. ET on Friday to decide whether it will agree to give the Pentagon permission to use its artificial intelligence models in all lawful use cases without limitation. The startup wants assurance that its technology will not be used for fully autonomous weapons or domestic mass surveillance of Americans, but the DoD hasn’t budged.
Altman’s internal letter on Thursday was meant to show that OpenAI shares Anthropic’s boundaries. The Wall Street Journal was first to report the memo.
Prior to the Altman memo, OpenAI employees had begun to speak out in support of Anthropic on social media. Some 70 current staffers have signed an open letter titled “We Will Not Be Divided,” which aims to create a “shared understanding and solidarity in the face of this pressure” from the department, according to its website.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I have been glad that they have been supporting our war fighters,” Altman told CNBC in an interview on Friday. “I’m not sure where this is going to go.”
OpenAI was awarded a $200 million contract by the DoD last year, which allowed the agency to begin using the startup’s models in non-classified use cases. Anthropic was the first AI lab to integrate its models into mission workflows on classified networks.
Altman said he’ll see if OpenAI can strike a deal with the DoD to deploy its models in classified environments in a way that “fits with our principles.” He said the company would build technical safeguards and deploy personnel to “ensure things are working correctly.”
“We would ask for the contract to cover any use except those that are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons,” Altman wrote.
Altman said OpenAI has had meetings about the issue in recent days, and that the company hasn’t yet arrived at a decision on what to do. He said more meetings will take place with OpenAI’s safety teams on Friday.
“This is a case where it is important to me that we do the right thing, not the easy thing that looks tough but is disingenuous,” Altman wrote. “But I realize it may not ‘look good’ for us in the short term, and that there is a lot of nuance and context.”
— CNBC’s Kate Rooney contributed to this report.
WATCH: OpenAI closes $110 billion funding round with backing from Amazon, Nvidia, SoftBank