OpenAI CEO Sam Altman remains in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic's roughly $200 million contract with the Pentagon imploded, and asked Altman to release more details about the agreement. Altman admitted it seemed "sloppy" in a social media post.
While this incident has become a major news story, it may just be the latest and most public example of OpenAI crafting vague policies around how the US military can access its AI.
In 2023, OpenAI's usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered that the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI's models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI's largest investor, and it had a broad license to commercialize the startup's technology.
That same year, OpenAI employees saw Pentagon officials walking through the company's San Francisco offices, the sources said. They spoke on the condition of anonymity because they aren't authorized to comment on private company matters.
Some OpenAI employees were wary of associating with the Pentagon, while others were simply confused about what OpenAI's usage policies meant. Did the policy apply to Microsoft? While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI's policies.
"Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service," said spokesperson Frank Shaw in a statement to WIRED. Microsoft declined to comment specifically on when it made Azure OpenAI available to the Pentagon, but notes the service was not approved for "top secret" government workloads until 2025.
"AI is already playing a significant role in national security and we believe it's important to have a seat at the table to help ensure it's deployed safely and responsibly," OpenAI spokesperson Liz Bourgeois said in a statement. "We have been clear with our employees as we've approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team."
The Department of Defense did not respond to WIRED's request for comment.
By January 2024, OpenAI had updated its policies to remove the blanket ban on military use. Several OpenAI employees learned about the policy update through an article in The Intercept, sources say. Company leaders later addressed the change at an all-hands meeting, explaining how the company would tread carefully in this area moving forward.
In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for "national security missions." Ahead of the announcement, OpenAI told employees that the partnership was narrow in scope and would only deal with unclassified workloads, the same sources said. This stood in contrast to a deal Anthropic had signed with Palantir, which would see Anthropic's AI used for classified military work.
Palantir approached OpenAI in the fall of 2024 to discuss collaborating through its "FedStart" program, an OpenAI spokesperson confirmed to WIRED. The company ultimately turned it down, telling employees it would have been too high-risk, two sources familiar with the matter tell WIRED. However, OpenAI now works with Palantir in other ways.
Around the time the Anduril deal was announced, a few dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company's military partnerships, sources say and a spokesperson confirmed. Some believed the company's models were too unreliable to handle a user's credit card information, let alone assist Americans on the battlefield.