During a recent all-hands meeting, OpenAI CEO Sam Altman told employees about a potential agreement with the U.S. Department of War to use the company’s AI models and tools. The contract, which has not yet been signed, follows a tumultuous week in which tensions escalated between the Department of War and OpenAI competitor Anthropic, culminating in the termination of Anthropic’s contracts with the Pentagon.
Altman indicated that the government would allow OpenAI to build its own “safety stack” of technical, policy, and human controls governing how its AI models are deployed. Notably, if an AI model refuses a task, the government would not compel OpenAI to override that refusal. OpenAI would retain authority over the implementation of technical safeguards and would limit deployment to cloud-based environments, avoiding “edge systems” such as drones or aircraft.
In a significant concession, Altman stated that the government is amenable to incorporating OpenAI’s predefined restrictions into the contract. These include prohibiting the use of AI for autonomous weapons, domestic mass surveillance, and critical decision-making without human oversight.
The relationship between Anthropic and the government deteriorated recently, driven largely by comments from Anthropic CEO Dario Amodei that Department of War leadership perceived as offensive. Anthropic had previously held contracts for its AI technology but clashed with the Pentagon over its own usage restrictions, similar to those Altman described.
During the meeting, OpenAI leadership emphasized that foreign surveillance was a primary concern. They acknowledged that governments need surveillance tools, particularly against adversaries, amid ongoing debate over AI’s implications for democracy and national security.