Technology

OpenAI Revises Controversial US Defence Contract Following Criticism Over Handling

By
Distilled Post Editorial Team

OpenAI, the Silicon Valley-based developer of the popular ChatGPT artificial intelligence system, has announced changes to its recently signed contract with the US Department of Defence, commonly known as the Pentagon, after intense criticism that the original agreement was “opportunistic and sloppy”. The company’s CEO, Sam Altman, publicly acknowledged that the haste with which the deal was struck had created public concern about ethical safeguards and transparency in the use of advanced AI technologies in military contexts.

The original deal, finalised late in February, would have seen OpenAI’s technology deployed on classified defence networks. However, observers, including technology analysts, civil liberties advocates and even rival AI companies, raised alarms that the language of the contract lacked sufficient protections against domestic surveillance and the use of AI to automate high-stakes military decisions. In response, OpenAI has moved to clarify and strengthen the terms of its agreement with the Pentagon.

A Public Admission and Contract Amendments

In a rare public admission, Altman described the initial approach to the agreement as appearing “opportunistic and sloppy”, a phrase widely reported by international media. He conceded that OpenAI had perhaps moved too quickly to fill the gap left when another AI firm, Anthropic, walked away from negotiations with the Pentagon due to ethical concerns over surveillance and autonomous weapons use.

Under the revised terms, OpenAI says the company’s models will not be used for domestic surveillance of US citizens or for autonomous weapons systems. A spokesman for OpenAI noted that in addition to contractual language, technical measures such as cloud-only deployment and human oversight by cleared OpenAI engineers will help enforce those boundaries.

The amended agreement also explicitly prevents intelligence agencies, such as the National Security Agency, from using OpenAI’s AI models for domestic-oriented intelligence collection unless further modifications are agreed. This represents an effort by OpenAI to reassure critics that strong legal and technical guardrails are in place to protect civil liberties and avoid misuse of its technologies.

Industry Reaction and Broader Context

The Pentagon deal has triggered significant debate across the technology sector. OpenAI’s rival, Anthropic, whose own negotiations with US defence officials broke down amid disagreements over safety provisions, was publicly critiqued by the Pentagon as a “supply-chain risk”. Shortly after, OpenAI stepped into a position to provide its tools for military use.

This sequence of events has fed a broader discussion about how advanced AI should interact with national security and military operations. Critics argue that companies like OpenAI must tread carefully, balancing innovation with ethical considerations around privacy and the potential for weaponisation. Some in the industry have pointed out that moves to restrict one form of misuse should be accompanied by ongoing oversight and clarity around how AI systems behave in complex military and security settings.

Public Backlash and Boycott Movements

The deal also triggered a wave of public reaction, including online campaigns against OpenAI’s products. Movements such as “QuitGPT” emerged on social media platforms, urging users to cancel ChatGPT subscriptions in protest at the company’s decision to engage with the Pentagon on AI deployment. Critics taking part in these campaigns argue that handing over powerful AI systems to defence networks, even with safeguards, could erode public trust and blur the line between civilian technology and military use.

These grassroots reactions reflect a larger global conversation on how AI intersects with democratic values, including the right to privacy and accountability in powerful technological systems. Analysts say such reactions can influence corporate behaviour and regulatory priorities, especially as AI becomes more embedded in public and private sector operations around the world.

Looking Forward

While OpenAI’s revisions respond to immediate criticisms, the evolution of the Pentagon agreement underscores the ongoing challenge of aligning powerful AI capabilities with ethical norms and public expectations. As watchdog groups, industry voices and governments continue to debate appropriate guardrails, the case of OpenAI and the Pentagon may serve as a bellwether for how future AI-government contracts are scrutinised, negotiated and implemented.