Technology

OpenAI Plans Additional Oversight Safeguards Following Pentagon Partnership

By Distilled Post Editorial Team

New safeguards announced after criticism of defence partnership

OpenAI has announced additional surveillance and privacy safeguards following backlash over its recent agreement with the United States Department of Defense. The artificial intelligence company, led by chief executive Sam Altman, is seeking to strengthen protections in the deal after critics warned that the technology could be misused for mass surveillance or other controversial military applications.

The agreement, announced in late February 2026, will allow OpenAI’s advanced AI models to be deployed within the Pentagon’s classified network to support defence operations. However, the speed with which the partnership was unveiled drew concern from privacy advocates, AI researchers and some OpenAI employees about the potential implications for civil liberties.

In response, OpenAI has begun revising the terms of the contract to clarify restrictions on how the technology can be used.

Explicit ban on domestic surveillance

One of the most significant changes is the addition of explicit language preventing the AI systems from being used for domestic surveillance of American citizens. The amended agreement states that the technology “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” aligning the system with existing legal protections such as the Fourth Amendment.

The updated provisions also specify that the prohibition includes deliberate tracking, monitoring or surveillance using personal data, including information obtained from commercial data brokers.

Altman has said the company decided to revise the agreement after recognising that the original announcement did not adequately address public concerns. “We shouldn’t have rushed to get this out,” he acknowledged, noting that the ethical issues surrounding military use of AI are complex and require clearer communication.

In addition to the surveillance restriction, OpenAI said its technology would not be made available to intelligence agencies such as the U.S. National Security Agency without further contractual changes, providing another layer of protection.

Broader debate over AI and defence

The Pentagon agreement is part of a wider effort by the U.S. government to integrate advanced AI technologies into defence planning, logistics and intelligence analysis. In recent years, the Department of Defense has sought partnerships with leading AI companies in order to maintain technological advantage in areas such as data analysis, cyber security and autonomous systems.

However, the move has triggered intense debate within the technology sector. Critics argue that powerful AI systems could enable unprecedented surveillance capabilities or be used in autonomous weapons systems if appropriate safeguards are not in place.

Some technology workers have publicly protested military collaborations involving AI companies. Industry observers say the OpenAI agreement has intensified this debate by highlighting the lack of a clear global framework governing how advanced AI systems should be used in security contexts.

Rivalry with Anthropic adds tension

The controversy has also unfolded amid tensions between the Pentagon and Anthropic, another major AI developer. Anthropic had previously refused to allow its AI models to be used for mass surveillance or certain military functions, which contributed to a breakdown in negotiations with the U.S. government.

Shortly afterwards, OpenAI stepped in with its own agreement, which now includes similar safeguards. The sequence of events has drawn criticism from some industry leaders, who question why restrictions rejected during the earlier talks were ultimately accepted in OpenAI’s deal.

Altman has said he would prefer to see the same safety standards applied across the entire AI industry, rather than becoming a competitive advantage for individual firms.

Technical safeguards and oversight

OpenAI says it will enforce the agreement through a combination of technical safeguards, contractual restrictions and human oversight. The company plans to deploy its models through secure cloud infrastructure and maintain control over how the systems operate, ensuring that government users cannot remove safety features.

The firm also intends to embed engineers and technical specialists alongside defence teams to monitor the technology’s behaviour and prevent misuse.

Despite these assurances, critics argue that it remains unclear how such safeguards will be enforced in practice. Legal experts have warned that even if deliberate surveillance is banned, AI systems could still enable indirect or unintended monitoring depending on how they are deployed.

Implications for the wider technology sector

For the technology ecosystem as a whole, the episode highlights the growing intersection between artificial intelligence, national security and public accountability.

AI systems developed by companies like OpenAI are increasingly used in fields such as healthcare research, epidemiological modelling and drug discovery. However, the same underlying technologies can also be applied to intelligence analysis or military planning.

Experts say the debate surrounding the Pentagon deal underscores the need for clear international standards governing the use of AI in sensitive contexts. Without stronger regulatory frameworks, technology companies may face increasing pressure from governments, investors and civil society to demonstrate how they balance innovation with ethical responsibility.

As AI becomes more embedded in critical infrastructure and decision-making, the challenge for policymakers and industry leaders alike will be ensuring that powerful technologies deliver public benefit while protecting fundamental rights and freedoms.