Business

Google Grants Pentagon Broad AI Access After Anthropic Refused Similar Terms

By Distilled Post Editorial Team

Google has expanded the US Department of Defense's access to its artificial intelligence models across classified networks, granting permission for all lawful uses of the technology. The agreement fills a gap left by Anthropic, which declined to offer the Pentagon comparable terms, citing concerns about domestic surveillance and the potential use of its models in autonomous weapons systems.

The distinction between the two companies' positions is substantive. Anthropic's refusal was based on specific objections to deployment scenarios it considered incompatible with the safety conditions it attaches to its models, including mass surveillance applications and autonomous lethal systems operating without human oversight. Google's agreement does not appear to carry equivalent restrictions. The Department of Defense has moved away from compartmentalised AI deployments in favour of platform-wide integration capable of operating across both administrative and combat environments, and the terms of the Google agreement are compatible with that approach.

The healthcare implications of broad military AI access are not peripheral to the agreement. AI systems are already being used in tactical combat casualty care contexts, automating elements of trauma triage and supporting remote clinical decision-making in environments where qualified medical personnel are not immediately available. Where Google's models are integrated into those workflows under a lawful use standard, the question of what oversight governs their clinical outputs becomes directly relevant. A model making or contributing to triage decisions in a combat setting operates under different accountability structures from one deployed in a regulated NHS or hospital environment, and the agreement does not appear to address how that distinction is managed.

The biosurveillance dimension is a separate concern. Anthropic's stated objections referenced domestic mass surveillance, and in a healthcare context that concern extends to the monitoring of biological markers and population health data under national security authorities. The legal basis for such surveillance in the United States is broader than in comparable democracies, and AI models with access to health data at scale could, under military integration, be applied to population monitoring programmes that would not meet the regulatory standards required of civilian health applications.

The dual-use risk attached to medical AI compounds these concerns. Models trained to identify pathogens for vaccine development or to analyse biological samples for diagnostic purposes carry capabilities that are not inherently restricted to defensive or therapeutic applications. Under an unrestricted military access arrangement, the same analytical capacity that supports public health responses could in principle be redirected toward applications with offensive potential. The lawful use standard does not eliminate that risk; it defers to the legal framework of the contracting government to define its limits.

Google's decision follows a pattern that has generated internal conflict at the company before. Its Project Maven contract with the Pentagon, a programme applying AI to the analysis of drone surveillance footage, prompted a significant employee protest in 2018 and ultimately led Google to decline to renew the contract. The current agreement is broader in scope than Project Maven, and whether it produces a similar internal response will depend in part on how its terms and applications become known to Google's workforce in the months ahead.

The broader question raised by the divergence between Google and Anthropic is what standards should govern AI deployment in military and national security contexts, and who has the authority to set them. At present, those standards are set largely by individual companies through their contractual terms, and by the legal frameworks of the governments they contract with. There is no international framework equivalent to the Geneva Conventions that addresses AI's role in battlefield medicine, triage automation, or biological data analysis in conflict settings. In the absence of such a framework, the ethical boundaries around military medical AI are being drawn, if they are being drawn at all, through commercial negotiation rather than international agreement.

Whether the lawful use standard is a sufficient constraint on how AI models are applied in the most sensitive areas of military medicine and biosecurity is a question the Google agreement has raised without answering. The commercial logic of the deal is clear. The clinical and ethical logic remains considerably less so.