Technology

Regulating the Present Away: Why Britain’s Health-Tech AI Rulebook Risks Missing the Future

By Distilled Post Editorial Team

In the past few years, something quietly extraordinary has been happening above our heads. SpaceX has placed more than 5,000 satellites into low-Earth orbit, creating the backbone of a global, always-on data network. By its own estimates, that number is set to rise sharply again by 2027. This is not about spectacle. It is infrastructure, built at planetary scale, designed to support real-time data, automation and artificial intelligence everywhere at once.

For Orlando Agrippa, founder and chief executive of Sanius Health, the lesson is strategic rather than technological. “When you see thousands of satellites launched in such a short period, with plans to expand that network again before the end of the decade, it tells you how seriously AI adoption is being taken globally,” he says. “This isn’t experimentation. It’s long-term industrial planning.”

That context matters as Britain tightens its approach to AI regulation, particularly in healthcare. After years of pilots and proofs of concept, regulators are now focused on removing transient, poorly evidenced tools and reinforcing approval pathways. The instinct is understandable. Patients are not beta testers, and the NHS cannot afford systems that overpromise and underdeliver.

Yet history suggests that regulation without an adoption strategy carries its own risks. In the 1990s and early 2000s, the UK was a world leader in plant science and agricultural biotechnology, only to retreat from deployment as political and regulatory resistance to genetically modified crops hardened across Europe. While the US, Brazil and parts of Asia integrated GM technology at scale, Europe opted for precaution over adoption. UK food prices rose by around 25 to 35 per cent in nominal terms over that period, even as consumers were partially insulated by imports of GM-enabled crops grown elsewhere. Productivity gains, investment and industrial leadership accrued overseas, leaving Britain a net importer of technologies its own scientists had helped create.

The parallel with AI is difficult to ignore. Globally, the pace of investment is accelerating. The United States, operating as a single market of more than 330 million people, invested over $65 billion in AI in 2024 alone. Europe invested less than a quarter of that amount across all member states combined. In healthcare, the US Food and Drug Administration has authorised more than 500 AI-enabled medical devices, while European approvals remain slower and fragmented.

“The US deploys technology at population scale,” Agrippa argues. “You can debate outcomes, but the speed difference is undeniable. Europe is still trying to agree the framework while others are already on version three.”

From a technology-risk perspective, the real comparison is not transatlantic. “The real race is the US versus China,” Agrippa says. “In parts of China, some of the AI technologies we are still debating are already considered mature. They are iterating at a speed Europe hasn’t adjusted to yet.”

None of this is an argument for deregulation. Healthcare AI has earned scepticism. Algorithms trained on biased data, opaque decision-making and weak clinical validation pose real risks. Removing tools that cannot demonstrate safety or benefit is necessary stewardship.

The danger lies in what replaces them. Regulation that only subtracts, without enabling safer alternatives, creates a vacuum. Patients already upload clinic letters into general-purpose chatbots to interpret their diagnoses. Clinicians quietly experiment with ambient voice tools to reduce the documentation burden, often without formal guidance.

“What concerns me,” Agrippa notes, “is that patients and clinicians are already using these tools anyway. If regulation only removes options instead of guiding adoption, we lose visibility and governance.”

The UK still has formidable advantages. The NHS serves more than 65 million people through a single-payer system, with deep longitudinal data and a workforce under intense pressure. Ambient AI alone could release thousands of clinical hours each year. Decision-support tools could reduce unwarranted variation and improve safety at scale.

“This isn’t about choosing innovation over safety,” Agrippa concludes. “It’s about recognising that standing still while others accelerate is itself a risk. If we regulate with intent rather than fear, the NHS could leapfrog years of incremental progress.”

In a world moving faster, not slower, regulation cannot simply tidy up the present. It must also make deliberate space for the future.