Technology

Slingshot Pulls Therapy Chatbot Ash Out of the UK Over Regulatory Concerns

By Distilled Post Editorial Team

Slingshot AI has withdrawn its AI therapy chatbot, Ash, from the UK market after concluding that the current regulatory landscape offers no clear pathway for wellbeing products that function as therapeutic tools. The company's CEO informed UK users that Ash would cease operating there after 23 January 2026, stating that the lack of regulatory clarity made it impossible to "operate with confidence."

This decision highlights the wider regulatory uncertainty in the UK around digital mental health tools, which occupy a grey zone between wellness support and clinical care. Slingshot maintains that Ash is a wellbeing product, not a regulated medical device. Regulators, however, appear to challenge this classification, likely because of the chatbot's therapeutic positioning and the way users interact with it. Despite Slingshot having raised over $90 million from investors including Andreessen Horowitz, Ash, one of a wave of startup products aiming to provide accessible, AI-powered "therapy-like" support, faced intense scrutiny in the UK's relatively mature digital health regulatory environment.

The core regulatory concern is how chatbots like Ash blur the line between general wellness advice and treatment-like interaction. Unlike human therapists, these AI tools generate responses based on learned patterns. This raises the risk that they could inadvertently mirror or validate maladaptive thoughts or behaviours, potentially harming vulnerable users. Slingshot acknowledged these limits by including explicit disclaimers on the Ash website, urging users in crisis to seek professional help.

Globally, regulators are grappling with how to oversee generative AI in mental health. In the US, for instance, the Food and Drug Administration's Digital Health Advisory Committee is considering how to regulate therapy chatbots, focusing on the risks posed by inconsistent advice. For Slingshot, the UK's regulatory vacuum was the direct cause of the withdrawal. A senior executive noted that regulators will judge a product on its behaviour and its impact on users, not just its branding. A chatbot engaging distressed users in a therapeutic manner could therefore face medical-device-level scrutiny even if it is labelled as a wellness tool.

Public reaction has been mixed. Some UK users expressed frustration and disappointment at losing Ash, viewing it as a valuable source of support during long waits for NHS mental health services, a sentiment that underscores the pressure on traditional provision. Conversely, clinicians and researchers have warned about the significant risks of unsupervised AI, citing independent research showing that even advanced models such as ChatGPT-5 can give unsafe advice in simulated crisis scenarios. The NHS has also cautioned against using AI chatbots as substitutes for professional care, particularly for vulnerable individuals, warning that such platforms should "not be relied upon" as replacement therapy.

The Ash case ultimately exposes the tension between innovation and safety in digital mental health. While AI offers the potential for scalable, affordable support amid workforce shortages, it raises complex questions about efficacy, harm, clinical governance, and liability that current regulatory frameworks are ill-equipped to handle. The episode is a clear signal to UK health providers and policymakers that policy and regulation must evolve rapidly. Future frameworks must clearly distinguish informational wellbeing tools from therapeutic tools, setting thresholds for clinical oversight and requirements for user safety mechanisms and harm reporting, so that AI can play a safe and effective role in future care pathways.

While Slingshot is in ongoing dialogue with government officials, the immediate outcome is that UK users are losing access to Ash. This highlights the major challenge of safely and ethically integrating generative AI into health systems.