

Emotional intelligence, which involves understanding emotions, responding with empathy, and taking appropriate action, is essential for delivering high-quality healthcare. The question now is whether technology can help health systems be emotionally intelligent at scale.
Leveraging Emotion AI to Support Patients and Clinicians
Recent advances in affective computing and “emotion AI” mean that systems such as Woebot and Wysa can infer mood from text, voice or facial cues and tailor their responses accordingly. Early evaluations indicate that these tools offer meaningful benefits. Chatbots and digital therapeutic applications can deliver continuous, non-judgemental support, encourage self-management, and even assist in training clinicians through emotionally responsive virtual patient interactions. That capability is especially useful for stretched services where immediate human contact isn’t always available.
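To make “inferring mood from text” concrete, the sketch below scores a single message with an off-the-shelf emotion classifier. It is a minimal illustration assuming the open-source Hugging Face transformers library and a publicly available emotion model; it is not the pipeline used by Woebot, Wysa or any other product, and the model name and threshold are illustrative choices only.

    # Minimal sketch of text-based mood inference.
    # Assumes the Hugging Face "transformers" library; the model and the
    # 0.8 threshold are illustrative, not any product's actual settings.
    from transformers import pipeline

    emotion_classifier = pipeline(
        "text-classification",
        model="j-hartmann/emotion-english-distilroberta-base",
    )

    message = "I haven't slept properly in weeks and I feel completely alone."
    result = emotion_classifier(message)[0]  # e.g. {"label": "sadness", "score": 0.93}
    print(f"Inferred emotion: {result['label']} ({result['score']:.2f})")

    # A chatbot might tailor its reply on that basis: an empathetic
    # acknowledgement for high-confidence sadness or fear, otherwise a
    # routine self-management prompt.
    if result["label"] in {"sadness", "fear"} and result["score"] > 0.8:
        print("Reply style: acknowledge distress and offer a human contact.")
    else:
        print("Reply style: continue routine self-management conversation.")

Production systems layer much more on top of this, including conversational context, voice and facial signals, and clinical safeguards, which is where the evaluation and governance questions below come in.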
Emerging evidence suggests that users can perceive AI responses as empathetic, with experimental studies reporting high empathy ratings for chatbots among specific patient groups. However, perception alone does not capture the full picture. Most research shows only short-term benefits, such as reduced anxiety or increased engagement, while long-term outcome data and safety evidence for complex cases remain limited. Early promise must not substitute for rigorous evaluation.
There are genuine risks attached to “emotion AI”. The unregulated deployment of large language models in care planning and therapy has raised concerns in the UK around privacy, misleading validation, and the failure to recognise crises. Instances have emerged where AI generates comforting but unsafe guidance, or is applied to tasks that demand human judgement. For emotionally intelligent systems, the stakes include not only incorrect clinical advice but also the emotional harm of misreading or normalising distress.
What should the NHS and system leaders do?
Begin with a simple principle: technology should enhance healthcare while always allowing seamless transfer to human care. In practice, this requires clearly defined use cases, such as out-of-hours listening support, clinician alerts for patient distress, and training simulations, along with mandatory escalation pathways and explicit consent for emotion-sensing (sketched below), and mixed-method evaluations that combine clinical outcomes with qualitative measures of safety and dignity. NHS England’s AI guidance emphasises that AI must support rather than replace the human touch, and the AI Knowledge Repository provides a valuable resource for approved and evaluated tools.

Equally important are workforce readiness and organisational trust. Clinicians need training to interpret and respond to emotionally derived signals from technology, and patients must retain the option to interact with humans when necessary. Emotional intelligence in healthcare is as much about organisational culture as algorithms, requiring policies, skilled personnel, and an ethical framework alongside technological innovation.
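As an illustration of what mandatory escalation and explicit consent could look like in practice, the sketch below is a simplified, hypothetical rule layer wrapped around an emotion-aware chatbot. The thresholds, crisis keywords and routing labels are assumptions made for illustration only, not clinical guidance or NHS-endorsed logic.

    # Hypothetical escalation layer for an emotion-aware chatbot.
    # All thresholds, keywords and actions are illustrative assumptions.
    from dataclasses import dataclass

    CRISIS_KEYWORDS = {"suicide", "end my life", "kill myself", "overdose"}

    @dataclass
    class Assessment:
        consented_to_emotion_sensing: bool
        distress_score: float  # 0.0-1.0, from an upstream emotion model
        message: str

    def route(assessment: Assessment) -> str:
        """Decide whether the chatbot continues, alerts a clinician, or escalates."""
        # Explicit consent first: without it, emotion-derived signals are not used.
        if not assessment.consented_to_emotion_sensing:
            return "continue_without_emotion_signals"

        text = assessment.message.lower()
        # Crisis language always triggers immediate handover to a human.
        if any(keyword in text for keyword in CRISIS_KEYWORDS):
            return "escalate_to_crisis_support"

        # Sustained high distress alerts the care team rather than leaving the bot to cope alone.
        if assessment.distress_score >= 0.85:
            return "alert_clinician"

        return "continue_chatbot_support"

    print(route(Assessment(True, 0.9, "I just feel so overwhelmed lately")))
    # -> alert_clinician

The design point is that consent and escalation are enforced by a deterministic layer around the conversational model rather than left to the model’s own judgement, so that the seamless transfer to human care is a structural guarantee.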
In short, technology can help health systems become more emotionally intelligent: by scaling listening, surfacing hidden needs and freeing clinicians to do what humans do best. That promise will only become a reliable part of UK healthcare with clear boundaries, robust governance, and a sustained recognition that human oversight remains essential.