Technology

Google Removes Controversial AI Feature Offering Crowdsourced Medical Advice

By Distilled Post Editorial Team

Google has quietly removed an artificial-intelligence search feature that surfaced crowdsourced health advice from non-experts, amid mounting scrutiny over the reliability of AI-generated medical information online. The experimental tool, known as “What People Suggest”, showed users tips and insights from individuals reporting similar health experiences alongside traditional search results. The feature was initially promoted as an example of how artificial intelligence could help users discover practical advice from others living with similar conditions. Critics, however, argued that presenting anecdotal health tips from strangers risked spreading inaccurate or potentially harmful medical guidance.

Google confirmed the feature had been discontinued, saying its removal was part of a broader effort to simplify the search interface rather than a response to specific safety concerns. Nevertheless, the move comes as technology companies face increasing pressure from regulators, clinicians and patient groups over the role of AI in delivering health information.

How the ‘What People Suggest’ tool worked

The “What People Suggest” feature appeared within Google Search results, primarily on mobile devices in the United States. When users searched for health-related queries, the tool would display a panel summarising advice shared by individuals online who reported having similar symptoms or diagnoses. Rather than relying solely on medical websites or authoritative sources, the AI system aggregated personal anecdotes from forums and discussion platforms. The aim was to complement clinical guidance with lived experience, offering users additional perspectives about treatments, lifestyle changes or coping strategies.

While some health researchers support including patient experiences in digital health platforms, many experts warned that AI-generated summaries of such content could blur the distinction between evidence-based medical advice and unverified personal opinion. The risk, they argue, is that users may struggle to tell clinically validated guidance apart from informal suggestions shared online.

Mounting scrutiny over AI health information

The removal of the feature follows a series of controversies surrounding Google’s broader use of AI in search. Earlier investigations revealed that the company’s AI Overviews feature, which generates automated summaries at the top of search results, had produced misleading or inaccurate information about certain medical topics. In one widely reported example, AI-generated responses about liver blood tests provided numerical reference ranges without accounting for factors such as age, sex or ethnicity, potentially leading users to believe their results were normal when they were not.

Medical experts also warned that some AI summaries offered advice contradicting clinical guidelines. Such findings prompted Google to remove AI-generated summaries for some health-related queries and review its safeguards for medical content. Health charities and advocacy groups have argued that AI-powered search tools must prioritise reliability and transparency, particularly when dealing with complex or sensitive medical issues. Critics say inaccurate information could discourage patients from seeking professional care or lead them to misinterpret symptoms.

Growing debate over AI in digital health information

The controversy reflects a wider debate about the role of generative AI in healthcare information ecosystems. Search engines and chatbots are increasingly becoming the first point of contact for individuals seeking health advice online, meaning inaccuracies can have real-world consequences.

Research examining AI-generated search results has already highlighted significant reliability concerns. One analysis found that only around one-third of citations used in AI health summaries came from trusted medical sources, raising questions about how such systems prioritise information. At the same time, technology companies argue that AI can improve access to health information and help users navigate complex medical topics more easily. Google has maintained that it continues to refine its systems and relies on internal clinical teams to review sensitive queries and improve accuracy.

Implications for technology regulation

The removal of the “What People Suggest” feature underscores the growing regulatory and reputational pressures facing technology companies as they deploy generative AI in high-risk domains such as healthcare. Governments in Europe and the United States are already considering stricter oversight of AI systems that provide health-related information, including requirements for transparency, risk assessment and clear disclaimers.

For digital health experts, the episode highlights the need for stronger safeguards when integrating artificial intelligence into consumer search tools. While AI has the potential to improve health literacy and accessibility, experts say systems must be designed to prioritise authoritative medical sources and clearly distinguish between evidence-based guidance and anecdotal experiences. As AI-driven search continues to evolve, the challenge for technology firms will be balancing innovation with the responsibility to ensure that the information reaching billions of users remains accurate, trustworthy and safe.