Are AI Chatbots Safe for Medical Advice? New Study Raises Concerns

While social media and AI tools can be helpful for learning basic concepts, clarifying definitions, and preparing questions to ask your doctor, a large Nature Medicine study shows they should never replace professional medical advice—especially when symptoms are new, severe, or worrying.


Many people now turn to the internet and AI tools first when they have a new symptom or health worry, but new research suggests that this does not always lead to better medical decisions. A study recently published in Nature Medicine found that even when an AI system's answers are accurate, users may still struggle to turn them into safe choices about their own care.

AI health tools: high scores, mixed results

  • A large randomized study in Nature Medicine tested whether popular large language models (LLMs)—the AI behind many chatbots—could help 1,298 members of the public recognize possible conditions and decide what to do in ten common medical scenarios.
  • On their own, the AI systems did very well: they correctly identified likely medical conditions in about 95% of cases and recommended an appropriate level of care (for example, emergency visit vs. self‑care) in more than half of scenarios.

When people use AI, decisions do not always improve

  • When everyday users worked through the same scenarios with help from these AI tools, they correctly identified a relevant condition in fewer than 35% of cases and chose the right course of action in fewer than 45%—no better than people who used their usual sources such as search engines or general websites.
  • The study suggests that the biggest challenge is not just whether AI can produce accurate information, but whether people can understand, trust and apply that information safely in real‑world situations.

The rise of digital and AI health advice

  • Digital platforms have rapidly become a primary source of health information: surveys show that more than half of adults now use social media for health guidance, and many consult AI tools before they ever speak with a clinician.
  • Greater access, however, does not automatically mean better outcomes, especially when messages are complex, incomplete or easy to misinterpret.

Why communication and interpretation matter

The Nature Medicine study highlights that even highly accurate AI systems can lead to poor decisions if answers are confusing, too general or not clearly linked to when to seek urgent care.

What patients should do

  • AI tools and social media can be useful for learning basic concepts, definitions and questions to ask your doctor, but they should not replace professional medical advice—especially for new, severe or worrying symptoms.
  • If something feels wrong, or if online advice seems unclear or conflicts with what you have been told, the safest step is to contact a healthcare professional, urgent care clinic or emergency services rather than relying on AI or social media alone.

Reference

Bean AM, Payne RE, Parsons G, et al. Reliability of LLMs as medical assistants for the general public: a randomized preregistered study. Nat Med. 2026 Feb;32(2):609-615.
