AI chatbot misuse tops annual list of health technology hazards

Artificial intelligence (AI) chatbot misuse ranks as the top health technology hazard for 2026, according to an annual report from ECRI, an independent, nonpartisan patient safety organization.

ECRI cites the rapid adoption of chatbots, their lack of regulatory oversight, and mounting evidence that they can generate unsafe or misleading medical guidance as key reasons for the top ranking.

Clinicians, patients, and health care staff are using chatbots built on large language models (such as ChatGPT and Copilot) to get quick medical information. Indeed, more than 40 million people turn to ChatGPT alone for health-related answers each day, according to a recent analysis by OpenAI. While these tools can provide helpful, conversational responses, ECRI warns that they are not validated for clinical use and often sound more authoritative than they should.

ECRI notes that because chatbots generate text by predicting word patterns rather than truly understanding context, they can offer false or incomplete information with unwarranted confidence. “Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,” said Marcus Schabacker, M.D., Ph.D., president and CEO of ECRI, in the report announcement. “Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear‑eyed understanding of AI’s limitations.”

The risks are not hypothetical. In its evaluation, ECRI found examples of chatbots suggesting incorrect diagnoses, recommending unnecessary tests, promoting substandard medical supplies, and even inventing nonexistent anatomy when asked medical questions. In one case, ECRI asked a chatbot whether it was acceptable to place an electrosurgical return electrode over a patient’s shoulder blade, a practice that could expose the patient to serious burn injuries; the chatbot confidently said the placement was appropriate.

These dangers may grow as rising health care costs and ongoing hospital or clinic closures reduce access to professional care, potentially pushing more patients to rely on chatbots as substitutes for clinicians. ECRI also warns that chatbots can perpetuate or worsen existing health disparities. Biases embedded in training data may cause the models to interpret symptoms or risk factors differently for certain populations, reinforcing stereotypes rather than providing equitable information.

“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” Schabacker said. “If health care stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate.”

To promote safer use of AI tools, ECRI recommends that patients, clinicians, and other users understand the limitations of chatbots and verify any health information with knowledgeable professionals. Health systems should also take a more active governance role by establishing AI oversight committees, developing clear usage guidelines, training clinicians in AI literacy, and regularly auditing chatbot and AI tool performance to validate accuracy and detect bias.

ECRI will discuss the hidden dangers of AI chatbots in health care during a webcast on January 28.