Research roundup: AI in hospital billing linked to upcoding; Telemedicine for mental health care still lacking in rural areas; and more

RISE summarizes recent health care-related research, including the latest on artificial intelligence (AI), digital tools, and telemedicine use.

AI in hospital billing linked to upcoding, higher health care costs

A new analysis of tens of thousands of maternity claims shows that AI-driven coding may be inflating diagnoses without matching clinical interventions, contributing to an estimated $2.3 billion in additional spending.

The Blue Cross Blue Shield Association and its data analytics partner Blue Health Intelligence (BHI) found sharp increases in acute posthemorrhagic anemia diagnoses, even though many patients never received treatments such as blood transfusions. The findings highlight a disconnect between documented conditions and actual care delivered and underscore the need for stronger oversight of AI-enabled coding practices.

Researchers estimate that roughly $663 million in inpatient spending and at least $1.67 billion in outpatient spending may be tied to more aggressive, AI-enabled coding practices nationwide, together accounting for the roughly $2.3 billion in estimated additional costs.

Telemedicine hasn’t improved mental health care access in rural areas

Although telemedicine use among mental health specialists increased sharply during and after COVID-19, new research shows it has had little impact on expanding access for rural or underserved patients.

The study, conducted by researchers from the Brown University School of Public Health, Harvard Medical School, and McLean Hospital and published in JAMA Network Open, examined Medicare billing records from 2018 to 2023 for 17,742 mental health specialists, grouping the clinicians into categories based on how heavily they used telemedicine to deliver care. The researchers found only marginal changes, such as a 0.9-percentage-point rise in rural patient visits, suggesting that telemedicine primarily helps clinicians maintain relationships with existing patients rather than reach new populations.

Researchers believe that changes to state licensure policies would make it easier for clinicians to practice across state lines, helping specialists reach more patients in rural communities.

Some patients without access to digital tools could fall through the cracks

A cross-sectional survey conducted by the University of California San Francisco (UCSF) and published in JMIR Formative Research found that health systems rarely assess patients’ ability to use digital health tools, even as more of their processes move online. Researchers warn that the lack of screening may widen disparities.

In a survey of nearly 150 clinicians and informatics leaders from health care systems across the country, conducted during the first half of 2024, just 44 percent said they asked patients whether they could use digital devices. Among institutions that serve uninsured patients, just one-third asked.

“Not everyone can access all these new digital health tools we’re rolling out, and the people who are excluded are often those who experience worse health outcomes and limited access to care,” said Elaine C. Khoong, M.D., associate professor of medicine at UCSF and a faculty member with the UCSF Action Research Center for Health, in a UCSF announcement.

Researchers suggest that health care organizations train health workers to screen for digital readiness using standardized tools and that policymakers create stronger incentives for health systems to conduct these assessments. They also recommend incorporating digital-readiness assessments into routine social-risk screenings, such as those for housing instability, food insecurity, and domestic abuse.

ChatGPT Health may miss serious medical emergencies

A new study, conducted by researchers from the Icahn School of Medicine at Mount Sinai and published in Nature Medicine, found that ChatGPT Health under-triaged more than half of the cases physicians categorized as emergencies and was inconsistent in issuing suicide prevention alerts.

Forty million people use ChatGPT Health daily to seek health information and guidance. Researchers wanted to see whether the tool would clearly tell patients to seek care in an emergency room if they were truly experiencing a medical emergency. They tested 60 realistic patient scenarios developed by physicians and found that while the tool performed well in textbook emergencies, such as stroke or severe allergic reactions, it struggled in more nuanced situations where the danger was not immediately obvious.