Doctors Think AI Has a Place in Healthcare — but Maybe Not as a Chatbot
Doctors say AI can improve healthcare efficiency, but warn that chatbots like ChatGPT may mislead patients and are better suited for clinical support roles.
Dr Sina Bari, a practising surgeon and AI healthcare leader at the data company iMerit, has seen firsthand how AI chatbots can mislead patients with incorrect medical information.
“I recently had a patient come in, and when I recommended a medication, they had a dialogue printed out from ChatGPT that said this medication has a 45% chance of pulmonary embolism.”
After reviewing the claim, Dr Bari determined that the statistic came from a research paper examining the effects of the medication on a narrow subgroup of tuberculosis patients — data that was not relevant to his patient’s condition.
Despite such experiences, Dr Bari said he felt more excitement than concern when OpenAI announced its dedicated ChatGPT Health chatbot last week.
ChatGPT Health, which is set to roll out in the coming weeks, allows users to discuss health-related questions in a private environment where conversations will not be used to train the underlying AI models.
“I think it’s great,” Dr Bari said. “It is something that’s already happening, so formalising it and putting some safeguards around it […] is going to make it all the more powerful for patients to use.”
The new product allows users to receive more personalised guidance by connecting their medical records and syncing data from apps such as Apple Health and MyFitnessPal. This capability raises immediate privacy and regulatory questions.
“All of a sudden there’s medical data transferring from HIPAA-compliant organizations to non-HIPAA-compliant vendors,” said Itai Schwartz, co-founder of data loss prevention firm MIND. “So I’m curious to see how the regulators would approach this.”
Still, many industry insiders believe widespread use of AI for health questions is already unavoidable. Rather than searching for symptoms online, people are increasingly turning to chatbots—more than 230 million users discuss health-related topics with ChatGPT every week.
“This was one of the biggest use cases of ChatGPT,” said Andrew Brackin, a partner at health-focused investment firm Gradient. “So it makes a lot of sense that they would want to build a more private, secure, optimized version of ChatGPT for these healthcare questions.”
AI chatbots, however, continue to struggle with hallucinations — a grave issue in medicine. According to Vectara’s Factual Consistency Evaluation Model, OpenAI’s GPT-5 is more prone to hallucinations than comparable models from Google and Anthropic. Still, AI companies argue that the technology could help address deep inefficiencies in healthcare. Anthropic also announced a healthcare-focused product this week.
For Nigam Shah, a professor of medicine at Stanford University and chief data scientist at Stanford Health Care, limited access to care poses a more immediate threat than flawed chatbot advice.
“Right now, if you go to any health system, and you want to meet the primary care doctor, the wait time will be three to six months,” Dr Shah said. “If your choice is to wait six months for a real doctor, or talk to something that is not a doctor but can do some things for you, which would you pick?”
Dr Shah believes AI’s most straightforward application in healthcare lies on the provider side rather than directly with patients. Studies and medical journals have long reported that administrative duties consume roughly half of a primary care physician’s time, limiting the number of patients they can treat each day.
At Stanford, Dr Shah leads the development of ChatEHR, a tool embedded into electronic health record systems that allows clinicians to interact with patient data more efficiently.
“Making the electronic medical record more user friendly means physicians can spend less time scouring every nook and cranny of it for the information they need,” said Sneha Jain, an early tester of ChatEHR, in a Stanford Medicine article. “ChatEHR can help them get that information up front so they can spend time on what matters — talking to patients and figuring out what’s going on.”
Anthropic is pursuing a similar strategy by developing AI tools for clinicians and insurers, rather than solely focusing on its consumer-facing Claude product. This week, the company announced Claude for Healthcare, highlighting its potential to reduce time spent on administrative tasks such as insurance prior authorizations.
Speaking at a presentation during the J.P. Morgan Healthcare Conference, the company noted that clinicians handle thousands of these prior authorisation cases. “So imagine cutting 20, 30 minutes out of each of them — it’s a dramatic time savings.”
As AI becomes more deeply integrated into medicine, tension remains unavoidable. Physicians are incentivised to prioritise patient safety, while technology companies must ultimately answer to shareholders.
“I think that tension is an important one,” Dr Bari said. “Patients rely on us to be cynical and conservative in order to protect them.”