Dr. Sina Bari, a practicing surgeon and AI healthcare chief at data firm iMerit, has seen firsthand how ChatGPT can lead patients astray with faulty medical advice.
"I recently had a patient come in, and when I recommended a medication, they had a conversation printed out from ChatGPT that said this medication has a 45% chance of pulmonary embolism," Dr. Bari told TechCrunch.
When Dr. Bari investigated further, he found that the statistic came from a paper about the impact of that medication in a niche subgroup of people with tuberculosis, which didn't apply to his patient.
And yet, when OpenAI announced its dedicated ChatGPT Health chatbot last week, Dr. Bari felt more excitement than concern.
ChatGPT Health, which will roll out in the coming weeks, allows users to talk to the chatbot about their health in a more private setting, where their messages won't be used as training data for the underlying AI model.
"I think it's great," Dr. Bari said. "It's something that's already happening, so formalizing it in order to protect patient information and put some safeguards around it […] is going to make it all the more powerful for patients to use."
Users can get more personalized guidance from ChatGPT Health by uploading their medical records and syncing with apps like Apple Health and MyFitnessPal. For the security-minded, this raises immediate red flags.
"Suddenly there's medical data moving from HIPAA-compliant organizations to non-HIPAA-compliant vendors," Itai Schwartz, co-founder of data loss prevention firm MIND, told TechCrunch. "So I'm curious to see how the regulators would approach this."
But the way some industry professionals see it, the cat is already out of the bag. Now, instead of Googling cold symptoms, people are talking to AI chatbots: over 230 million people already talk to ChatGPT about their health every week.
"This was one of the biggest use cases of ChatGPT," Andrew Brackin, a partner at Gradient who invests in health tech, told TechCrunch. "So it makes a lot of sense that they would want to build a more private, secure, optimized version of ChatGPT for these healthcare questions."
AI chatbots have a persistent problem with hallucinations, a particularly sensitive issue in healthcare. According to Vectara's factual consistency evaluation model, OpenAI's GPT-5 is more prone to hallucinations than many Google and Anthropic models. But AI companies see the potential to rectify inefficiencies in the healthcare space (Anthropic also announced a health product this week).
For Dr. Nigam Shah, a professor of medicine at Stanford and chief data scientist for Stanford Health Care, the inability of American patients to access care is more pressing than the specter of ChatGPT dispensing poor advice.
"Right now, you go to any health system and you want to meet the primary care physician, and the wait time will be three to six months," Dr. Shah said. "If your choice is to wait six months for a real doctor, or talk to something that isn't a doctor but can do some things for you, which would you pick?"
Dr. Shah thinks a clearer path to introducing AI into healthcare systems comes on the provider side, rather than the patient side.
Medical journals have repeatedly reported that administrative tasks can consume about half of a primary care physician's time, which slashes the number of patients they can see in a given day. If that kind of work could be automated, doctors would be able to see more patients, perhaps reducing the need for people to use tools like ChatGPT Health without additional input from a real doctor.
Dr. Shah leads a team at Stanford that's developing ChatEHR, software built into the electronic health record (EHR) system that allows clinicians to interact with a patient's medical records in a more streamlined, efficient way.
"Making the electronic medical record more user friendly means physicians can spend less time scouring every nook and cranny of it for the information they need," Dr. Sneha Jain, an early tester of ChatEHR, said in a Stanford Medicine article. "ChatEHR will help them get that information up front so they can spend time on what matters: talking to patients and figuring out what's going on."
Anthropic is also working on AI products that can be used on the clinician and insurer sides, rather than just its public-facing Claude chatbot. This week, Anthropic introduced Claude for Healthcare, explaining how it could be used to reduce the time spent on tedious administrative tasks, like submitting prior authorization requests to insurance providers.
"Some of you see hundreds, thousands of these prior authorization cases every week," said Anthropic CPO Mike Krieger in a recent presentation at J.P. Morgan's Healthcare Conference. "So imagine cutting 20, 30 minutes out of each of them; it's a dramatic time savings."
As AI and medicine become more intertwined, there's an inescapable tension between the two worlds: a doctor's primary incentive is to help their patients, while tech companies are ultimately accountable to their shareholders, even when their intentions are noble.
"I think that tension is an important one," Dr. Bari said. "Patients rely on us to be cynical and conservative in order to protect them."

