People are turning to chatbots like Claude to get help deciphering their lab test results.
Smith Collection/Gado/Archive Photos/Getty Images
When Judith Miller had routine blood work done in July, she got a phone alert the same day that her lab results had been posted online. So when her doctor messaged her the next day that overall her tests were fine, Miller wrote back to ask about the elevated carbon dioxide and something called “low anion gap” listed in the report.
While the 76-year-old Milwaukee resident waited to hear back, Miller did something patients increasingly do when they can't reach their health care team. She put her test results into Claude and asked the AI assistant to evaluate the data.
“Claude helped give me a clear understanding of the abnormalities,” Miller said. The generative AI model didn't report anything alarming, so she wasn't anxious while waiting to hear back from her doctor, she said.

Patients have unprecedented access to their medical records, often through online patient portals such as MyChart, because federal law requires health organizations to immediately release electronic health information, such as notes on doctor visits and test results.
And many patients are using large language models, or LLMs, like OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, to interpret their records. That help comes with some risk, though. Physicians and patient advocates warn that AI chatbots can produce wrong answers and that sensitive medical information might not stay private.
But does AI know what it's talking about?
Yet most adults are wary about AI and health. Fifty-six percent of those who use or interact with AI aren't confident that information provided by AI chatbots is accurate, according to a 2024 KFF poll. (KFF is a health information nonprofit that includes KFF Health News.)
That instinct is borne out in research.
“LLMs are theoretically very powerful and they can give great advice, but they can also give truly terrible advice depending on how they're prompted,” said Adam Rodman, an internist at Beth Israel Deaconess Medical Center in Massachusetts and chair of a steering group on generative AI at Harvard Medical School.
Justin Honce, a neuroradiologist at UCHealth in Colorado, said it can be very difficult for patients who aren't medically trained to know whether AI chatbots make mistakes.
“Ultimately, it's just the need for caution overall with LLMs. With the latest models, these concerns are continuing to become less and less of an issue but have not been fully resolved,” Honce said.
Rodman has seen a surge in AI use among his patients in the past six months. In one case, a patient took a screenshot of his hospital lab results on MyChart, then uploaded them to ChatGPT to prepare questions ahead of his appointment. Rodman said he welcomes patients showing him how they use AI, and that their research creates an opportunity for discussion.
Roughly 1 in 7 adults over 50 use AI to get health information, according to a recent poll from the University of Michigan, while 1 in 4 adults under age 30 do so, according to the KFF poll.

Using the internet to advocate for better care for oneself isn't new. Patients have traditionally used websites such as WebMD, PubMed, or Google to search for the latest research and have sought advice from other patients on social media platforms like Facebook or Reddit. But AI chatbots' ability to generate personalized recommendations or second opinions in seconds is novel.
What to know: Watch out for “hallucinations” and privacy issues
Liz Salmi, communications and patient initiatives director at OpenNotes, an academic lab at Beth Israel Deaconess that advocates for transparency in health care, had wondered how good AI is at interpretation, particularly for patients.
In a proof-of-concept study published this year, Salmi and colleagues analyzed the accuracy of ChatGPT, Claude, and Gemini responses to patients' questions about a medical note. All three AI models performed well, but how patients framed their questions mattered, Salmi said. For example, telling the AI chatbot to take on the persona of a clinician and asking it one question at a time improved the accuracy of its responses.
Privacy is a concern, Salmi said, so it's important to remove personal information like your name or Social Security number from prompts. Data goes directly to the tech companies that have developed AI models, Rodman said, adding that he's not aware of any that comply with federal privacy law or consider patient safety. Sam Altman, CEO of OpenAI, warned on a podcast last month about putting personal information into ChatGPT.
“Many people who are new to using large language models might not know about hallucinations,” Salmi said, referring to a response that may seem sensible but is inaccurate. For example, OpenAI's Whisper, an AI-assisted transcription tool used in hospitals, introduced an imaginary medical treatment into a transcript, according to a report by The Associated Press.
Using generative AI demands a new type of digital health literacy that includes asking questions in a specific way, verifying responses with other AI models, talking to your health care team, and protecting your privacy online, said Salmi and Dave deBronkart, a cancer survivor and patient advocate who writes a blog devoted to patients' use of AI.
Physicians need to be careful with AI, too
Patients aren't the only ones using AI to explain test results. Stanford Health Care has launched an AI assistant that helps its physicians draft interpretations of medical tests and lab results to send to patients.
Colorado researchers studied the accuracy of ChatGPT-generated summaries of 30 radiology reports, along with four patients' satisfaction with them. Of the 118 valid responses from patients, 108 indicated the ChatGPT summaries clarified details about the original report.
But ChatGPT sometimes overemphasized or underemphasized findings, and a small but significant number of responses indicated patients were more confused after reading the summaries, said Honce, who participated in the preprint study.
Meanwhile, after four weeks and a few follow-up messages from Miller in MyChart, Miller's doctor ordered a repeat of her blood work and an additional test that Miller suggested. The results came back normal. Miller was relieved and said she was better informed because of her AI inquiries.
“It's a great tool in that regard,” Miller said. “It helps me organize my questions and do my research and level the playing field.”
KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF.