THE NEW YORK TIMES: Five tips when consulting ‘Dr’ ChatGPT

For better or worse, many people are asking artificial intelligence chatbots for health information and advice. According to a June 2024 poll from the health research group KFF, about 1 in 6 adults do so regularly, and experts say that share has grown since.
Recent studies have shown that ChatGPT can pass medical licensing exams and solve clinical cases more accurately than humans can. But AI chatbots are also notorious for making things up, and their faulty medical advice also seems to have caused real harm.
These risks don’t mean you should stop using chatbots, said Dr Ainsley MacLean, the former chief AI officer of the Mid-Atlantic Permanente Medical Group, but they do underscore the need for caution and critical thinking.
Chatbots are great at creating a list of questions to ask your doctor, simplifying jargon in medical records and walking you through your diagnosis or treatment plan, MacLean said.
But no chatbot is ready to replace your physician, so be a little more cautious when asking AI for potential diagnoses or medical advice.
We asked experts how to best use chatbots for your health care questions.
Practice when the stakes are low
In general, people are used to seeking medical advice from Google and understand that the tenth page of results isn’t as good as the first, said Dr Raina Merchant, executive director of the Centre for Health Care Transformation and Innovation at Penn Medicine.
But most people don’t have as much experience using AI chatbots. There’s a learning curve, experts said, and you need to practice framing questions and scrutinizing the responses to get the best results.
So don’t wait until you have a major health concern to start experimenting with AI, said Dr Robert Pearl, the author of “ChatGPT, MD: How A.I.-Empowered Patients & Doctors Can Take Back Control of American Medicine.”
Think back to your last medical visit, Pearl suggested, and a few questions that your doctor answered well. Then, pose them to the chatbot and test different prompts, comparing its answers with your doctor’s.
This exercise can give you a sense of the chatbot’s strengths and limitations, he said.
Also, watch out for chatbots’ tendency to be sycophantic and relentlessly validating. A leading question — such as “Don’t you think I should get an MRI?” — could prompt a chatbot to agree with you, rather than provide accurate answers. But you can guard against this by asking balanced, open-ended questions.
Pearl even recommended removing yourself from the question — “What would you tell a patient who has a bad cough?” — to better bypass chatbots’ tendency to agree with you. You could even consider directly asking, “What would you say that he might not want to hear?”
Share context — within reason
Chatbots don’t know anything about you, except what you tell them, said Dr Michael Turken, an internal medicine physician at University of California San Francisco Health.
So, when asking medical questions, give chatbots as much context as you’re comfortable sharing to increase the chance of getting a more personalized answer, he said.
Let’s say you ask a chatbot about recent hip pain. There are, of course, dozens of potential causes.
“But as soon as you give the chatbot your age, your prior medical history, associated diseases, medications, your job, now it can start to come up with a very specific, personalized diagnosis,” Pearl said — one you can then ask your doctor about.
Still, there are serious privacy concerns when it comes to AI, said Dr Ravi Parikh, director of the Human-Algorithm Collaboration Lab at Emory University.
Most popular chatbots are not bound by the Health Insurance Portability and Accountability Act, or HIPAA, and it’s not clear who might have access to your conversation history, he added.
So, avoid sharing identifying details or uploading your full medical records. These can contain your address, Social Security number and other sensitive data.
If you’re worried about privacy, many chatbots have an anonymous or incognito mode, in which conversations aren’t used to train the model and are deleted after a short period. There are also several HIPAA-compliant medical chatbots available online, Turken said, like My Doctor Friend, Counsel Health and Doctronic.
Check in during long chats
AI chatbots can sometimes forget or confuse critical details, particularly with free versions or during a long conversation, Parikh said.
So ideally, use the paid, more advanced models for medical questions, since they tend to have longer memory, a better “reasoning” process and more up-to-date data, he added.
It can also be helpful to periodically start fresh chats, but many patients find it frustrating to re-enter their medical information and get the model up to speed again.
In that case, Merchant recommended asking the chatbot to “summarise what you know about my medical history” at regular intervals. Such check-ins can help correct misunderstandings and make sure the chatbot stays on track.
Invite more questions
In general, AI chatbots are far better at offering answers than asking questions, so they tend to skip the important follow-ups a physician would ask, Turken said — like whether you have any underlying conditions or are taking any medications.
This is especially problematic when you’re asking about potential diagnoses or medical advice.
To compensate, Turken recommended prompting the chatbot with a line like: “Ask me any additional questions you need to reason safely.”
Expect a burst of questions, and try to address each one carefully. If you miss or skip a question, the chatbot probably won’t ask it again, Turken said.
Pit your chatbot against itself
Every piece of health advice online or from a friend comes with a certain perspective baked in, and AI chatbots are no different, Pearl said. The problem is that, even when AI is making elementary mistakes, it still exudes confidence and appears all-knowing.
So, be skeptical and ask chatbots for sources — and then confirm those sources actually exist, MacLean said. You should also ask difficult follow-up questions and make chatbots explain their reasoning. “Be really engaged,” she added. “No one cares more about your health than you.”
To push chatbots even further, prompt them to take different points of view. At first, MacLean said, you might tell the chatbot, “You’re a careful, experienced primary care doctor.”
But later on, ask it to take on the perspective of a specialist, which tends to steer the model toward deeper, domain-specific knowledge.
You can also push chatbots to think more carefully by asking them to critique their first answer and then to reconcile both responses, Turken said. But don’t simply take AI chatbots at their word — always double-check their information with reputable health resources and, of course, your doctor.
Experts say there aren’t strict red lines around what you can safely ask a chatbot; what matters most is what you do with that information. The key is to treat AI as an educational resource rather than as a decision maker.
This article originally appeared in The New York Times.
© 2025 The New York Times Company
