Health Experts React To OpenAI’s Latest ChatGPT Health Feature

ChatGPT Health is a new space on ChatGPT created for health and wellness questions. It lets people ask about symptoms, test results and general wellbeing in one place. Users can also choose to connect health data such as medical records or Apple Health, according to an overview released by OpenAI.

The feature came out only recently after years of people turning to ChatGPT for medical questions. Many users already typed in blood test figures, medication names and descriptions of pain, even though the tool was never designed for that use. OpenAI responded with a dedicated health area rather than leaving those conversations spread across the platform.

Health chats, memories and files are kept apart from the rest of ChatGPT. OpenAI says this separation gives users more control and makes it easier to understand how health information is handled.

ChatGPT Health is described as a support for medical care rather than a substitute for it. It explains terms, helps people prepare questions for doctors and helps them read health information without heavy jargon.

Why Are People Uneasy About Sharing Health Data?

Health information carries more meaning than other personal data. A leaked password can be changed. A medical history cannot. Exposed records could affect jobs, insurance and personal safety.

Many users feel uneasy when asked to upload lab results or diagnoses into a machine. Even when sharing is optional, the act of typing or uploading feels like handing over something deeply private.

Trust is a big part of this discussion. AI tools run on large technical infrastructures with many layers. People worry less about bad intent and more about accidents, long storage periods or access through breaches. Hospitals and public bodies get hacked, so faith in perfect protection feels thin.

Another fear relates to future use. Data shared today to explain a blood test could later feed prediction tools. People also worry about knock-on effects for insurance pricing or workplace screening.

What Has OpenAI Said And How Should Users Respond?

OpenAI says ChatGPT Health uses strict privacy and security rules. The company says users control what they share and that health chats are not automatically used for training. It also says people can opt out of data storage and that encryption and access limits apply.

The company also repeats that the tool does not diagnose illness. It presents the system as a guide that helps people understand information before speaking to a professional.

Even with those assurances, trust will not come right away. AI systems grow through data, and written policies alone rarely settle nerves. The unease is stronger in places with weak data protection laws.

Many users may feel comfortable using the tool for everyday questions or anonymised details. But uploading full records or details of rare conditions calls for a higher level of caution and privacy.

Health experts have responded to ChatGPT Health, weighing in on what it means for the industry and what risks are involved:

Dr. Sylvie Stacy, Medical Officer, Rehab.com said: “Practically speaking, a dedicated medical AI platform is probably going to be more accurate than the terrible advice people already get from random Facebook groups, “influencers,” and other types of online message boards. If ChatGPT Health can be evidence-based and clear about what it can and can’t do, it could actually be a lot better than Google for things like basic health education and coming up with your questions before a doctor’s visit. Patients are already using ChatGPT constantly to ask health questions, so I think any way it can be improved for this type of use is great.

“Another potential benefit is efficiency. Doctors and other clinicians are already so busy and patient visits are scheduled for blocks that are too short. A medical AI platform could help patients understand their diagnoses and treatment options in plain language. Then, perhaps doctor visits could be more focused and productive.

“My biggest concern is overconfidence, especially when medical situations are unclear or complex. Patients might have misunderstandings that only get worse when they’re given AI-generated information that is too confident. Then doctors will need to spend time correcting false assumptions. Or, worse, in some situations, patients may be given information that is downright inaccurate. Real harm could occur. Some might lose trust in their doctors for disagreeing with what AI said.

“Another big concern is that ChatGPT is too agreeable. It could amplify someone’s anxiety about a health problem. Or if someone already has a mistrust of the healthcare system or their doctor, it can make this worse by further validating their feelings. This does nothing to help the doctor-patient relationship.”

Dr Ali Cadili, Health Expert said: “ChatGPT can be helpful in many ways, including clarifying medical terms and simplifying technical jargon to help you better understand medical reports. It could also clarify test results and their possible meanings, but caution should always be used, as test results often depend heavily on the context and the specific patient's condition and history.

“Where it can also be very helpful is in its capacity as an aide for researching a health topic and summarising the possible treatments. This may in turn guide you towards questions to discuss with your doctor at your next appointment.

“People should be keenly aware however that an algorithmic response to specific prompts may only be looking at a sliver of the information needed to make a safe and correct recommendation for your specific case. Even when all the information is there, a medical professional may change his or her assessment or treatment plan based on minor updates in history that you provide or newer or slightly different findings on physical examination in a healthcare setting.

“Therefore, while it is a very useful tool for summarising the body of knowledge and information on the Internet, it should never replace the clinical judgment of a qualified professional who can directly examine you.

“At the end of the day, it is an extremely useful tool, but people should not misuse it to the detriment of their own health and safety.”