AI health tools introduced by OpenAI and Anthropic in January 2026 can analyse users' medical records, wearable device data, and wellness app information to answer health-related questions and provide personalised guidance, according to experts at the University of California, San Francisco, and Stanford University.
Experts state that while these systems can offer contextual health information based on personal data, they cannot diagnose illnesses and should not replace professional medical consultation. Serious symptoms such as difficulty breathing or chest pain require immediate contact with a doctor or medical specialist.
Some doctors and researchers say that AI tools, when used properly, may provide more personalised, context-aware responses than traditional internet searches. According to Dr. Robert Wachter of the University of California, San Francisco, responses may become more accurate when users share details such as their age, medications, symptoms, and previous medical reports.
However, experts caution that AI systems are not always reliable and may sometimes produce incorrect advice. Users are advised to treat such information as a reference and consult qualified medical professionals before making any healthcare decisions.

Experts also warn that relying on AI advice during a medical emergency can be dangerous. Conditions such as breathing difficulty, chest pain, or a severe headache require immediate medical evaluation.
Dr. Lloyd Minor of Stanford University said that depending solely on AI for any medical decision, major or minor, is not advisable, and that users should contact a hospital or doctor in such situations.
Experts state that the primary purpose of these tools is not disease diagnosis but helping users understand medical reports and health data. They may also be used for routine health queries and preparing for consultations with healthcare professionals.
Using AI health tools often requires users to share personal medical information. While laws such as HIPAA impose strict data protection requirements on doctors and hospitals, chatbot companies generally do not fall under the same regulatory framework.
Companies claim that user data is kept secure and not used for model training, but experts say users should carefully review privacy policies before sharing sensitive health information.
A University of Oxford study found that AI answered 95 percent of hypothetical cases correctly. However, because these systems have limited experience with real user data, the possibility of incorrect guidance remains. Experts advise users to weigh AI-generated advice alongside professional medical opinion.