OpenAI Launches ChatGPT Health for Medical Record Analysis

Jan 9, 2026, 2:28 AM


OpenAI has launched ChatGPT Health, a new feature that lets users in the US connect their personal health information to the chatbot for more personalized health guidance. The feature draws on users' medical records and data from wellness apps such as MyFitnessPal and Apple Health to tailor its responses.
The feature operates as a separate space within the ChatGPT platform, designed for enhanced privacy and security around sensitive health information. Conversations in ChatGPT Health are encrypted and kept isolated from other chats, and OpenAI says this data will not be used to train its AI models. The company emphasizes that the tool is designed to support, not replace, medical care, and is not intended for diagnosis or treatment.
Users can connect their medical records through a partnership with b.well, which facilitates integration with approximately 2.2 million healthcare providers. This allows ChatGPT Health to analyze lab results, visit summaries, and clinical histories, providing users with tailored advice on health management. The feature is currently available only in the US, and users must sign up for a waitlist to gain access as it is being rolled out gradually.
OpenAI's launch of ChatGPT Health responds to rapidly growing use of the chatbot for health questions: more than 230 million people globally ask health and wellness questions each week. The company has worked with more than 260 physicians over the past two years to refine the product and ensure it meets safety and clarity standards.
Despite the potential benefits, privacy advocates have raised concerns about the handling of sensitive health data. Andrew Crawford from the Center for Democracy and Technology highlighted the importance of maintaining strict safeguards around users' health information, especially as AI companies explore new business models. He noted that while AI tools can empower patients, the sensitivity of health data necessitates robust protection measures.
OpenAI has stated that users can view or delete their health-related memories at any time, giving them control over their data. However, the company acknowledges that it cannot fully control how users interact with the AI outside the dedicated health space. This raises questions about the potential for misinformation, particularly in light of past incidents where users received harmful advice from AI systems.
The launch of ChatGPT Health marks a significant step in integrating AI into personal healthcare management, but it also underscores the need for careful attention to privacy and security. As OpenAI continues to refine the feature, balancing innovation against user safety will be critical in shaping the future of AI in healthcare, and data protection remains a vital concern for users and advocates alike.

Related articles

OpenAI Launches ChatGPT Health for Personalized Medical Insights

OpenAI has introduced ChatGPT Health, a new feature allowing users to connect their medical records and wellness apps to the AI chatbot. This initiative aims to provide personalized health information while ensuring user data remains secure and separate from other interactions.

40 Million Users Turn to ChatGPT Daily for Health Questions

OpenAI reports that over 40 million users engage with ChatGPT daily for healthcare inquiries. The chatbot serves as a widely used resource, especially during off-hours, helping users navigate the complexities of health insurance and medical information.

Google AI Health Summaries Mislead Users, Risking Safety

A recent investigation revealed that Google's AI-generated health summaries often contain misleading information, potentially endangering users. Experts have criticized these inaccuracies, which range from dietary advice for cancer patients to incorrect information about medical tests, highlighting the urgent need for improved accuracy in AI health guidance.

1 in 8 Young People Use AI Chatbots for Mental Health Advice

A recent study reveals that approximately 13% of US adolescents and young adults use AI chatbots for mental health advice. The findings highlight the growing reliance on these tools, particularly among those aged 18 to 21, raising questions about the effectiveness and safety of AI in addressing mental health issues.

OpenAI and Anthropic Target Health Care for AI Expansion

OpenAI and Anthropic are positioning themselves to leverage AI in the health care sector, aiming to integrate health data into existing platforms rather than creating new applications. This strategy capitalizes on their established user bases and the evolving health care infrastructure.