
OpenAI has introduced a health-focused feature in the U.S. designed to provide personalized advice by analyzing users' medical records and data from fitness apps like MyFitnessPal. While the company emphasized that ChatGPT Health data is stored separately and not used to train its AI models, the launch has sparked warnings from privacy campaigners. OpenAI also clarified that the tool is strictly for informational purposes and is not intended to provide professional medical "diagnosis or treatment".
According to OpenAI, more than 230 million people ask its chatbot questions about their health and wellbeing every week. In a blog post, it said ChatGPT Health had "enhanced privacy to protect sensitive data".
Users can share data from apps like Apple Health, Peloton and MyFitnessPal, as well as provide medical records, which can be used to give more relevant responses to their health queries.
OpenAI said its health feature was designed to "support, not replace, medical care".
Generative AI chatbots and tools can be prone to generating false or misleading information, often presenting it in a matter-of-fact, convincing way.
But Max Sinclair, chief executive and founder of AI marketing platform Azoma, said OpenAI was positioning its chatbot as a "trusted medical adviser".
He described the launch of ChatGPT Health as a "watershed moment" and one that could "reshape both patient care and retail" - influencing not just how people access medical information but also what they may buy to treat their problems. Sinclair said the technology could be a "game-changer" for OpenAI amid increased competition from rival AI chatbots, particularly Google's Gemini.
The company said it would initially make ChatGPT Health available to a "small group of early users" and has opened a waitlist for those seeking access.
As well as being unavailable in the UK, the feature has not been launched in Switzerland or the European Economic Area, where tech firms must meet strict rules about processing and protecting user data.

