More than 40 million Americans are turning to ChatGPT each day for health-related information, OpenAI reported Monday, underscoring how rapidly artificial intelligence is becoming a go-to tool for people trying to navigate the U.S. healthcare system.
The findings came from an OpenAI analysis of anonymized ChatGPT interactions and a December 2025 survey of 1,042 U.S. adults conducted using Knit, an AI survey tool.
OpenAI said healthcare is now among the most common uses of the chatbot, estimating that more than 5% of all ChatGPT messages worldwide involve health topics.
Among U.S. adults who used AI tools for health questions in the past three months, 55% said they used them to check or explore symptoms, 52% said they asked questions at any time of day, 48% used the tools to understand medical terms or instructions, and 44% used them to learn about treatment options, according to the company's summary of the survey.
OpenAI described the chatbot as an ally for patients facing high costs and complicated health coverage rules.
The company said users increasingly rely on ChatGPT to decode medical bills, flag potential overcharges, draft appeals of insurance denials, and compare health insurance plans.
OpenAI estimated users submit roughly 1.6 million to 1.9 million health insurance-related questions each week on the platform.
The report also pointed to heavy off-hours use, suggesting people are seeking help when clinicians and call centers are not available. OpenAI said nearly 7 in 10 health-related conversations occur outside normal clinic hours, and that users in rural and underserved communities send nearly 600,000 healthcare-related messages weekly.
Patients often entered symptoms and prior medical advice to get guidance on whether a condition might require urgent care or could wait for a scheduled appointment, the company said.
But OpenAI and outside experts have repeatedly warned that chatbots can be wrong, sometimes confidently so, and that errors can be especially dangerous when people treat AI output as medical advice rather than general information.
Mental health is among the most sensitive areas.
In 2025, several states moved to limit or regulate AI chatbots marketed for therapy or therapeutic decision-making without clinician oversight, reflecting concerns about safety and accountability.
OpenAI has also faced a growing wave of lawsuits tied to alleged harms from chatbot interactions, including claims involving psychological distress.
OpenAI said it is working to improve safety, accuracy, and reliability for health-related responses, including collaborating with clinicians and updating how newer models handle sensitive conversations.
The company has also promoted industry evaluation efforts such as HealthBench, a benchmark it introduced to measure health performance and safety using physician-built rubrics.
As Americans continue to face access and affordability challenges, policymakers and health systems are watching whether the rise of AI assistants will reshape how patients seek guidance, and how the technology's limits will be managed.
The open questions include accuracy standards, liability when advice goes wrong, and how much personal health information users should share with chatbots.
Theodore Bunker
Theodore Bunker, a Newsmax writer, has more than a decade of experience covering news, media, and politics.
© 2026 Newsmax. All rights reserved.