Sydney / Canberra — Australia’s tech and healthcare communities are abuzz — and alarmed — after the rollout of ChatGPT Health, a new feature from OpenAI that allows users to ask health questions, upload medical data and link wellness apps. While the innovation promises to reshape how people access health information, researchers, clinicians and policy specialists warn the tool is being deployed without adequate regulatory oversight, potentially exposing Australians to misinformation, privacy risks and unsafe automated health guidance.
A New Frontier: Health Chatbot Meets Everyday Medicine
ChatGPT Health is OpenAI’s latest expansion of its artificial intelligence platform, designed to interpret medical documents, analyse data from wearables and offer guidance on diet, exercise and general wellbeing. Globally, millions already turn to AI to ask health-related questions — and the new feature formalises that demand into a dedicated interface that can centralise medical queries and personal health metrics.
In Australia, the feature is initially available to a limited number of early users, with plans to expand access in the coming weeks. Critics note that the rollout arrives before any clear regulatory classification, safety evaluations, or clinical governance standards have been established locally.
Experts Raise Red Flags Over Lack of Regulation
Healthcare experts are sounding the alarm that ChatGPT Health operates in a domain that should be strictly regulated, yet under Australia's current framework for medical devices and software it does not fall clearly within formal oversight by bodies such as the Therapeutic Goods Administration (TGA). According to the TGA's guidelines, AI or software that makes medical claims or offers diagnoses may require regulatory approval, but technology that sits in a grey area, such as an "information-only" chatbot, often escapes formal classification.
This regulatory gap heightens the risk of harm, experts warn, especially when users trust personalised responses that may be inaccurate or misleading about serious health conditions. In one widely reported case, a person followed AI guidance to substitute industrial sodium bromide for table salt and suffered hallucinations after ingesting it, underscoring the very real risks of unsupervised AI health advice.
Dr Elizabeth Deveny, a public health leader, has voiced concern that these tools may disproportionately benefit highly educated users while leaving vulnerable groups exposed to danger without safeguards, oversight or accountability.
Privacy, Data and Digital Vulnerabilities
Beyond accuracy, privacy and data protection pose serious issues. ChatGPT Health encourages users to upload medical records and link to wellness apps such as Apple Health — a move that raises questions about how sensitive health data is stored, processed and protected. While OpenAI has emphasised encryption and internal privacy protections, no Australian legal framework currently governs the use of AI in this context with the same rigour applied to clinical software or medical device systems.
Under Australia’s Privacy Act and health data regulations, storing medical information requires strict controls, and any breach — intentional or not — could trigger legal obligations such as notifiable data breach reporting. However, generative AI platforms like ChatGPT operate outside the regulatory purview that governs traditional patient records, creating legal uncertainty.
Global and Local Perspectives on AI Regulation
Internationally, regulators are grappling with how to oversee emerging AI tools in healthcare. In the European Union, for instance, the AI Act sets out stringent risk classifications for systems that provide health-related advice or process personal data. Australia, by contrast, has yet to introduce a comprehensive legal regime specifically designed to manage health-focused AI applications, despite recent calls from industry bodies and academic experts for stronger governance and a national strategy.
Analysts argue that generative AI should be assessed for clinical safety, akin to other software used in medical settings, before being deployed at scale. A report from leading digital health experts notes that clear evidence of safety and appropriate regulation is lacking for many general-purpose AI tools in clinical contexts, and that Australia must develop sovereign capabilities and governance frameworks rather than simply importing approaches from other jurisdictions.
Misinformation and Clinical Risk: Not Just a Theoretical Concern
Medical researchers and clinicians point to the phenomenon known as "AI hallucination", where language models generate plausible-sounding but factually incorrect information, as a central risk. Without regulatory scrutiny or post-market studies, there is no systematic way to monitor how often, and under what circumstances, these systems give incorrect health guidance.
The risks extend beyond simple misinformation. Conversational agents may inadvertently offer dangerous or inappropriate recommendations, delay patients from seeking care, or undermine the doctor-patient relationship by generating unverified diagnoses. Studies argue that AI systems must be supported by clinical evidence before being trusted in safety-critical domains like healthcare, a standard not currently met by ChatGPT Health.
Consumer and Practitioner Education: A Critical Gap
Experts agree that alongside regulation, consumer education and professional training will be essential. Many Australians already use chatbots for everything from financial advice to health queries, often without understanding the limitations or differences between licensed professional guidance and algorithmic responses designed for general information only.
Health providers are also urged to develop internal policies on AI use, recognising that unregulated applications could expose health services to privacy breaches or clinical liability if patient information is mishandled or inappropriate recommendations are followed.
Where Australia Stands — and Where It Might Be Headed
The rollout of ChatGPT Health has crystallised broader debates about AI regulation in Australia’s health landscape. Without a clear regulatory framework, experts argue the technology could outpace laws designed to protect patient safety, data integrity and equitable access to care.
Stakeholders are calling for:
- Defined regulatory categories for AI tools that handle health data or offer clinical guidance
- Transparency requirements for safety studies and performance metrics
- Mandatory reporting standards for adverse outcomes linked to AI recommendations
- Consumer education campaigns about the role and limits of AI in healthcare
Australia now finds itself at a crossroads: regulate health AI in step with innovation, or allow unregulated tools to proliferate with minimal accountability. The decision will influence not only how technology integrates with health services, but how patients and clinicians interact with emerging digital tools in the decades ahead.