Why Australian GPs Are Slow to Adopt AI Clinical Tools (And Why That's Not Entirely a Bad Thing)
Walk into a major Australian teaching hospital and you’ll find AI tools screening radiology images, flagging sepsis risk, and helping triage nurses prioritise patients. Walk into the average suburban GP clinic and you’ll find a tired doctor running 40 minutes behind schedule, dictating notes between appointments, with no AI system in sight.
The gap between hospital AI adoption and general practice AI adoption in Australia is real and growing. But before we label GPs as technophobes or laggards, it’s worth understanding why the gap exists—and why some of the caution is clinically justified.
The Regulatory Landscape Is Murky at Best
Hospital-based AI tools typically operate within institutional governance frameworks. They pass through ethics committees, clinical safety reviews, and IT security assessments before they touch a single patient encounter. Hospitals have dedicated teams to manage this process.
GPs have none of that infrastructure. When a clinical decision support tool appears on the market, an individual GP is left asking: Is this TGA-approved as a medical device? What does my indemnity insurer think? If the AI suggests a diagnosis I wouldn’t have considered, and I follow it, and it’s wrong—where does liability sit?
These aren’t hypothetical questions. The TGA’s regulatory framework for AI-as-a-medical-device is still evolving. Most medical defence organisations have issued only general guidance. For a GP earning far less per consultation than a hospital receives per admission, the risk-reward calculation looks very different.
Workflow Disruption in a Six-Minute Consultation
Medicare's rebate structure has long incentivised short consultations, and the archetypal "six-minute medicine" appointment remains the economic reality for much of general practice. That's six minutes to take a history, examine the patient, make a clinical decision, write a script or referral, update the record, and usher the patient out before the next one walks in.
Adding an AI tool into that workflow isn’t trivial. Even a genuinely useful tool—say, an AI that analyses a skin lesion photo and provides a risk score—introduces new steps. Upload the image. Wait for processing. Interpret the result. Document it. Explain it to the patient. Each step takes seconds, but seconds compound across 40 patients a day.
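To make the compounding concrete, here is a minimal back-of-the-envelope sketch. The per-step timings are illustrative assumptions, not measured values, but the arithmetic is the point:

```python
# Back-of-the-envelope cost of adding an AI step to every consultation.
# All timings below are illustrative assumptions, not measured values.

extra_steps_seconds = {
    "upload image": 15,
    "wait for processing": 20,
    "interpret result": 20,
    "document result": 25,
    "explain to patient": 40,
}

patients_per_day = 40
per_patient = sum(extra_steps_seconds.values())        # seconds added per consultation
per_day_minutes = per_patient * patients_per_day / 60  # minutes added per clinic day

print(f"Extra time per patient: {per_patient} s")        # 120 s
print(f"Extra time per day:     {per_day_minutes:.0f} min")  # 80 min
```

Under these assumed timings, two extra minutes per patient becomes 80 minutes across a clinic day, roughly 13 additional six-minute consultations' worth of time. Even halving every estimate still costs 40 minutes a day, which is exactly the "running 40 minutes behind" picture from the opening.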
Hospital clinicians have support staff, longer consultation windows for complex cases, and IT departments that handle integration. GPs, particularly those in solo or small group practices, are doing everything themselves. The friction cost of adoption is disproportionately high.
The Evidence Base Is Still Thin for Primary Care
Here’s the uncomfortable truth: most AI clinical tools have been validated in hospital settings with hospital data. CSIRO’s Australian e-Health Research Centre has done substantial work on AI for medical imaging and clinical decision support, but much of it is oriented toward specialist and hospital contexts.
Primary care is different. The presenting complaints are vaguer. The pre-test probability of serious illness is lower. The patient population is broader. An AI trained on hospital chest X-rays may perform brilliantly at detecting pneumonia in patients sick enough to present to an emergency department, but its performance on incidental chest films ordered by a GP for a mild cough hasn’t been rigorously tested.
GPs are right to ask: does this tool work for my patients, in my setting, at my disease prevalence? If the validation data doesn’t answer that question, caution is appropriate.
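The prevalence point can be made precise with Bayes' theorem. Here is a minimal sketch, where the sensitivity, specificity, and prevalence figures are illustrative assumptions rather than measured performance of any real tool, showing how the same algorithm's positive predictive value collapses when it moves from an emergency department to a GP waiting room:

```python
# How positive predictive value (PPV) depends on disease prevalence.
# Sensitivity, specificity, and prevalence values are illustrative assumptions.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' theorem: P(disease | positive test result)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.90, 0.90  # same tool, same operating point in both settings

hospital = ppv(sens, spec, prevalence=0.30)  # ED chest films: disease common
gp       = ppv(sens, spec, prevalence=0.02)  # GP films for a mild cough: disease rare

print(f"PPV in hospital setting: {hospital:.0%}")  # ~79%
print(f"PPV in GP setting:       {gp:.0%}")        # ~16%
```

In this sketch, an identical algorithm goes from being right about four times in five when it flags disease to being wrong five times in six. Nothing about the model changed; only the population did. That is the quantitative core of the GP's question above.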
The Liability Gap Is a Genuine Deterrent
Medical indemnity is the elephant in the consulting room. If a GP uses an AI diagnostic tool and follows its recommendation, and the patient comes to harm, the legal territory is uncharted. Is the GP liable for trusting the algorithm? Is the software vendor liable for the output? Does the GP’s indemnity insurance even cover AI-assisted clinical decisions?
Until medical defence organisations like Avant or MDA National provide explicit guidance—and until case law or legislation clarifies the position—many GPs will reasonably conclude that the safest option is to stick with their own clinical judgment.
But Here’s Where the Caution Has Value
Primary care is the front door of the health system. GPs see patients at their most vulnerable, their most uncertain, and their most trusting. A misapplied AI tool in general practice doesn’t just affect one patient—it can erode trust in the broader primary care relationship that keeps communities healthy.
Hospitals adopting AI early do so with guardrails: oversight committees, audit trails, specialist supervision. GPs need equivalent safeguards scaled to the primary care context before adoption makes sense. That takes time to build, and it should.
There’s also an argument that GP caution creates useful pressure on AI developers. If the primary care market demands better validation and genuine workflow integration before it will buy, that pushes the industry toward higher standards. GPs cannot absorb the rough edges of early-stage tools, and their refusal to do so is a form of quality control.
What Needs to Happen
Three things would accelerate responsible GP adoption. First, the TGA needs clearer regulatory pathways for primary care AI tools. Second, medical indemnity insurers need explicit coverage positions on AI-assisted clinical decisions. Third, AI vendors need to invest in primary-care-specific validation with Australian patient populations—not just repackage hospital data.
Until then, the gap will persist. And while it has real costs in efficiency and diagnostic support, the reasons behind it reflect a profession that takes patient safety seriously enough to say: prove it works before we use it on our patients.
That’s not resistance. That’s clinical governance.