AI in Medical Imaging Isn't Just About Detection Anymore — It's About Workflow
Three years ago, the conversation about AI in medical imaging was dominated by one question: can AI read scans as accurately as radiologists? Papers were published. Benchmarks were established. The answer, for certain well-defined tasks like detecting lung nodules or identifying breast cancer on mammography, was increasingly “yes, or close to it.”
That question has been largely settled. For specific, narrow tasks with sufficient training data, AI systems can match or exceed average radiologist performance. The TGA has approved several AI-powered diagnostic tools for the Australian market (by inclusion on the Australian Register of Therapeutic Goods), and more are in the pipeline.
But accuracy was never the hard part. Integration is.
The Integration Gap
Walk into any Australian radiology department that’s deployed an AI diagnostic tool, and you’ll hear the same frustrations.
“It flags too many findings.” AI systems, particularly those trained to maximise sensitivity, produce false positive rates that create more work, not less. A chest X-ray AI that flags 15% of studies for urgent review when the true positive rate is only 2% leaves radiologists chasing false alarms on the remaining 13% of studies. In a busy department processing 300 studies per day, that’s 39 extra studies a day demanding attention. The net effect on workflow is negative.
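To make the arithmetic explicit, here’s a back-of-the-envelope sketch using the illustrative figures above (the function and its structure are mine, not any vendor’s tooling):

```python
# Back-of-the-envelope workload arithmetic from the paragraph above.
# All figures are the article's illustrative numbers, not vendor data.

def flag_workload(studies_per_day: int, flag_rate: float, true_positive_rate: float):
    """Daily flagged-study counts and precision for a triage AI."""
    flagged = studies_per_day * flag_rate
    true_pos = studies_per_day * true_positive_rate
    false_pos = flagged - true_pos
    precision = true_pos / flagged  # share of flags that are real findings
    return flagged, true_pos, false_pos, precision

flagged, tp, fp, ppv = flag_workload(300, 0.15, 0.02)
print(f"{flagged:.0f} flagged/day: {tp:.0f} true, {fp:.0f} false (precision {ppv:.0%})")
# 45 flagged/day: 6 true, 39 false (precision 13%)
```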
“It doesn’t integrate with our PACS.” Most AI tools run as separate systems alongside the Picture Archiving and Communication System (PACS) that radiologists use to view and report studies. This means radiologists need to check a separate screen, a separate application, or a separate notification system to see AI results. In a workflow where speed matters — emergency departments, screening programs — this friction undermines the value.
“It can’t handle our case mix.” AI systems trained on academic datasets don’t always perform well on the messy reality of clinical imaging. Post-surgical patients, unusual body habitus, poor-quality portable studies, paediatric cases — these represent a significant portion of real-world imaging that differs substantially from the curated datasets used to train and validate AI algorithms.
The point isn’t that AI in medical imaging doesn’t work. It’s that making it work within existing clinical workflows is a fundamentally different problem from making it work in a research setting.
What Good Integration Looks Like
The departments getting the most value from imaging AI share several characteristics.
Tight PACS integration. AI results appear in the same viewer the radiologist is already using. The finding is overlaid on the image, or flagged in the worklist, or appended to the preliminary report. The radiologist doesn’t need to open anything new or change their reading pattern. Health IT Central has documented how integration depth correlates directly with adoption rates.
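For illustration, the kind of structured result a viewer overlay or worklist flag might consume could look something like the sketch below. The field names are hypothetical, not any vendor’s PACS API; real integrations typically travel as DICOM Structured Reports or HL7/FHIR messages.

```python
# Illustrative shape of an AI result a viewer overlay might consume.
# Hypothetical fields, not a vendor schema.
ai_result = {
    "accession_number": "ACC-0042",        # links the result to the study in PACS
    "finding": "pneumothorax",
    "probability": 0.91,
    "bounding_box": [412, 128, 690, 366],  # pixel coords for the image overlay
    "model_version": "cxr-triage-2.3",     # needed for audit trails and recalls
}
```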
Worklist prioritisation, not just detection. Instead of flagging individual findings, the most useful implementations use AI to reorder the worklist. A chest X-ray with a high probability of pneumothorax moves to the top of the queue. A study with no significant findings drops lower. The radiologist reads in AI-guided priority order, which means critical findings get reported faster without the radiologist needing to explicitly interact with the AI at all.
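A minimal sketch of what that reordering amounts to, assuming each pending study arrives with a model-assigned probability for a critical finding (the Study fields are illustrative, not a vendor data model):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Study:
    accession: str
    received: datetime
    ai_probability: float  # model output in [0, 1]

def prioritised_worklist(studies: list[Study]) -> list[Study]:
    # Highest AI probability first; ties broken by arrival time so that
    # low-probability studies are still read in order and never starve.
    return sorted(studies, key=lambda s: (-s.ai_probability, s.received))
```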
Calibrated thresholds. The sensitivity-specificity trade-off matters enormously in practice. A system calibrated for a screening program (where missing a cancer is worse than a false positive) needs very different thresholds than a system used in an emergency department (where false positives create dangerous alert fatigue). Good implementations allow clinical teams to adjust these thresholds for their specific context.
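Here’s a toy sketch of how moving the threshold shifts that trade-off; the scores and labels stand in for a local validation set, which any real calibration would need:

```python
# Toy data standing in for a local validation set.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]     # model outputs
labels = [True, True, False, True, False, False]  # ground truth

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity at a given decision threshold."""
    tp = sum(s >= threshold and y for s, y in zip(scores, labels))
    fn = sum(s < threshold and y for s, y in zip(scores, labels))
    tn = sum(s < threshold and not y for s, y in zip(scores, labels))
    fp = sum(s >= threshold and not y for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Screening might run low (catch everything); an ED might run high
# (fewer interruptions).
for t in (0.2, 0.5, 0.8):
    sens, spec = sens_spec(scores, labels, t)
    print(f"threshold {t}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```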
Radiographer-facing tools. Some of the most practical AI applications aren’t for radiologists at all — they’re for radiographers at the point of acquisition. Quality check algorithms that flag if a study is likely non-diagnostic before the patient leaves the room. Positioning feedback that suggests adjustments. Protocol recommendation based on the clinical indication. These tools improve the quality of the imaging that reaches the radiologist, which improves everything downstream.
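As a sketch of what a point-of-acquisition quality gate might look like, assuming the acquisition workstation exposes basic image metrics (the metric names and thresholds here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AcquisitionMetrics:
    exposure_index: float      # vendor-normalised exposure indicator
    lung_fields_clipped: bool  # anatomy cut off by collimation
    rotation_degrees: float    # estimated patient rotation

def repeat_before_patient_leaves(m: AcquisitionMetrics) -> list[str]:
    """Return reasons a chest X-ray is likely non-diagnostic, if any."""
    reasons = []
    if not 100 <= m.exposure_index <= 800:   # hypothetical diagnostic range
        reasons.append("exposure outside diagnostic range")
    if m.lung_fields_clipped:
        reasons.append("lung fields clipped; widen collimation")
    if m.rotation_degrees > 10:
        reasons.append("patient rotated; reposition and repeat")
    return reasons
```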
The Australian Context
Australia’s imaging environment has specific characteristics that affect AI deployment.
Distributed delivery. Radiology services in Australia range from major metropolitan departments reading 500+ studies per day to regional practices where a single radiologist covers a broad case mix with limited sub-specialist support. AI has potentially enormous value in regional settings — providing a second opinion when there’s no subspecialist colleague to consult. But the implementation challenges are also greater because IT infrastructure, network bandwidth, and technical support are often limited.
Mixed public-private delivery. The public hospital system and private radiology groups have different procurement processes, different IT environments, and different incentive structures. An AI tool that works well in a large public hospital’s enterprise PACS environment may not integrate with the cloud-based systems used by private practice groups, and vice versa.
MBS considerations. The current MBS schedule doesn’t specifically fund AI-assisted radiology reporting. Radiologists using AI still bill the same item numbers as those without AI. This means the cost of AI tools comes directly out of practice revenue or hospital budgets with no additional reimbursement. Until the MBS recognises AI-assisted reporting — possibly with new item numbers or modified descriptors — the financial incentive for adoption is limited to efficiency gains.
The Path Forward
Medical imaging AI is going to become standard practice. The diagnostic capability is there. But the timeline depends less on algorithmic improvements and more on solving the integration, workflow, and reimbursement challenges.
For hospital IT and radiology leaders evaluating imaging AI, the questions to ask vendors have shifted. Don’t ask “how accurate is your algorithm?” Ask:
- How does it integrate with our specific PACS platform?
- What’s the false positive rate at clinically useful thresholds?
- Can our radiologists adjust sensitivity settings?
- What’s the impact on reading time per study?
- What infrastructure does it need and does our network support it?
- How does it handle edge cases and out-of-distribution studies?
These aren’t as exciting as breakthrough accuracy numbers. But they’re what determines whether AI actually improves patient care in your department or just adds another notification to ignore.