AI Triage in Emergency Departments: Early Results from Australian Hospitals
Emergency department triage nurses face an impossible task. They’ve got 90 seconds to assess whether chest pain is indigestion or an impending heart attack, whether abdominal pain needs urgent imaging or can wait. Get it wrong either way—undertriage or overtriage—and the consequences can be severe.
That’s why the early results from AI-assisted triage pilots at several Australian hospitals deserve attention. Not hype, not breathless declarations about revolution, but genuine clinical interest.
What’s Actually Being Tested
Three major pilot programs are underway. Royal Melbourne Hospital has been testing an AI system since mid-2025 that analyzes patient-reported symptoms, vital signs, and historical data to suggest an initial triage category. Princess Alexandra Hospital in Brisbane is running a similar trial focused specifically on pediatric presentations. And Westmead Hospital in Sydney has partnered with harrison.ai to test integration with their existing emergency systems.
The systems don’t replace triage nurses. They function as decision support tools, providing a suggested category alongside the information nurses typically consider. The nurse makes the final call, always.
The Numbers So Far
Royal Melbourne’s six-month data shows agreement rates between AI suggestions and experienced triage nurses sitting at around 87%. That’s respectable, but the real story lives in the disagreements. When researchers reviewed discordant cases, they found the AI was actually correct in roughly 40% of those instances—catching presentations that could have been undertriaged.
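The arithmetic behind those two figures is easy to sketch. Here's a minimal Python illustration using invented case tuples (the trial's raw case counts aren't published here, so the numbers below are purely for demonstration):

```python
def triage_concordance(cases):
    """Compute agreement between AI and nurse triage categories.

    cases: list of (ai_category, nurse_category, review_category) tuples,
    where review_category is the category a retrospective reviewer
    judged correct for the discordant cases.

    Returns (agreement_rate, ai_correct_in_discordant_rate).
    """
    if not cases:
        raise ValueError("no cases supplied")

    # Cases where the AI suggestion matched the nurse's final call.
    agree = [c for c in cases if c[0] == c[1]]
    agreement_rate = len(agree) / len(cases)

    # Of the discordant cases, how often did review side with the AI?
    discordant = [c for c in cases if c[0] != c[1]]
    ai_correct = [c for c in discordant if c[0] == c[2]]
    ai_correct_rate = (
        len(ai_correct) / len(discordant) if discordant else 0.0
    )
    return agreement_rate, ai_correct_rate


# Hypothetical example: 100 presentations, 87 concordant,
# 13 discordant of which 5 were judged correctly triaged by the AI.
cases = [(1, 1, 1)] * 87 + [(2, 3, 2)] * 5 + [(2, 3, 3)] * 8
agreement, ai_win_rate = triage_concordance(cases)
```

With those made-up inputs the function returns 87% agreement and roughly 38% of discordant cases favouring the AI, in the same ballpark as the reported trial figures.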
Princess Alexandra’s pediatric-focused system shows even higher concordance at 91%, likely because pediatric presentations follow more predictable patterns. Fever, respiratory distress, dehydration—the clinical decision trees are well-established.
What’s more interesting is what the systems struggle with: psychiatric presentations, complex polypharmacy patients, and anything involving significant social determinants of health. The AI reads vital signs and symptom descriptions well. Context? Not so much.
Clinical Perspectives
I spoke with several ED nurses involved in these trials, and their feedback isn’t what you’d expect. There’s no technophobia, no resistance to change. There’s pragmatism.
“It’s useful for straightforward presentations during high-volume periods,” one triage nurse told me. “When we’re slammed and I’ve got a waiting room full of people, having that second opinion on a chest pain case gives me confidence. But I wouldn’t trust it alone on anything ambiguous.”
Several clinicians mentioned the system’s value for junior nurses, particularly in regional hospitals where experienced triage nurses are scarce. The team working on the Princess Alexandra trial has focused heavily on this use case—supporting workforce gaps rather than replacing experienced staff.
The emergency physicians I spoke with were cautiously optimistic but emphasized the need for ongoing monitoring. “We need to track not just initial triage accuracy but patient outcomes,” one ED director noted. “Are we seeing faster treatment times for high-acuity patients? Are we catching time-sensitive conditions earlier? Those are the metrics that matter.”
The Implementation Reality
Here’s what doesn’t make headlines: the unglamorous work of integration. These systems need to connect with triage software, EMRs, pathology systems, and imaging databases. They need to handle interrupted workflows when nurses need to step away for a procedure. They need to function when the hospital WiFi goes down (because it always goes down).
Westmead’s pilot has spent significant time on user interface design. Turns out that if a system takes longer to consult than it saves, nurses won’t use it. The sweet spot seems to be presenting the AI suggestion simultaneously with the patient data, requiring no additional clicks or screens.
What Success Actually Looks Like
None of these hospitals are planning to remove nurses from triage. The goal isn’t automation—it’s augmentation. Reducing cognitive load during high-pressure periods. Providing decision support for complex cases. Catching the occasional presentation that might otherwise slip through.
The Australian Digital Health Agency has been watching these pilots closely, and early frameworks for governance and safety monitoring are taking shape. There’s talk of standardized metrics, required audit trails, and ongoing validation requirements.
If these systems eventually roll out more widely, it’ll be because they genuinely improve patient safety and clinician wellbeing. Not because of vendor promises or technological enthusiasm, but because the evidence supports their use.
That’s how it should be in healthcare. We don’t need innovation for innovation’s sake. We need tools that work, that integrate smoothly, and that make clinicians’ impossible jobs slightly more manageable.
The early results suggest AI-assisted triage might be one of those tools. Emphasis on might. We’ll know more in another year when longer-term outcome data becomes available.
Until then, the nurses are watching, testing, and—when appropriate—trusting. That seems like the right approach.