Harvard Study: AI Outperforms Doctors in Diagnosing Emergency Cases

It seems the era of the “all-knowing physician” might soon take a digital form. While we are busy debating the accuracy of AI in writing emails, Harvard University has been testing something far more significant: its ability to save lives. In a recent and controversial study, AI models from OpenAI demonstrated a remarkable, and even surprising, superiority over human doctors in diagnosing complex medical conditions within emergency rooms—a setting where there is no room for error or delay.

The Experiment Details: Machine vs. Human Expertise

The study, published in the prestigious journal Science, was led by a team of physicians and computer scientists at Harvard Medical School and the Beth Israel Deaconess Medical Center. The researchers did not rely on theoretical tests; instead, they delved into the actual medical records of 76 patients who entered the emergency department and compared the diagnoses of human doctors with those of OpenAI’s latest models: o1 and 4o.

What made this study unique is that the models did not receive pre-processed or carefully curated data; they were fed the same raw information found in the electronic medical records at the time of diagnosis. The result? The o1 model in particular stood out, performing either nominally better than or on par with consultant physicians at every diagnostic assessment point.


What do the numbers say? A landslide victory in critical moments

Numbers do not lie, and in this study they favored “Dr. Bot.” In triage cases—the moment when the least information about a patient is available and urgency is at its peak—the o1 model provided an accurate or very nearly accurate diagnosis in 67% of cases. By contrast, one human doctor achieved an accuracy of 55%, while the other reached 50%.

Arjun Manrai, head of the AI lab at Harvard Medical School, stated that the model was tested against almost every possible benchmark and outperformed previous models as well as physician benchmarks. This means we are not talking about a mere “simulation” of human intelligence, but an analytical capability that sometimes exceeds the limits of human memory and focus within the chaotic environment of an emergency room.


Between Enthusiasm and Reality: Can we trust automated diagnosis?

Despite these impressive results, researchers and experts are urging caution. Adam Rodman, one of the physicians involved in the study, warned that there is currently no formal framework for accountability regarding AI diagnoses. Furthermore, patients ultimately still prefer to be guided by humans through life-or-death decisions and difficult treatments.

On the other hand, Dr. Christine Pantagani pointed out a fundamental issue: the study compared AI to internal medicine physicians rather than specialized emergency doctors. She also noted that the primary goal of an emergency doctor is not simply to guess the correct final diagnosis, but to ensure the patient is not suffering from a condition that could kill them in the next moment. So although AI has proven its prowess at connecting textual data, it still lacks the human touch and medical intuition of an experienced doctor in the field.

After these results, would you feel comfortable knowing that AI was the one who created your treatment plan in the emergency room?

Source:

techcrunch.com
