When the patient safety field began a decade ago with the publication of the IOM report on medical errors, one of its first thrusts was to import lessons from “safer” industries, particularly aviation. Most of these lessons - a focus on bad systems more than bad people, the importance of teamwork, the use of checklists, the value of simulation training - have served us well.
But one lesson from aviation has proved to be wrong, and we are continuing to suffer from this medical error: the unquestioning embrace of incident reporting (IR) systems as the way to learn about mistakes and near misses.
The Aviation Safety Reporting System (ASRS), by all accounts, has been central to commercial aviation’s remarkable safety record. Near misses and unsafe conditions are reported (unlike healthcare, aviation doesn’t need a reporting system for “hits” – they appear on CNN). The reports go to an independent agency (run by NASA, as it happens), which analyzes the cases looking for trends. When it finds them, it disseminates the information through widely read newsletters and websites; when it discovers a showstopper, ASRS personnel inform the FAA, which has the power to ground a whole fleet if necessary. Each year, the ASRS receives about 40,000 reports from the entire U.S. commercial aviation system.
In the early years of the patient safety movement, the successes of the ASRS led us to admonish hospital staff to “report everything – errors, near misses, everything!” Many caregivers listened to these admonitions (particularly nurses; few docs submit IRs, which leads IR systems to paint incomplete pictures of the breadth of hospital hazards) and reporting took off. At my hospital (UCSF Medical Center), for example, we now receive about 20,000 reports a year.
Yes, 20,000 reports – fully half of what the ASRS receives for the entire nation! And believe me, we don’t report everything. If we really did, I’d estimate that my one hospital would receive at least five times as many IRs: 100,000 yearly reports.
But even at 20,000, recall that we are only one hospital among 6,000 in the United States. Since we’re a relatively large hospital, let’s say the average hospital only collects one-quarter as many IRs as UCSF, 5,000/year. That would amount to 30 million reports a year in the United States! (Oh yeah, and then there are SNFs, nursing homes, and all of ambulatory care, but let’s leave them out for now.)
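For the numerically inclined, here is that back-of-the-envelope extrapolation spelled out in a few lines of Python. It is purely illustrative; the 6,000-hospital count and the one-quarter-of-UCSF reporting rate are the rough assumptions above, not measured data.

```python
# Back-of-the-envelope extrapolation using the rough figures from the text.
ucsf_reports_per_year = 20_000    # IRs filed at UCSF Medical Center annually
us_hospitals = 6_000              # approximate number of U.S. hospitals
avg_reports_per_hospital = ucsf_reports_per_year / 4  # assume the average hospital files 1/4 as many

national_reports = us_hospitals * avg_reports_per_hospital
print(f"Estimated IRs filed nationally: {national_reports:,.0f} per year")
# => Estimated IRs filed nationally: 30,000,000 per year

asrs_reports = 40_000             # ASRS reports for all of U.S. commercial aviation
print(f"That is roughly {national_reports / asrs_reports:,.0f}x the ASRS volume")
# => That is roughly 750x the ASRS volume
```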
Is this a problem? Yep-per, it is. First of all, IRs are all but useless for determining the actual frequency of errors, though they’re often used for this purpose. When I visit hospitals to talk about patient safety, they often show me their IR trends. If the number of IRs has gone up over the past year, they breathlessly proclaim, “This is great. We’ve succeeded in creating a reporting culture – the front-line personnel believe that we take errors seriously. We’re getting safer!”
That would sound more credible if hospitals with downward trends didn’t invariably shout, “This is great, we have fewer errors! Our efforts are paying off!”
The point is that we have no idea which one is true – IRs provide no useful information about the true frequency of errors in an institution.
But that isn’t their major flaw. The bigger problem is that IRs waste huge amounts of time and energy that could better be used elsewhere in patient safety (or in patient care, for that matter). Let’s return to my hospital for a moment (and let me apologize to those who thought there would be no math). I’d estimate that input time for the average IR is about 20 minutes (the system requires the reporter to log in, and then prompts her to describe the incident, the level of harm, the location, the involved personnel…).
Once an IR has been submitted, it is read by several people, including “category managers” such as individuals in charge of analyzing falls or medication errors; the charge nurse and the doctor on the relevant floor; and often a risk manager, the patient safety officer, and more. These individuals often post comments about the case to our computerized IR system, and some IRs generate additional fact finding and analyses. I’d estimate that this back-end work comes to about 60 minutes per IR.
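Put those two estimates together and the scale becomes clear. The sketch below is again just illustrative arithmetic from the figures above; the 2,000 work hours per full-time staffer per year is a round-number assumption.

```python
# Rough cost of the IR pipeline at one hospital, using the estimates above.
reports_per_year = 20_000   # IRs filed at UCSF annually
input_minutes = 20          # front end: reporter logs in and documents the incident
review_minutes = 60         # back end: category managers, charge nurse, risk manager, etc.

total_hours = reports_per_year * (input_minutes + review_minutes) / 60
print(f"Staff time spent on IRs: {total_hours:,.0f} hours per year")
# => Staff time spent on IRs: 26,667 hours per year

fte_hours = 2_000           # assumed work hours per full-time staffer per year
print(f"Roughly {total_hours / fte_hours:.0f} full-time employees' worth of effort")
# => Roughly 13 full-time employees' worth of effort
```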