Rethinking Peer Review: What Aviation Can Teach Radiology about Performance Improvement
Radiology: Volume 259: Number 3-June 2011
David B. Larson, MD, MBA John J. Nance, JD
Since the mid-20th century, aviation has been transformed from a relatively high-risk human endeavor into the extremely low-risk enterprise that it is today. During the past 20 years, fatal accidents have occurred in fewer than one in every 4.5 million departures, and in fewer than one in 9 million departures since 2001 (1). The risk of dying in an airline accident is now infinitesimal.
Equipment improvement accounts for some of that record, but the primary driver was the ability of aviation to learn rapidly from its mistakes for the improvement of performance, especially human performance (2). While radiology is one of the most technologically advanced medical specialties, like most of the medical field it is a relative newcomer to the science of human performance improvement (3). Radiology can learn relatively cheaply, quickly, and effectively from the experience of those in aviation, a field that has paid for its knowledge in lives lost as well as dollars spent.
For the first three-quarters of its history, aviation treated human errors as unique events, attributable to the individual operators (4). This was analogous to the way that many radiology departments use peer review, the quality assurance process whereby a radiologist reviews the report of a prior imaging study and judges whether it was interpreted correctly. The most common peer-review scoring system is the American College of Radiology's RADPEER (5), which scores the original interpretation on a scale from 1 to 4, where 1 is concur with interpretation; 2, difficult diagnosis, not ordinarily expected to be made; 3, diagnosis should be made most of the time; and 4, diagnosis should be made almost every time, misinterpretation of findings.
The number of errors can be tallied to produce an error rate for an individual radiologist. Individual error rates can be compared with one another to identify outliers with unacceptable error rates; in such cases the department chief radiologist can be informed to determine the need for further training or other action (6). On the basis of this approach, sophisticated mathematical models are being developed to pinpoint error rates (7), and efforts are underway to embed peer-review scoring into picture archiving and communication systems, voice dictation, and electronic medical record systems (8).
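The tallying described above can be illustrated with a minimal sketch. This is not an implementation of any actual peer-review software; the data layout and the convention that RADPEER scores of 3 and 4 count as discrepancies are assumptions for illustration only.

```python
# Hypothetical sketch: tallying RADPEER peer-review scores into
# per-radiologist error rates. Data and field names are illustrative.
from collections import defaultdict

# Each entry: (radiologist identifier, RADPEER score 1-4).
reviews = [
    ("A", 1), ("A", 1), ("A", 3),
    ("B", 1), ("B", 2), ("B", 1), ("B", 4),
]

totals = defaultdict(int)   # reviews per radiologist
errors = defaultdict(int)   # discrepant reviews per radiologist

for radiologist, score in reviews:
    totals[radiologist] += 1
    if score >= 3:  # assumed convention: scores 3-4 = discrepancy
        errors[radiologist] += 1

# Error rate = discrepant reviews / total reviews for each radiologist.
error_rates = {r: errors[r] / totals[r] for r in totals}
```

Rates computed this way could then be compared across radiologists to flag outliers, as the text describes; the statistical validity of such comparisons is precisely what the models cited in reference 7 aim to improve.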
After aviation had relied on a similar approach to human error for decades, it became increasingly apparent to aviation experts by the 1970s that the underlying causes of human failures were a systemic problem and had to be treated as such. This was made painfully clear on the morning of December 1, 1974 (9).
Trans World Airlines (TWA) Flight 514, a Boeing 727 inbound from Columbus, Ohio, was scheduled to land at Washington National Airport. Because of high winds at National, the flight was diverted to Washington Dulles International Airport. The cloud ceiling was low, hiding the ground from view. At 1104 hours (11:04 AM), when the flight was 44 miles from the airport, the Dulles tower controller said, "TWA 514, you're cleared for a VOR/DME [an aircraft positioning system] approach to runway 12." The captain understood the instructions to mean that they could descend to the initial approach altitude of 1800 feet immediately. In fact, the controller meant that the aircraft had permission to make the approach according to the published charts, which dictated that it should not descend until it reached the Round Hill intersection, several miles ahead.
Shortly after 1107 hours, the captain expressed doubt about whether their altitude was supposed to be 1800 or 3400 feet on the basis of information on the unfamiliar chart. The captain