J Am Med Inform Assoc 10:115-128 doi:10.1197/jamia.M1074
  • The Practice of Informatics
  • Review Paper

Detecting Adverse Events Using Information Technology

  1. David W Bates,
  2. R Scott Evans,
  3. Harvey Murff,
  4. Peter D Stetson,
  5. Lisa Pizziferri,
  6. George Hripcsak
  Affiliations of the authors: Division of General Medicine, Department of Medicine, Brigham and Women's Hospital; Center for Applied Medical Information Systems, Partners Healthcare System; and Harvard Medical School, Boston, Massachusetts (DWB, HM, LP); LDS Hospital/Intermountain Health Care and University of Utah, Salt Lake City, Utah (RSE); Department of Medical Informatics, Columbia University, New York (PDS, GH)
  Correspondence and reprints to: David W. Bates, MD, MSc, Division of General Medicine and Primary Care, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115; e-mail: dbates@partners.org
  • Received 7 January 2002
  • Accepted 29 October 2002

Abstract

Context Although patient safety is a major problem, most health care organizations rely on spontaneous reporting, which detects only a small minority of adverse events. As a result, problems with safety have remained hidden. Chart review can detect adverse events in research settings, but it is too expensive for routine use. Information technology techniques can detect some adverse events in a timely and cost-effective way, in some cases early enough to prevent patient harm.

Objective To review methodologies of detecting adverse events using information technology, reports of studies that used these techniques to detect adverse events, and study results for specific types of adverse events.

Design Structured review.

Methodology English-language studies that reported using information technology to detect adverse events were identified using standard techniques. Only studies that contained original data were included.

Main Outcome Measures Adverse events, with specific focus on nosocomial infections, adverse drug events, and injurious falls.

Results Tools such as event monitoring and natural language processing can inexpensively detect certain types of adverse events in clinical databases. These approaches already work well for some types of adverse events, including adverse drug events and nosocomial infections, and are in routine use in a few hospitals. In addition, it appears likely that these techniques will be adaptable in ways that allow detection of a broad array of adverse events, especially as more medical information becomes computerized.

Conclusion Computerized detection of adverse events will soon be practical on a widespread basis.

Patient safety is an important issue and has received substantial national attention since the 1999 Institute of Medicine (IOM) report, “To Err is Human.”1 A subsequent IOM report, “Crossing the Quality Chasm,” underscored the importance of patient safety as a key dimension of quality and identified information technology as a critical means of achieving this goal.2 These reports suggest that 44,000–98,000 deaths annually in the U.S. may be due to medical errors.

Although the “To Err is Human” report brought patient safety into the public eye, the principal research demonstrating this major problem was reported years ago, with much of the data coming from the 1991 Harvard Medical Practice Study.3 4 The most frequent types of adverse events affecting hospitalized patients were adverse drug events, nosocomial infections, and surgical complications.4 Earlier studies identified similar issues,5 6 although their methodology was less rigorous.

Hospitals routinely underreport the number of events with potential or actual adverse impact on patient safety. The main reason is that hospitals historically have relied on spontaneous reporting to detect adverse events. This approach systematically underestimates the frequency of adverse events, typically by about 20-fold.7 8 9 Although manual chart review is effective in identifying adverse events in the research setting,10 it is too costly for routine use.

Another approach to finding events in general and adverse events in particular is computerized detection. This method generally uses computerized data to identify a signal that suggests the possible presence of an adverse event, which can then be investigated by human intervention. Although this approach still typically involves going to the chart to verify the event, it is much less costly than review of unscreened charts,11 because only a small proportion of charts need to be reviewed and the review can be highly focused.

This paper reviews the evidence regarding the use of electronic tools to detect adverse events, first based on the type of data, including ICD-9 codes, drug and laboratory data, and free text, and then on the type of tool, including keyword and term searches and natural language processing. We then discuss the evidence regarding the use of these tools to identify nosocomial infections, adverse drug events in both the inpatient and outpatient settings, falls, and other types of adverse events. The focus of this discussion is the detection of events after they occur, although such tools can also be used to prevent or ameliorate many events.

Electronic Tools for Detecting Adverse Events

Developing and maintaining a computerized screening system generally involve several steps. The first and most challenging step is to collect patient data in electronic form. The second step is to apply queries, rules, or algorithms to the data to find cases with data that are consistent with an adverse event. The third step is to determine the predictive value of the queries, usually by manual review.
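A minimal Python sketch of these three steps follows; the record format, the single rule, and the reviewer judgments are invented for illustration and are not drawn from any system described in this review.

```python
# Step 1: patient data already collected in electronic form (illustrative).
records = [
    {"id": 1, "potassium": 6.8},
    {"id": 2, "potassium": 4.1},
]

# Step 2: apply a query/rule whose result is consistent with an adverse event.
RULES = {"possible hyperkalemia": lambda r: r.get("potassium", 0.0) > 6.0}

flagged = [(r["id"], name) for r in records
           for name, rule in RULES.items() if rule(r)]

# Step 3: manual review of the flagged charts yields the predictive value.
reviewed = {(1, "possible hyperkalemia"): True}   # hypothetical reviewer judgment
ppv = sum(reviewed.values()) / len(flagged)
print(flagged, f"positive predictive value = {ppv:.2f}")
```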

The data source most often applied to patient safety work is the administrative coding of diagnoses and procedures, usually in the form of ICD-9-CM and CPT codes. This coding represents one of the few ubiquitous sources of clinically relevant data. The usefulness of this coding—if it is accurate and timely—is clear. The codes provide direct and indirect evidence of the clinical state of the patient, comorbid conditions, and the progress of the patient during the hospitalization or visit. For example, administrative data have been used to screen for complications that occur during the course of hospitalization.12 13

However, because administrative coding is generated for reimbursement and legal documentation rather than for clinical care, its accuracy and appropriateness for clinical studies are variable at best. The coding suffers from errors, lack of temporal information, lack of clinical content,15 and “code creep”—a bias toward higher-paying diagnosis-related groups (DRGs).16 Coding is usually done after discharge or completion of the visit; thus its use in real-time intervention is limited. Adverse events are poorly represented in the ICD-9-CM coding scheme, although some events are present (for example, 39.41 “control of hemorrhage following vascular surgery”). Unfortunately, the adverse event codes are rarely used in practice.17

Despite these limitations, administrative data are useful in detecting adverse events. Such events may often be inferred from conflicts in the record. For example, a patient whose primary discharge diagnosis is myocardial infarction but whose admission diagnosis is not related to cardiac disease (e.g., urinary tract infection) may have suffered an adverse event.
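A sketch of such a conflict screen, assuming ICD-9-CM diagnosis codes (410.x for acute myocardial infarction); the stay data are invented:

```python
def is_cardiac(icd9: str) -> bool:
    # ICD-9-CM circulatory-system codes fall roughly in the 390-459 range.
    try:
        return 390 <= int(icd9.split(".")[0]) <= 459
    except ValueError:
        return False

stays = [
    {"id": "A", "admit_dx": "599.0", "discharge_dx": "410.1"},  # UTI in, MI out
    {"id": "B", "admit_dx": "410.1", "discharge_dx": "410.1"},  # cardiac on admission
]

# Flag stays discharged with acute MI (410.x) but admitted for a
# non-cardiac problem, per the conflict described above.
flags = [s["id"] for s in stays
         if s["discharge_dx"].startswith("410") and not is_cardiac(s["admit_dx"])]
print(flags)  # -> ['A']
```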

Pharmacy data and clinical laboratory data represent two other common sources of coded data. These sources supply direct evidence for medication and laboratory adverse events (e.g., dosing errors, clinical values out of range). For example, applications have screened for adverse drug reactions by finding all of the orders for medications that are used to rescue or treat adverse drug reactions—such as epinephrine, steroids, and antihistamines.18 19 20 Anticoagulation studies can utilize activated partial thromboplastin times, a laboratory test reflecting adequacy of anticoagulation. In addition, these sources supply information about the patient's clinical state (a medication or laboratory value may imply a particular disease), corroborating or even superseding the administrative coding. Unlike administrative coding, pharmacy and laboratory data are available in real time, making it possible to intervene in the care of the patient.
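As an illustration, a screen of this kind might scan pharmacy orders for rescue drugs and laboratory results for out-of-range coagulation values. The drug names below follow the classes cited above, and the aPTT cutoff is an assumed, illustrative threshold:

```python
RESCUE_DRUGS = {"epinephrine", "diphenhydramine", "methylprednisolone"}
APTT_HIGH_SECONDS = 100.0   # assumed cutoff suggesting over-anticoagulation

def pharmacy_lab_signals(orders, aptt_results):
    # Orders is a set of drug names; aptt_results a list of values in seconds.
    signals = [f"rescue drug ordered: {d}" for d in sorted(orders & RESCUE_DRUGS)]
    signals += [f"aPTT {v} s exceeds threshold" for v in aptt_results
                if v > APTT_HIGH_SECONDS]
    return signals

print(pharmacy_lab_signals({"heparin", "diphenhydramine"}, [54.0, 132.0]))
```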

With increasing frequency, hospitals and practices are installing workflow-based systems such as inpatient order entry systems and ambulatory care systems. These systems supply clinically rich data, often in coded form, which can support sophisticated detection of adverse events. If providers use the systems in real time, it becomes possible to intervene and prevent or ameliorate patient harm.

The detailed clinical history, the evolution of the clinical plan, and the rationale for the diagnosis are critical to identifying adverse events and to sorting out their causes. Yet this information is rarely available in coded form, even with the growing popularity of workflow-based systems. Visit notes, admission notes, progress notes, consultation notes, and nursing notes contain important information and are increasingly available in electronic form. However, they are usually available in uncontrolled, free-text narratives. Furthermore, reports from ancillary departments such as radiology and pathology are commonly available in electronic narrative form. If the clinical information contained in these narrative documents can be turned into a standardized format, then automated systems will have a much greater chance of identifying adverse events and even classifying them by cause.

A study by Kossovsky et al.22 found that distinguishing planned from unplanned readmissions required narrative data from discharge summaries and concluded that natural language processing would be necessary to separate such cases automatically. Roos et al.23 used claims data from Manitoba to identify complications leading to readmission and found reasonable predictive value, but similar attempts to identify whether or not a diagnosis represented an in-hospital complication of care based on claims data met with difficulties resolved only through narrative data (discharge abstracts).

A range of approaches is available to unlock coded clinical information from narrative reports. The simplest is to use lexical techniques to match queries to words or phrases in the document. A simple keyword search, similar to what is available on Web search engines and MEDLINE, can be used to find relevant documents.12 25 26 27 This approach works especially well when the concepts in question are rare and unlikely to be mentioned unless they are present.26 A range of improvements can be made, including stemming prefixes and suffixes to improve the lexical match, mapping to a thesaurus such as the Unified Medical Language System (UMLS) Metathesaurus to associate synonyms and concepts, and simple syntactic approaches to handle negation. A simple keyword search was fruitful in one study of adverse drug events based on text from outpatient encounters.17 The technique uncovered a large number of adverse drug events, but its positive predictive value was low (0.072). Negative and ambiguous terms had the most detrimental effect on performance, even after the authors employed simple techniques to avoid the problem (for example, avoiding sentences with any mention of negation).
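A minimal version of such a keyword screen, including the crude workaround of skipping any sentence that mentions negation; the keyword and negation lists are illustrative:

```python
import re

KEYWORDS = {"rash", "anaphylaxis", "cough"}
NEGATIONS = {"no", "not", "denies", "without"}

def keyword_hits(note: str) -> set:
    hits = set()
    for sentence in re.split(r"[.!?]", note.lower()):
        words = set(re.findall(r"[a-z]+", sentence))
        if words & NEGATIONS:
            continue                      # avoid sentences mentioning negation
        hits |= words & KEYWORDS
    return hits

print(keyword_hits("Patient denies rash. New cough since starting lisinopril."))
# -> {'cough'}
```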

Natural language processing28 29 promises improved performance by better characterizing the information in clinical reports. Two independent groups have demonstrated that natural language processing can be as accurate as expert human coders for coding radiographic reports as well as more accurate than simple keyword methods.30 31 32 A number of natural language processing systems are based on symbolic methods such as pattern matching or rule-based techniques and have been applied to health care.30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 These systems have varied in approach: pure pattern matching, syntactic grammar, semantic grammar, or probabilistic methods, with different tradeoffs in accuracy, robustness, scalability, and maintainability. These systems have done well in domains, such as radiology, in which the narrative text is focused, and the results for more complex narrative such as discharge summaries are promising.36 41 46 47 48 49 50

With the availability of narrative reports in real time, automated systems can intervene in the care of the patient in complex ways. In one study, a natural language processor was used to detect patients at high risk for active tuberculosis infection based on chest radiographic reports.45 If such patients were in shared rooms, respiratory isolation was recommended. This system cut the missed respiratory isolation rate approximately in half.

Given clinical data sources, which may include medication, laboratory, and microbiology information as well as narrative data, the computer must be programmed to select cases in which an adverse event may have occurred. In most patient safety studies, someone with knowledge of patient safety and database structure writes queries or rules to address a particular clinical area. For example, a series of rules to address adverse drug events can be written.17 One can broaden the approach by searching for general terms relevant to patient safety or look for an explicit mention of an adverse drug event or reaction in the record. Automated methods to produce algorithms may also be possible. For example, one can create a training set of cases in which some proportion is known to have suffered an adverse event. A machine learning algorithm, such as a decision tree generator, a neural network, or a nearest neighbor algorithm, can be used to categorize new cases based on what is learned from the training set.
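A sketch of the machine-learning route, using a decision tree from scikit-learn; the two features (ratio of latest to baseline creatinine, and whether an antidote was ordered) and all labels are fabricated for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Training set: cases whose adverse-event status is known from chart review.
X_train = [[2.4, 1], [1.0, 0], [2.1, 0], [1.1, 1], [0.9, 0]]
y_train = [1, 0, 1, 0, 0]   # 1 = adverse event confirmed by reviewers

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X_train, y_train)

# Categorize new, unreviewed cases based on what was learned.
print(clf.predict([[2.6, 1], [1.0, 0]]))  # -> [1 0]
```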

Finally, the computer-generated signals must be assessed for the presence of adverse events. Given the relatively low sensitivity and specificity that may occur in computer-based screening,17 51 it is critical to verify the accuracy of the system. Both internal and external validations are important. Manual review of charts can be used to estimate sensitivity, specificity, and predictive value. Comparison with previous studies at other institutions also can serve to calibrate the system.
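The validation arithmetic is straightforward once flagged and unflagged charts have been reviewed; the counts below are illustrative only:

```python
# Confusion-matrix counts from a hypothetical chart-review validation.
tp, fp, fn, tn = 40, 60, 10, 890

sensitivity = tp / (tp + fn)   # fraction of true events the monitor flagged
specificity = tn / (tn + fp)   # fraction of event-free charts left unflagged
ppv = tp / (tp + fp)           # fraction of alerts that were real events

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} ppv={ppv:.2f}")
```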

Identification of Studies Using Electronic Tools to Detect Adverse Events

To identify studies assessing the use of information technology to detect adverse events, we performed an extensive search of the literature. English-language studies involving adverse event detection were identified by searching 1966–2001 MEDLINE records with two Medical Subject Headings (MeSH), Iatrogenic Disease and Adverse Drug Reporting Systems; with the MeSH Entry Term, Nosocomial Infection; and with key words (adverse event, adverse drug event, fall, and computerized detection). In addition, the bibliographies of original and review articles were hand-searched, and relevant references were cross-checked with those identified through the computer search. Two of the authors (HJM and PDS) initially screened titles and abstracts of the search results and then independently reviewed and abstracted data from articles identified as relevant.

Studies were included in the review if they contained original data about computerized methods to detect nosocomial infections, adverse drug events, adverse drug reactions, adverse events, or falls. We excluded studies that focused on adverse event prevention strategies, such as physician order entry or clinical decision support systems, and did not include detailed information regarding methods for adverse event detection. We also excluded studies of computer programs designed to detect drug-drug interactions.

Included studies evaluated the performance of a diagnostic test (an adverse event monitor). The methodologic quality of each study was determined using previously described criteria for assessing diagnostic tests.52 Studies were evaluated for the inclusion of a “gold standard.” For the purpose of this review the gold standard was manual chart review, with the ultimate judgment of an adverse event performed by a clinician trained in adverse event evaluation. Furthermore, the gold standard had to be a blinded comparison applied to charts independently of the application of the study tool. Only studies that evaluated their screening tool against a manual chart review of records without alerts were considered to have properly utilized the gold standard.

Reviewers abstracted information concerning the patients included, the type of event monitor implemented, the outcome assessed, the signals used for detection, the performance of the monitor, and any barriers to implementation described by the authors. The degree of manual review necessary to perform the initial screening for an adverse event was assessed to determine the level of automation associated with each monitor. An event monitor that used signals from multiple data sources and generated alerts reviewed directly by the clinician making the final adverse event judgments was considered “high-end” automation. An event monitor that relied on manual entry of specific information into the monitor for an alert to be generated was considered “low-end.” All disagreements were settled by consensus of the two reviewers.

Twenty-five studies were initially identified for review (Table 1). Of these studies, seven included a gold standard in the assessment of the screening tool (Table 2).

Table 1

Studies Evaluating Computerized Adverse Event Monitors

Table 2

Results and Barriers to Implementation of Studies Evaluating an Adverse Event Monitor Using a Gold Standard

Finding Specific Types of Adverse Events

Frequent types of adverse events include nosocomial infections, adverse drug events (ADEs), and falls. Substantial work has been done to detect each by using information technology techniques.

Nosocomial Infections

For more than 20 years before the recent interest in adverse events, surveillance and reporting of nosocomial (hospital-acquired) infections had been required for hospital accreditation.53 In 1970, the Centers for Disease Control set up national guidelines and provided courses to train infection control practitioners to report infection rates using a standard method.54 However, the actual detection of nosocomial infections was based mainly on manual methods, and this process consumed most of infection control practitioners' time.

A number of groups have since developed tools to assist providers in detecting nosocomial infections, using computerized detection approaches.55 56 These tools typically work by searching clinical databases of microbiology and other data (Figure 1) and producing a report that infection control practitioners can use to assess whether a nosocomial infection is present (Figure 2). This approach has been highly effective. In a comparison between computerized surveillance and manual surveillance, the sensitivities were 90% and 76%, respectively.55 Analysis revealed that shifting to computerized detection followed by practitioner verification saved more than 65% of the infection control practitioners' time and identified infections much more rapidly than manual surveillance. Most infections that were missed by computer surveillance could have been identified with additions or corrections to the medical logic modules.
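A much-simplified sketch of a microbiology-based screen appears below; it uses the common working rule that a positive culture drawn 48 hours or more after admission suggests hospital onset. The actual LDS logic modules are far richer than this.

```python
from datetime import datetime, timedelta

HOSPITAL_ONSET = timedelta(hours=48)   # common working definition, simplified

def nosocomial_candidates(admit_time, cultures):
    # Flag positive cultures drawn >= 48 h after admission for
    # review by an infection control practitioner.
    return [c for c in cultures
            if c["positive"] and c["drawn"] - admit_time >= HOSPITAL_ONSET]

admit = datetime(2002, 3, 1, 8, 0)
cultures = [
    {"drawn": datetime(2002, 3, 1, 9, 0), "positive": True, "organism": "E. coli"},
    {"drawn": datetime(2002, 3, 4, 7, 0), "positive": True, "organism": "S. aureus"},
]
print(nosocomial_candidates(admit, cultures))   # only the day-4 culture is flagged
```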

Figure 1

Steps involved in computerized surveillance for nosocomial infections. This figure illustrates the LDS Hospital structure for nosocomial infection surveillance, including the key modules, which must interact for successful surveillance.

Figure 2

Example of an alert for a nosocomial infection. This report from the Infectious Disease Monitor program at LDS Hospital aggregates substantial clinical detail, which makes it easier for an infection control provider to assess rapidly whether a nosocomial infection is present.

Adverse Drug Events in Inpatients

Hospital information systems can be used to identify adverse drug events (ADEs) by looking for signals that an ADE may have occurred and then directing those signals to someone—usually a clinical pharmacist—who can investigate.19 Examples of signals include laboratory test results, such as a doubling in creatinine, high serum drug levels, use of drugs often used to treat the symptoms associated with ADEs, and use of antidotes.
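A sketch combining the signals just listed; the thresholds and the antidote list are assumed for illustration:

```python
ANTIDOTES = {"naloxone", "flumazenil", "vitamin K"}
DIGOXIN_HIGH = 2.0   # ng/mL; assumed cutoff for a high serum level

def ade_signals(patient):
    signals = []
    creatinine = patient.get("creatinine", [])          # serial values
    if len(creatinine) >= 2 and creatinine[-1] >= 2 * creatinine[0]:
        signals.append("serum creatinine doubled")
    if patient.get("digoxin_level", 0.0) > DIGOXIN_HIGH:
        signals.append("high digoxin level")
    signals += [f"antidote ordered: {d}"
                for d in patient.get("orders", []) if d in ANTIDOTES]
    return signals   # routed to a clinical pharmacist for investigation

print(ade_signals({"creatinine": [0.9, 2.1], "orders": ["naloxone"]}))
```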

Before developing its computerized ADE surveillance program, LDS Hospital had only ten ADEs reported annually from approximately 25,000 discharged patients. The computerized surveillance identified 373 verified ADEs in the first year and 560 in the second year.20 A number of additional signals or flags were added to improve the computerized surveillance during the second year.

Others have developed similar programs.11 57 58 For example, Jha et al. used the LDS rule base as a starting point, assessed the use of 52 rules for identifying ADEs, and compared the performance of the ADE monitor with chart review and voluntary reporting. In 21,964 patient-days, the ADE monitor found 275 ADEs (rate: 9.6 per 1000 patient-days), compared with 398 (rate: 13.3 per 1000 patient-days) using chart review. Voluntary reporting identified only 23 ADEs. Surprisingly, only 67 ADEs were detected by both the computer monitor and chart review. The computer monitor performed better than chart review for events that were associated with a change in a specific parameter (such as a change in creatinine), whereas chart review did better for events associated with symptom changes, such as altered mental status. If more clinical data—in particular, nursing and physician notes—had been available in machine-readable form, the sensitivity of the computer monitor could have been improved. The time required for the computerized monitor was approximately one-sixth that required for chart review.

A problem with broader application of these methods has been that computer monitors use both drug and laboratory data, and in many hospitals the drug and laboratory databases are not integrated. Nonetheless, this approach can be successful in institutions with less sophisticated information systems.58 In a hospital that did not have a linkage between the drug and laboratory databases, Senst et al. downloaded information from both to create a separate database that was used to detect ADEs. Not all of the rules could be applied to this separate database, but a high proportion could be, and the resulting application successfully identified a large number of ADEs. Furthermore, the epidemiology of the events found differed from prior reports—in particular, admissions caused by ADEs in psychiatric patients were frequent—and this information proved useful in targeting improvement strategies.

Adverse Drug Events in Outpatients

Although many studies address the incidence of ADEs in inpatients, fewer data are available regarding ADE rates in the outpatient setting. Honigman et al. hypothesized that, with electronic medical records, it would be possible to detect many ADEs using techniques analogous to those used in the inpatient setting. They used four approaches to detect ADEs: ICD-9 codes, allergy records, computerized event monitoring, and free-text searching of patient notes for drug–symptom pairs (e.g., cough and ACE inhibitor). In an evaluation of one year of electronic medical record data for 23,064 patients, 15,665 of whom came for care, 864 ADEs were identified. Altogether, 91% of the ADEs were identified using text searching, 6% with allergy records, 3% with the computerized event monitor, and only 0.3% with ICD-9 coding. The dominance of text searching was a surprise and emphasizes the importance of having clinical information in the electronic medical record, even if it is not coded.
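A toy version of the drug–symptom pair search; the pair list (with lisinopril standing in for the ACE inhibitor class) and the note text are invented, and the study's dictionary was far larger:

```python
import re

PAIRS = [("lisinopril", "cough"), ("warfarin", "bleeding")]

def pair_hits(note: str):
    text = note.lower()
    return [(drug, symptom) for drug, symptom in PAIRS
            if re.search(rf"\b{drug}\b", text) and re.search(rf"\b{symptom}\b", text)]

print(pair_hits("On lisinopril; reports a persistent dry cough."))
# -> [('lisinopril', 'cough')]
```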

Falls

Inpatient falls are relatively common and are widely recognized as causing significant patient morbidity and increased costs. Several interventions have been found to decrease fall rates.59 Hripcsak, Wilcox, and Stetson used this domain as a test area for natural language processing. They began by looking for any radiology report (e.g., x-ray, head CT, MRI) generated after the second day of hospitalization that indicated a patient fall as the reason for the exam (e.g., R/O fall, S/P fall). They also counted the number of radiology reports in which a fracture was found (thus exploiting the ability of natural language processing to handle negation). They found that 1,447 of 553,011 inpatient visits had at least one report to rule out a fall (2.6 falls per thousand admissions), and 14% of those involved a fracture (overall rate of injurious falls: 0.35 per thousand). The number of reports was within the range found in the literature using chart review.60
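The negation handling can be approximated with a simple window check of the kind used by NegEx-style tools; this is only a rough stand-in for the full natural language processor the study used, and the trigger phrases and window size are illustrative:

```python
import re

NEGATION = r"\b(no|not|without|negative for)\b"

def affirmed_fracture(report: str) -> bool:
    # Count a fracture mention only when no negation phrase
    # appears in a short window of preceding text.
    text = report.lower()
    for m in re.finditer(r"\bfracture\b", text):
        window = text[max(0, m.start() - 40):m.start()]
        if not re.search(NEGATION, window):
            return True
    return False

print(affirmed_fracture("S/P fall. No acute fracture identified."))  # False
print(affirmed_fracture("S/P fall. Nondisplaced rib fracture."))     # True
```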

Detection of Other Types of Adverse Events

The “holy grail” in computerized adverse event detection has been a tool to detect a large fraction of all adverse events, including not only the types of events mentioned in this report, but also other frequent adverse events such as surgical events, diagnostic failures, and complications of procedures. Such a tool could be used by hospitals for routine detection of adverse events on an ongoing basis and in real time. Preliminary studies suggest that techniques such as term searching and natural language processing in reviewing electronic information hold substantial promise for detecting a large number of diverse adverse events affecting inpatients.61 The tools would search discharge summaries, progress notes, and computerized sign-outs as well as other types of electronic data to look for signals that suggest the presence of an adverse event.

Conclusions

The current approach used by most organizations to detect adverse events—spontaneous reporting—is clearly insufficient. Computerized techniques for identifying adverse drug events and nosocomial infections are sufficiently developed for broad use. They are much more accurate than spontaneous reporting and more timely and cost-effective than manual chart review. Research will probably allow development of techniques that use tools such as natural language processing to mine electronic medical records for other types of adverse events. We believe that a key benefit of electronic medical records will be that they can be used to detect the frequency of adverse events and to develop methods to reduce the number of such events.

Acknowledgments

The authors thank Adam Wilcox, PhD, for his role in acquiring the Columbia-Presbyterian Medical Center falls data. This work was supported in part by grants from the Agency for Healthcare Research and Quality (U18 HS11046-03), Rockville, MD, and from the National Library of Medicine (R01 LM06910), Bethesda, MD.

References
