J Am Med Inform Assoc 20:305-310 doi:10.1136/amiajnl-2012-001055
  • Research and applications

Understanding and preventing wrong-patient electronic orders: a randomized controlled trial

  1. William N Southern1,2
  1. 1Departments of Medicine, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
  2. 2Division of Hospital Medicine, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
  3. 3Division of Infectious Diseases, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
  4. 4Division of General Internal Medicine, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
  5. 5Departments of Medicine, Epidemiology and Population Health, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
  6. 6Family and Social Medicine, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, NY, USA
  7. 7Boston College, Chestnut Hill, Massachusetts, USA
  8. 8Harvard University, Cambridge, Massachusetts, USA
  9. 9Emerging Health Information Technology, Bronx, New York, USA
  1. Correspondence to Dr Jason S Adelman, Montefiore Medical Center, 111 East 210th Street, Bronx, NY 10467, USA; jadelman@montefiore.org
  1. Contributors JSA and CBS had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: JSA, MAB, SHR, HWC, CBS. Acquisition of data: DAB, SJL, GEK, JMW, SHR. Analysis and interpretation of data: CBS, JSA, WNS. Drafting of the manuscript: JSA, WNS, CBS. Critical revision of the manuscript for important intellectual content: JSA, WNS, CBS, GEK, JMW, MAB, SHR, HWC, SJL, DAB. Statistical analysis: CBS, JSA. Obtained funding: GEK, JSA. Study supervision: JSA.

  • Received 21 April 2012
  • Accepted 4 June 2012
  • Published Online First 29 June 2012

Abstract

Objective To evaluate methods for estimating and preventing wrong-patient electronic orders in computerized provider order entry systems in a two-phase study.

Materials and methods In phase 1, from May to August 2010, the effectiveness of a ‘retract-and-reorder’ measurement tool was assessed. The tool identifies orders placed on one patient, promptly retracted, and then reordered by the same provider on a different patient, and treats them as a marker for wrong-patient electronic orders. This tool was then used to estimate the frequency of wrong-patient electronic orders in four hospitals in 2009. In phase 2, from December 2010 to June 2011, a three-armed randomized controlled trial was conducted to evaluate the efficacy of two distinct interventions aimed at preventing these errors by requiring reverification of patient identification: an ‘ID-verify alert’ and an ‘ID-reentry function’.

Results Provider interviews confirmed 170 of 223 retract-and-reorder events as true wrong-patient electronic orders, yielding a positive predictive value of 76.2% (95% CI 70.6% to 81.9%). Using this tool, it was estimated that 5246 electronic orders were placed on wrong patients in 2009. In phase 2, 901 776 ordering sessions among 4028 providers were examined. Compared with control, the ID-verify alert reduced the odds of a retract-and-reorder event (OR 0.84, 95% CI 0.72 to 0.98), but the ID-reentry function reduced the odds by a larger magnitude (OR 0.60, 95% CI 0.50 to 0.71).

Discussion and conclusion Wrong-patient electronic orders occur frequently with computerized provider order entry systems, and electronic interventions can reduce the risk of these errors occurring.

Background

Currently, at least 70 000 US physicians use computerized provider order entry (CPOE) systems to place orders.1,2 This number is expected to rise sharply in coming years as hospitals take advantage of federal incentives for adopting electronic health record technology.3 Although CPOE systems are associated with a reduction in some types of medical errors,4–7 certain types of errors may occur frequently in these systems, including placing orders on the wrong patient.8–13 To date, there has been neither a reported method for efficiently measuring wrong-patient electronic orders nor a proven intervention for preventing them. To address this, we aimed to develop a reliable method to measure the frequency of wrong-patient electronic orders, study their underlying root causes, and test electronic interventions designed to avert them in a randomized controlled trial.

In March 2011 the Institute for Safe Medication Practices published a case report of a physician who accidentally ordered a sedative and paralytic agent for the wrong patient using a CPOE system, which resulted in respiratory arrest and death.14 The danger of wrong-patient electronic orders was further highlighted by one hospital's report that after implementing a CPOE system, medications were prescribed for the wrong patient several times per month.15 In 2003, the US Pharmacopeia analyzed 7029 voluntarily reported medication errors over a 7-month period and found a mean of nine wrong-patient errors at each of 120 participating institutions using a CPOE system.16 This report probably underrepresented the true extent of wrong-patient electronic orders, as voluntary reporting is known to be an unreliable method for identifying errors.17,18 Automated surveillance of electronic clinical data has been demonstrated to be a more effective approach for identifying errors,19–21 but no automated method for identifying wrong-patient electronic orders has been described. We developed a simple and reliable automated method for measuring wrong-patient electronic orders that could be used at any institution with an electronic ordering system.

We hypothesized that wrong-patient orders are sometimes recognized by the orderer shortly after entry, promptly retracted, and then reentered on the correct patient. Koppel et al22 found that medication orders discontinued within 2 h of placement were a good predictor of prescribing errors (positive predictive value 55%, 95% CI 46% to 64%). Such errors, caught and corrected by the ordering provider before being carried out, are examples of near miss errors, which have been shown to occur up to 100 times more frequently than adverse events.23 Safety research has demonstrated that near miss errors share the same causal pathways as errors that cause harm.23 Tools that measure near miss errors may thus uncover faulty system designs that can lead to harmful errors, and interventions that prevent near miss wrong-patient electronic orders should also prevent the wrong-patient electronic orders that reach the patient and cause harm. Accordingly, the Agency for Healthcare Research and Quality has included near miss errors in the Common Formats, the standard definitions used to facilitate the collection and reporting of patient safety events to patient safety organizations.24

Common types of errors that lead to wrong-patient orders are juxtaposition errors, in which the wrong patient is selected from a list of names by mis-clicking,8,10–13 and interruption errors, in which providers are interrupted while toggling between patients.13 Other reported causes include small font size,8 failure to log off between providers,8 and the ease of transposing digits when entering medical record numbers.10 In a study that tracked provider eye movements during CPOE use, none of the providers looked at a second identifier before selecting a patient from an alphabetical list, even when two patients had the same last name and similar first names.25 We hypothesized that requiring providers to reaffirm patient identification with second and third identifiers when placing orders would reduce wrong-patient electronic orders.

In phase 1 of the present study we assessed the effectiveness of an automated measure of wrong-patient electronic orders. In phase 2 we tested the effectiveness of two interventions designed to reduce these errors in a three-armed randomized controlled concurrent trial.

Methods

The research protocol was designed as a two-phase study within Montefiore Medical Center, an academic medical center in the Bronx, New York, consisting of three general hospitals and one children's hospital with a total of 1500 inpatient beds, all using a Centricity CPOE system (GE Healthcare, Wisconsin, USA).

Phase 1: evaluating the performance of the retract-and-reorder measurement tool

To measure wrong-patient electronic orders, we developed the ‘retract-and-reorder’ measurement tool, which identifies orders (including medications, blood tests, imaging, and general care orders) placed on a patient, retracted within 10 min, and then reordered by the same provider on a different patient within 10 min of the retraction. Orders were not flagged as potential errors if they were reordered on the initial patient by any provider within 24 h of the retraction.
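The detection logic lends itself to a short sketch. The following Python is a minimal illustration under stated assumptions: it works on a small in-memory order log with hypothetical field names (provider_id, patient_id, order_key, timestamps), whereas the production tool was implemented as a Blaze ruleset within the Centricity system (see Statistical analysis). The time windows are parameters, which also supports the sensitivity analysis described below.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Order:
    provider_id: str
    patient_id: str
    order_key: str                      # e.g. drug + dose, or a test code
    placed_at: datetime
    retracted_at: Optional[datetime] = None

def find_retract_and_reorder(orders: List[Order],
                             retract_window: timedelta = timedelta(minutes=10),
                             reorder_window: timedelta = timedelta(minutes=10),
                             exclusion_window: timedelta = timedelta(hours=24)) -> List[Order]:
    """Flag orders matching the retract-and-reorder pattern."""
    events = []
    for o in orders:
        # 1. The order must have been retracted within `retract_window`.
        if o.retracted_at is None or o.retracted_at - o.placed_at > retract_window:
            continue
        # 2. The same provider must reorder the same item on a *different*
        #    patient within `reorder_window` of the retraction.
        reordered_elsewhere = any(
            p.provider_id == o.provider_id and p.patient_id != o.patient_id
            and p.order_key == o.order_key
            and timedelta(0) <= p.placed_at - o.retracted_at <= reorder_window
            for p in orders)
        # 3. Exclude the event if *any* provider reorders the same item on
        #    the original patient within `exclusion_window` of the
        #    retraction, which suggests the retraction was not a
        #    wrong-patient error.
        reordered_on_original = any(
            p is not o and p.patient_id == o.patient_id
            and p.order_key == o.order_key
            and timedelta(0) <= p.placed_at - o.retracted_at <= exclusion_window
            for p in orders)
        if reordered_elsewhere and not reordered_on_original:
            events.append(o)
    return events
```

Varying retract_window and reorder_window over a grid of values reproduces, in spirit, the sensitivity analysis described below.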

To determine whether the retract-and-reorder events identified by the measurement tool represented true wrong-patient orders, we conducted twice-daily semistructured phone interviews with the providers identified by the retract-and-reorder tool from 21 May to 18 August 2010. Providers were contacted within 12 h of the event. After obtaining oral consent, providers were asked whether the event represented a true wrong-patient error and, if so, to classify the type of error as juxtaposition, interruption, or other.

The primary endpoints of phase 1 were the proportion of retract-and-reorder events that were true positive wrong-patient electronic orders based on the provider interviews, and the overall frequency of retract-and-reorder events. A secondary measure was the proportion of true positive wrong-patient electronic orders categorized as juxtaposition errors, interruption errors, or other errors.

In addition, each retract-and-reorder event was independently classified by two study physicians for the potential for harm if the order had been carried out. Harm was classified as clinically significant, serious, or life-threatening.26 Any disagreements were resolved by consensus. To determine if our results were dependent on the specific definition of retract-and-reorder events, we performed a sensitivity analysis using several combinations of time-to-retraction and time-to-reorder intervals.

Phase 2: intervention efficacy trial

After phase 1 was completed, two interventions were developed to prevent wrong-patient electronic orders: an ‘ID-verify alert’ and an ‘ID-reentry function’. The ID-verify alert is triggered on opening the order entry screen and displays the patient's name, gender, and age; with a single click, the provider must acknowledge that they are ordering on the correct patient before they can proceed. The ID-reentry function blocks access to the order entry screen until the provider actively reenters the patient's initials, gender, and age. After establishing the effectiveness of our measurement tool in phase 1, we performed a three-armed randomized controlled concurrent trial to investigate the effectiveness of both interventions in preventing wrong-patient electronic orders compared with controls. As we were not able to measure wrong-patient electronic orders directly, we used the retract-and-reorder tool developed in phase 1 to measure retract-and-reorder events. All providers who placed orders on inpatients from 16 December 2010 to 17 June 2011, including attending physicians, residents, physician assistants, registered nurses, nurse practitioners, and pharmacists, were randomly assigned to receive, for the duration of the trial, either the ID-verify alert, the ID-reentry function, or no intervention. The requirement for consent was waived by the institutional review board. Although it was not possible to blind participants to their study group assignment, all data extraction, management, and analyses were carried out by study personnel unaware of study group assignment. Study group assignments were unblinded only after all analyses were complete.
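To make the ID-reentry function concrete, here is a minimal sketch of the unlock check: the order entry screen stays locked until the entered initials, gender, and age all match the open chart. The data model and function names are hypothetical illustrations, not the Centricity implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Patient:
    first_name: str
    last_name: str
    gender: str                 # 'M' or 'F'
    birth_date: date

def initials(p: Patient) -> str:
    return (p.first_name[0] + p.last_name[0]).upper()

def age_in_years(p: Patient, today: date) -> int:
    years = today.year - p.birth_date.year
    # Subtract a year if this year's birthday has not yet occurred.
    if (today.month, today.day) < (p.birth_date.month, p.birth_date.day):
        years -= 1
    return years

def id_reentry_unlocks(p: Patient, entered_initials: str,
                       entered_gender: str, entered_age: int,
                       today: date) -> bool:
    """Unlock the order entry screen only if all three identifiers match."""
    return (entered_initials.upper() == initials(p)
            and entered_gender.upper() == p.gender.upper()
            and entered_age == age_in_years(p, today))

# A mismatched age keeps the order screen locked.
pt = Patient('Jane', 'Doe', 'F', date(1954, 3, 2))
assert id_reentry_unlocks(pt, 'jd', 'f', 56, date(2011, 1, 15))
assert not id_reentry_unlocks(pt, 'jd', 'f', 57, date(2011, 1, 15))
```

By contrast, the ID-verify alert requires only a single confirming click with no typed input, which is consistent with both its lower cost in time and, plausibly, its weaker effect (see Results).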

The primary endpoint of phase 2 was the proportion of ordering sessions that contained retract-and-reorder events, as a marker for wrong-patient electronic orders. As a secondary measure, we calculated the time each intervention added to a provider's daily work compared with control, measured as the interval between the request for, and the granting of access to, the order entry screen.

The research protocol was approved by the institutional review board, and registered at clinicaltrials.gov (#NCT01262053). A data and safety monitoring board reviewed interim results in a blinded fashion at 6-week intervals, and also reviewed any safety events that came to its attention.

Statistical analysis

In phase 1, the positive predictive value of the retract-and-reorder tool was calculated by dividing the number of confirmed wrong-patient order sessions by the total number of retract-and-reorder sessions surveyed by phone interview. Then, the frequency of wrong-patient electronic orders in 2009 was estimated by multiplying the frequency of retract-and-reorder events by the positive predictive value of the retract-and-reorder measurement tool. Results for phase 1 were reported as rates, with retract-and-reorder events and estimates of wrong-patient electronic orders expressed per 100 000 orders, and grouped by provider type, order type, visit type and potential harm.
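A worked version of this calculation, using the counts reported in the Results (170 of 223 surveyed events confirmed, 6885 events in 2009). The Wald normal-approximation interval is our assumption, as the paper does not state which interval method was used.

```python
from math import sqrt

confirmed, surveyed = 170, 223
ppv = confirmed / surveyed                     # 0.762 -> 76.2%
se = sqrt(ppv * (1 - ppv) / surveyed)
lo, hi = ppv - 1.96 * se, ppv + 1.96 * se      # ~70.6% to ~81.8%
print(f"PPV {ppv:.1%} (95% CI {lo:.1%} to {hi:.1%})")

# Scale the year's retract-and-reorder events by the PPV.
events_2009 = 6885
estimate = events_2009 * ppv   # ~5248; the paper reports 5246, a rounding-level difference
print(f"Estimated wrong-patient orders in 2009: {estimate:.0f}")
```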

In phase 2, demographic descriptions of participant providers were obtained from the medical center administrative database and linked to the provider records in our data. All providers using the CPOE system participated in the study; there was no withdrawal or non-compliance with randomization assignment.

The unit of analysis for the randomized controlled trial was the ordering session, during which providers selected a patient, verified the patient's identification if in an intervention group, and then placed one or more orders. An ordering session was classified as a retract-and-reorder session when it contained at least one retract-and-reorder event. Mixed-model logistic regression with provider-level random effects was used to estimate the OR for retract-and-reorder sessions in the two intervention arms compared with the control arm. Based on the number of providers using the system and the distribution of ordering sessions per provider per day, simulations revealed that the planned study duration of 6 months provided 98% power to detect a 50% reduction in the odds of retract-and-reorder sessions in either of the intervention arms, using a two-tailed z-test of the coefficient of a study-arm indicator at the 0.05 significance level. Because the Data Safety Monitoring Board was authorized to recommend early termination of the study if conclusive results in either direction emerged, an O'Brien–Fleming style alpha-spending plan for four scheduled comparisons relying on the Lan–DeMets algorithm was implemented.27 Results for the three groups were presented to the Data Safety Monitoring Board in a blinded fashion without identifying which group corresponded to which arm of the study.
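To make the alpha-spending plan concrete, the sketch below evaluates the Lan–DeMets spending function of O'Brien–Fleming type, alpha(t) = 2 - 2*Phi(z_{1-alpha/2} / sqrt(t)), at four looks. Equally spaced information fractions are assumed for illustration; the actual fractions depended on the ordering sessions accrued at each Data Safety Monitoring Board review.

```python
from scipy.stats import norm

ALPHA = 0.05                              # overall two-sided level
z = norm.ppf(1 - ALPHA / 2)               # ~1.960

def obf_alpha_spent(t: float) -> float:
    """Cumulative alpha spent at information fraction t (0 < t <= 1)."""
    return 2 * (1 - norm.cdf(z / t ** 0.5))

prev = 0.0
for look, t in enumerate([0.25, 0.50, 0.75, 1.00], start=1):
    spent = obf_alpha_spent(t)
    print(f"look {look} (t={t:.2f}): cumulative alpha {spent:.5f}, "
          f"incremental {spent - prev:.5f}")
    prev = spent
# Essentially zero alpha is spent at the first look, ~0.0056 by the second,
# ~0.0236 by the third, and the full 0.05 at the final analysis, preserving
# almost all of the type I error for the end of the trial.
```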

To study potential delays introduced by the interventions, we extracted from the order entry system the duration of the interval from ordering session request to unlocking of the order pad for every ordering session in the study. We report the mean additional time per ordering session in each study arm.

Datasets were collected using a Fair Isaac Blaze ruleset embedded within the GE Healthcare Centricity Enterprise electronic medical record, versions 6.1 and 6.6, and analyzed using Stata V.11.2MP after importation with Stat/Transfer V.9.

Results

Phase 1: performance of the retract-and-reorder measurement tool

We interviewed 236 providers identified by the automated retract-and-reorder measurement tool. Of those contacted, 13 providers did not remember the details of the event and were excluded. Of the remaining 223 providers, 170 acknowledged erroneously placing an electronic order on the wrong patient, resulting in a positive predictive value of 76.2% (95% CI 70.6% to 81.9%). A sensitivity analysis testing several combinations of time to retraction and time to reorder demonstrated similar positive predictive values (data not shown). Of the 170 wrong-patient orders identified, 18 (10.6%) were classified as juxtaposition errors, 137 (80.6%) as interruption errors, and 15 (8.8%) as other.

Estimated frequency of wrong-patient order errors

We reviewed all nine million electronic orders placed by 6147 providers at Montefiore in 2009. We found 6885 retract-and-reorder events, attributed to 1388 providers, with a mean time to retraction of 1 min 18 s. Table 1 lists the frequency of retract-and-reorder events by provider type, order type, visit type, and degree of potential harm. Applying the positive predictive value found in phase 1, we estimated that the 6885 retract-and-reorder events represented 5246 wrong-patient electronic orders placed in 2009, an average of 14 wrong-patient electronic orders per day. By this measure, approximately one in six providers placed at least one electronic order on the wrong patient, and approximately one in 37 patients admitted to the hospital had an order placed for them that was intended for another patient. All of these errors were near misses, self-caught by the provider before causing patient harm.
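The derived rates in this paragraph follow from the stated figures with back-of-envelope arithmetic; scaling the provider count by the positive predictive value is our assumption about how the "one in six" figure was obtained.

```python
events, orders, providers = 6885, 9_000_000, 6147
providers_with_events = 1388
ppv = 170 / 223                                  # from phase 1

wrong_patient = events * ppv                     # ~5248 (paper: 5246)
print(f"per day: {wrong_patient / 365:.1f}")     # ~14.4 -> '14 per day'
print(f"per 100,000 orders: {wrong_patient / orders * 1e5:.0f}")   # ~58

# 'One in six providers': scale providers with >=1 event by the PPV.
providers_with_error = providers_with_events * ppv
print(f"~1 in {providers / providers_with_error:.1f} providers")   # ~1 in 5.8
```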

Table 1

Retract-and-reorder events and wrong-patient orders in 2009 by provider type, order type, visit type, and degree of harm

Phase 2: intervention trial

During the randomized intervention trial, we examined a total of 901 776 ordering sessions, which represented 3 281 544 inpatient orders, among 4028 providers from 16 December 2010 to 17 June 2011 (figure 1). Table 2 depicts provider demographic characteristics.

Figure 1

Provider enrollment and randomization.

Table 2

Demographic characteristics of providers in the randomized controlled trial

The effects of the interventions on retract-and-reorder events are presented in table 3. The rates of retract-and-reorder sessions in the three study arms were 1.5 per 1000 ordering sessions in the control group, 1.2 per 1000 in the ID-verify alert group, and 0.9 per 1000 in the ID-reentry function group. Compared with control, the ID-verify alert significantly reduced the odds of a retract-and-reorder event (OR 0.84, 95% CI 0.72 to 0.98). The ID-reentry function significantly reduced the odds of a retract-and-reorder event by a larger magnitude (OR 0.60, 95% CI 0.50 to 0.71).
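For readers who want to connect the session rates to the odds ratios: a crude, unadjusted odds ratio computed from these pooled rates comes close to the reported estimates, which were additionally adjusted for provider-level clustering in the mixed model; the small gap for the ID-verify alert (~0.80 crude vs 0.84 adjusted) reflects that adjustment.

```python
def odds(rate_per_1000: float) -> float:
    p = rate_per_1000 / 1000
    return p / (1 - p)

control_odds = odds(1.5)
print(f"ID-verify crude OR:  {odds(1.2) / control_odds:.2f}")   # ~0.80 (reported: 0.84)
print(f"ID-reentry crude OR: {odds(0.9) / control_odds:.2f}")   # ~0.60 (reported: 0.60)
```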

Table 3

Results of randomized controlled trial evaluating two interventions designed to reduce wrong-patient electronic orders

The mean additional ordering time per session resulting from the interventions was 0.5 s for the ID-verify alert, and 6.6 s for the ID-reentry function (table 3).

Discussion

We developed a tool to measure wrong-patient electronic orders in CPOE systems and evaluated its performance in identifying errors in our own system. Using this tool, we identified 6885 retract-and-reorder events in 1 year, representing an estimated 5246 wrong-patient orders. Although all were near miss errors, previous research has demonstrated that near miss errors share a common causal pathway with errors that cause harm.23 Wrong-patient errors may thus represent a significantly larger hazard than previously reported.16 We further demonstrated that an ID-verify alert (single-click confirmation of patient identity) reduced wrong-patient electronic orders by 16%, while an active ID-reentry function (requiring active reentry of identifiers) achieved a 41% reduction. All hospitals that implement CPOE systems should consider measuring retract-and-reorder events to estimate the frequency of wrong-patient orders, and should optimize their software to minimize these errors.

Using semistructured phone interviews, we identified 170 of 223 retract-and-reorder events as wrong-patient electronic orders, a positive predictive value of 76.2% (95% CI 70.6% to 81.9%). False positive retract-and-reorder events often had a common explanation: a provider on ‘total parenteral nutrition (TPN) rounds’ in the neonatal intensive care unit, for example, cancelled a TPN order shortly after placing it for a reason other than a wrong-patient error, and then moved on to the next patient in need of TPN. Providers on ‘warfarin rounds’ and ‘potassium rounds’ gave similar explanations. Although these events involved rapidly retracted orders, they did not represent wrong-patient errors.

In addition to estimating the frequency of wrong-patient electronic orders, the automated retract-and-reorder tool allowed us to interview providers involved in these events in near real-time to determine the cause of the error. Although previous research highlighted juxtaposition errors as a prominent mechanism,9–11,13 we found that only 11% of the errors were reported to result from this cause. The more commonly reported cause was interruptions, accounting for 81% of the errors. The ease of toggling between patients in CPOE systems and the frequent interruptions of a busy hospital unit were found to induce wrong-patient electronic orders. However, providers may be more likely to catch interruption errors than other types of errors. If so, we have underestimated the frequency of non-interruption errors. Regardless of the mechanism of error, our interventions were designed to avert wrong-patient electronic orders by requiring that providers reaffirm patient identification before placing orders.

Through our clinical trial, we evaluated two risk-reduction strategies. The ID-reentry function proved more effective than the ID-verify alert, with error reductions of 41% and 16%, respectively. Although a 41% error reduction is substantial, the ID-reentry function did not eliminate retract-and-reorder events completely. This may be due partly to some providers inattentively reentering patients' initials, gender, and age without carefully verifying identity. In addition, 23.8% of retract-and-reorder events did not represent wrong-patient errors, and thus were unlikely to be affected by the ID-reentry function. Human factors and usability experts, who are trained to optimize the integration of computerized technology into clinical practice,28–32 may be called on to help develop improved solutions for preventing wrong-patient electronic orders. The retract-and-reorder measurement tool will allow these experts to test their strategies, an option not previously available to them.

Although our ID-reentry function proved effective, it added an average of 6.6 s to every ordering session. For a single session this is a trivial amount of time, but in the aggregate it represents approximately 3300 h per year at our institution. This additional burden on providers' busy days may itself lead to unforeseen errors. The balance between the errors prevented by reverifying patient identification and the potential harm of further burdening providers was not evaluated, and requires further study. Despite the additional time, the interventions were well tolerated by providers: the trial lasted 6 months and covered over three million orders, with minimal complaints to the Clinical Information Systems Helpdesk or to leadership.
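The 3300 h figure can be reconstructed from numbers reported above, assuming the ID-reentry function ran on every ordering session (in the trial itself, only one arm experienced it):

```python
sessions_per_half_year = 901_776      # observed in the 6-month trial
extra_seconds = 6.6                   # mean added time per session
hours_per_year = sessions_per_half_year * 2 * extra_seconds / 3600
print(f"~{hours_per_year:,.0f} hours per year")   # ~3,307 (paper: ~3,300)
```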

We estimated that 5246 wrong-patient errors occurred in 2009 in our institution, substantially more than the average of nine errors per facility per 7-month period identified in the 120 hospitals with physician order entry systems that submitted data to the Pharmacopeia voluntary reporting system.16 This discrepancy is most likely due to the retract-and-reorder tool identifying near miss errors, which occur more frequently,23 and the increased sensitivity of automated surveillance systems compared with voluntary reporting systems for identifying errors.19–21 In our study, the automated retract-and-reorder tool provided the reliable data needed to power a large-scale randomized controlled trial testing multiple interventions.

A recent study by Wilcox et al33 found a rate of 51 per 100 000 electronic notes written in the wrong patient's record, which is consistent with our finding that an estimated 58 per 100 000 electronic orders were placed on the wrong patient. As in our study, Wilcox et al33 relied on self-corrections to identify errors, and used an intervention similar to our ID-verify alert to achieve an estimated 40% reduction. The consistency between these two studies helps to substantiate the existence of a patient identification hazard in electronic health record technology.

It is interesting to note that nurses had a lower retract-and-reorder rate than other providers, while radiology orders and outpatient visits had higher retract-and-reorder rates than their comparison groups (table 1). Why particular provider types, order types, and locations had varying retract-and-reorder rates should be the subject of further study.

In November 2011, the Institute of Medicine published a report highlighting the potential for health information technology (HIT) systems to cause harm, and called for an action and surveillance plan within 12 months for assessing the impact of HIT on patient safety and minimizing the risks of its implementation and use.34 Specific recommendations include developing standardized testing procedures to assess the safety of HIT products, and having the Agency for Healthcare Research and Quality and the Office of the National Coordinator promote post-deployment testing for HIT-related patient safety risks. The retract-and-reorder measurement tool can be used to assess HIT systems for the hazard of wrong-patient electronic orders, and may be considered part of a comprehensive HIT safety plan.

This study has several limitations. First, the results are limited to four hospitals using the same CPOE system, and may not generalize to institutions using different provider order entry systems. Second, we developed a method for estimating near miss errors to identify and monitor wrong-patient electronic order hazards, but we did not measure errors that reach the patient and cause harm. Research has demonstrated that near miss errors and errors that cause harm share the same root cause pathway,23 suggesting that the interventions probably prevent both types of errors. Additional research verifying a shared root cause pathway between near miss wrong-patient errors that are self-caught by the provider and wrong-patient errors that reach the patient and cause harm is warranted. Finally, providers in the control group may have learned the importance of reverifying patient identification before placing orders by observing their colleagues in the intervention groups, potentially causing contamination bias.

In summary, we found that a retract-and-reorder tool was a valid measure of near miss wrong-patient orders. When we applied the tool in our system we found that near miss wrong-patient orders occur frequently and may pose a major safety hazard. All hospitals that implement CPOE systems should measure retract-and-reorder events to estimate the frequency of wrong-patient orders in their system, and optimize their software to minimize these errors.

Acknowledgments

The authors would like to thank Ellie Schoenbaum (Albert Einstein College of Medicine), Tejal Gandhi (Partners Healthcare System), Barbara Rabin Fastmen (Mount Sinai School of Medicine), James Bagian (University of Michigan), Andrea Hassol (Abt Associates), Robert Frank (Wayne State University School of Medicine), and the CLG Research Group (Division of General Internal Medicine, Montefiore Medical Center) for their thoughtful manuscript feedback; and Purvi Shah, Ira Sussman, Frank Sosnowski, Jane O'Rourke, and Maureen Scanlan (all of Montefiore Medical Center), for participating in the Data Safety Monitoring Board; and Tomasz Motyka, Gillian Wendt, and Debbie Moshief (all of Emerging Health Information Technology) for assistance with data extraction.

Footnotes

  • Funding This work was supported by institutional funds from Montefiore Medical Center, and in part by CTSA grants UL1 RR025750, KL2 RR025749 and TL1 RR025748 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH), and the NIH Roadmap for Medical Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of Montefiore Medical Center, the NCRR or the NIH. Montefiore Medical Center had no role in the design and conduct of the study; the collection, analysis, or interpretation of the data; or the preparation, review, or approval of the manuscript.

  • Competing interests None.

  • Ethics approval Ethics approval was provided by Montefiore Medical Center institutional review board.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement The dataset used for this study includes one entry for each order placed at Montefiore Medical Center from 1 January 2009 to 20 June 2011. Only study personnel (listed as authors) had access to the data. The data have not been shared with any other entity.

References
