Ability of pharmacy clinical decision-support software to alert users about clinically important drug–drug interactions
- Kim R Saverno1,
- Lisa E Hines2,
- Terri L Warholak1,2,
- Amy J Grizzle2,
- Lauren Babits3,
- Courtney Clark3,
- Ann M Taylor2,
- Daniel C Malone1,2
- 1Department of Pharmacy Practice and Science, The University of Arizona College of Pharmacy, Tucson, Arizona, USA
- 2Center for Health Outcomes and PharmacoEconomic Research (HOPE), The University of Arizona College of Pharmacy, Tucson, Arizona, USA
- 3The University of Arizona College of Pharmacy, Tucson, Arizona, USA
- Correspondence to Dr Daniel C Malone, University of Arizona, PO Box 210202, 1295N Martin, Drachman Hall B307F, Tucson, AZ 85721-0202, USA;
- Received 26 July 2010
- Accepted 3 November 2010
- Published Online First 3 December 2010
Objective Pharmacy clinical decision-support (CDS) software that contains drug–drug interaction (DDI) information may augment pharmacists' ability to detect clinically significant interactions. However, studies indicate these systems may miss some important interactions. The purpose of this study was to assess the ability of pharmacy CDS programs to detect clinically important DDIs.
Design Researchers made on-site visits to 64 participating Arizona pharmacies between December 2008 and November 2009 to analyze the ability of pharmacy information systems and associated CDS to detect DDIs. Software evaluation was conducted to determine whether DDI alerts arose from prescription orders entered into the pharmacy computer systems for a standardized fictitious patient. The fictitious patient's orders consisted of 18 different medications forming 19 drug pairs: 13 clinically significant DDIs and six non-interacting drug pairs.
Measurements The sensitivity, specificity, positive predictive value, negative predictive value, and percentage of correct responses were measured for each of the pharmacy CDS systems.
Results Only 18 (28%) of the 64 pharmacies correctly identified eligible interactions and non-interactions. The median percentage of correct DDI responses was 89% (range 47–100%) for participating pharmacies. The median sensitivity to detect well-established interactions was 0.85 (range 0.23–1.0); median specificity was 1.0 (range 0.83–1.0); median positive predictive value was 1.0 (range 0.88–1.0); and median negative predictive value was 0.75 (range 0.38–1.0).
Conclusions These study results indicate that many pharmacy clinical decision-support systems perform less than optimally with respect to identifying well-known, clinically relevant interactions. Comprehensive system improvements regarding the manner in which pharmacy information systems identify potential DDIs are warranted.
According to the 2007 Institute of Medicine (IOM) report entitled, Preventing Medication Errors: Quality Chasm Series, approximately 1.5 million preventable adverse drug events (ADEs) occur annually in the USA.1 Many ADEs are unavoidable; however, recognition of potentially interacting drug pairs, and subsequent appropriate action, is essential to protecting the public's health and safety. Despite this, studies show that prescribers' ability to recognize well-documented drug interactions is limited.2 3 Prescribers commonly rely on pharmacists as a key source of drug-interaction information.3 4 However, research indicates that pharmacists' ability to identify important drug interactions is also lacking.5
Drug-interaction alerting is one of several types of computerized medication-related clinical decision-support (CDS) used by pharmacists to improve patient safety. Although DDI screening software may augment pharmacists' ability to detect clinically significant interactions, these systems are far from fail-safe, oftentimes alerting pharmacists of clinically insignificant drug interactions or failing to alert them about clinically important interactions.6 7 Too many intrusive alerts are mentally draining and time-consuming, and result in providers ignoring both relevant and irrelevant warnings (‘alert fatigue’).2 8 9 Although problems with high alert over-ride rates and alert fatigue are frequently discussed within the context of computerized prescriber order entry (CPOE), pharmacists have long experienced such challenges associated with medication-related CDS.10–15 For example, one study reported that most pharmacists over-rode the majority of DDI alerts (mean 73.8%).15
A limited number of reports focus on medication-related CDS use by pharmacists. The purpose of this study was to evaluate the performance of DDI software programs currently in use in pharmacy practice.
This evaluation was conducted at participating community pharmacies throughout Arizona between December 2008 and November 2009. Pharmacies were eligible to participate if they were a University of Arizona College of Pharmacy rotation site.
Sixty-four pharmacies agreed to participate. Pharmacies were categorized as community, inpatient hospital or other (ie, non-hospital institutional, long-term care). Initial recruitment of retail ‘chain pharmacies,’ defined by the National Association of Chain Drug Stores as pharmacies belonging to a company that operates at least four retail pharmacies,16 was limited to inclusion of two urban and one rural site per chain to prevent oversampling of this type of pharmacy. Recognizing that many of Arizona's retail pharmacies are part of national or regional chain operations and that all pharmacies under the same ownership are likely to use the same software systems, subsequent recruitment of retail chain pharmacies within the same chain ensued only when the results of the initial three sites differed; recruitment attempts continued until the DDI results from three pharmacies were identical. This study was deemed exempt from Human Subject Protection by University of Arizona's Institutional Review Board.
Researchers made on-site visits to participating pharmacies to analyze their respective information systems and associated CDS software's ability to detect DDIs. To maintain consistency in data collection, detailed information about the computer-generated alerts was recorded on data-collection forms and augmented by screen prints, for all participating pharmacies. Specifically, software evaluation was conducted by identifying DDI alerts that arose from prescription orders entered into respective software systems for a standardized fictitious patient (figure 1). The fictitious orders consisted of 18 different medications forming 19 drug pairs: 13 clinically significant DDIs and six non-interacting drug pairs.
The interacting medications, mainly oral cardiovascular medications, were chosen by consensus among the researchers based on their widespread use, clinical importance, propensity to cause adverse events, and level of documentation. A list of clinically important DDIs developed by Malone et al served as a starting-point for item selection,17 with a focus on relatively frequently coprescribed interacting drug pairs.18 Following a review of the primary literature and DDI-specific resources (Drug Interaction Facts,19 Drug Interactions Analysis and Management,20 The Top 100 Drug Interactions: A Guide to Patient Management,21 and Stockley's Drug Interactions22), an evidence-based summary was developed for each of the 13 interacting drug pairs. The four DDI compendia were selected on the basis that they are commonly used by clinicians, have been used in previous research, and were readily available to the authors.
The absence of an interaction was also verified for the remaining six drug pairs using the primary and tertiary literature. Two of the six non-interacting drug pairs contained ophthalmic erythromycin ointment. This medication was chosen to determine whether pharmacy software systems could discern interactions based on route of administration: systemic absorption of ophthalmic erythromycin is typically minimal, so it should not interact with systemically administered medications. Pravastatin was included to determine whether pharmacy software classified interactions based on a class effect (some statins interact with warfarin, whereas pravastatin does not) or on the interaction potential of the drug itself.
The presence or absence of an alert, regardless of its severity level, was the basis for determining whether a response was classified as correct. Sensitivity refers to the software program's ability to accurately recognize clinically significant interacting drug pairs, and was calculated by dividing the number of true positives by the number of clinically significant interacting drug pairs. Specificity refers to the software program's ability to disregard any pairs of medications not defined as clinically significant; it was calculated by dividing the number of true negatives by the number of non-interacting drug pairs. Positive predictive value (PPV) is a measure of the usefulness of an alert: the probability that a DDI alert represents a true DDI. It was calculated by dividing the number of true positives by the sum of true positives and false positives. Negative predictive value (NPV) is the probability that the absence of a DDI alert represents a true absence of a DDI; it was calculated by dividing the number of true negatives by the sum of true negatives and false negatives. The number of correct responses, sensitivity, specificity, PPV, and NPV were calculated for individual pharmacies. In addition, the median sensitivity, specificity, PPV, NPV, and percentage of correct responses were calculated for all participating pharmacies combined.
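The calculations described above can be sketched as a short, self-contained Python function. The alert counts in the example are hypothetical, chosen only to illustrate the arithmetic (a pharmacy that alerts on 11 of the 13 interacting pairs and stays silent on all six non-interacting pairs happens to land near the medians reported in the Results):

```python
def ddi_performance(tp, fp, tn, fn):
    """Compute DDI-alert performance measures from 2x2 counts.

    tp: alerts fired for clinically significant interacting pairs
    fp: alerts fired for non-interacting pairs
    tn: no alert for non-interacting pairs
    fn: no alert for clinically significant interacting pairs
    """
    return {
        "sensitivity": tp / (tp + fn),  # true positives / all significant DDIs
        "specificity": tn / (tn + fp),  # true negatives / all non-interacting pairs
        "ppv": tp / (tp + fp),          # P(true DDI | alert fired)
        "npv": tn / (tn + fn),          # P(no DDI | no alert)
        "percent_correct": 100 * (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical pharmacy: alerts on 11 of 13 interacting pairs,
# correctly silent on all 6 non-interacting pairs.
print(ddi_performance(tp=11, fp=0, tn=6, fn=2))
```

With these counts the function yields a sensitivity of about 0.85, specificity and PPV of 1.0, NPV of 0.75, and roughly 89% correct responses.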
In addition to whether an alert was generated for a specific DDI, the following information was gathered at each pharmacy: date of visit; location (urban or rural); software vendor; software version; date of most recent software update; and whether the pharmacist on duty believed any categories/levels of interactions were ‘suppressed’ or ‘turned off.’
A post-hoc analysis using a Kruskal–Wallis test was undertaken to determine if there were any statistically significant differences between community, inpatient, and ‘other’ pharmacies with respect to DDI sensitivity and specificity. Probability values less than 0.05 were considered statistically significant.
Descriptive statistics were summarized using Microsoft Excel 2007. Statistical comparisons were made using Stata software, version 10.0.
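As an illustration of the comparison described above, the Kruskal–Wallis H statistic can be computed directly. The implementation below is a minimal standard-library sketch (the original analysis used Stata), and the per-pharmacy sensitivity values are hypothetical, not study data:

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic with mid-ranks and tie correction.

    Compare the result against the chi-square critical value with
    k - 1 degrees of freedom (5.99 for k = 3 groups at alpha = 0.05).
    """
    data = sorted(chain.from_iterable(groups))
    n = len(data)
    ranks, tie_sum, i = {}, 0, 0
    while i < n:  # assign the average (mid) rank to each run of tied values
        j = i
        while j < n and data[j] == data[i]:
            j += 1
        ranks[data[i]] = (i + 1 + j) / 2
        tie_sum += (j - i) ** 3 - (j - i)
        i = j
    h = 12 / (n * (n + 1)) * sum(
        sum(ranks[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    return h / (1 - tie_sum / (n ** 3 - n)) if tie_sum else h

# Hypothetical per-pharmacy DDI sensitivities by pharmacy type.
community = [0.85, 0.92, 0.77, 1.00, 0.85]
inpatient = [0.62, 0.54, 0.77, 0.85]
other = [0.92, 0.69, 0.85]
h = kruskal_wallis_h(community, inpatient, other)
print(f"H = {h:.2f}; significant at alpha=0.05 if H > 5.99 (chi-square, 2 df)")
```

For routine use, `scipy.stats.kruskal` computes the same statistic and also returns a p-value.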
Sixty-four pharmacies throughout Arizona participated in the study. Of those participating, 40 (63%) were community pharmacies, 14 (22%) were inpatient hospital pharmacies, and 10 (16%) pharmacies were classified as ‘other.’ Of the 40 community pharmacies, approximately two-thirds (n=27) were classified as retail ‘chain pharmacies.’ Those classified as ‘other’ included two infusion pharmacies, two pharmacies located in penal institutions, one long-term care pharmacy, two pharmacies associated with rehabilitative-care facilities, and three Indian Health Service facilities.
For all the participating pharmacies, the median percentage of correct DDI responses was 89% (range 47%–100%). Table 1 shows the number of correct responses by drug combination and pharmacy type.
Four of the surveyed pharmacies (6%) were unable to enter at least one of the medications in the fictitious profile; either the medication was not on the formulary, or it was unavailable in their drug database. Consequently, the results for these pharmacies were calculated based on the drug pairs that were successfully entered into their respective systems. Exclusion of a particular medication within a pharmacy software system negatively affected detection of two to four drug pairs, depending on the pharmacy.
Eighteen of 64 pharmacies (28%) correctly classified all eligible interacting and non-interacting drug pairs. The digoxin and itraconazole pair was misclassified more often than any other interacting pair: only 45% (27 of 60) of participating pharmacies correctly classified this well-documented pair as a clinically significant drug interaction.23–30 Five of the six non-interacting drug pairs were correctly classified by all of the pharmacies. Carbamazepine and ophthalmic erythromycin was the most commonly misclassified non-interacting pair: seven (11%) pharmacies misidentified this pair—of these, six were community-based, and one was an inpatient pharmacy.
Participating pharmacies' DDI software programs had a median sensitivity of 0.85 (range 0.23–1.0). These same programs had a median specificity of 1.0 (range 0.83–1.0), a median PPV of 1.0 (range 0.88–1.0), and a median NPV of 0.75 (range 0.38–1.0). Table 2 provides a summary of the performance of drug-interaction computer software by pharmacy type. The post-hoc analysis failed to detect any statistically significant differences in sensitivity or specificity between the community, inpatient, and ‘other’ pharmacies. The study may have been underpowered to detect such differences, since this was not a main focus of the research.
Of the pharmacies surveyed, a total of 24 different software vendors were represented, including both commercial and proprietary systems; Health Business Systems, PDX, and QS/1 were the three most commonly utilized software vendors. Performance of the DDI software systems varied both within and between vendors. Of the software vendors utilized by five or more pharmacies, none of the results were consistent across all sites. Two of the three most common vendors had perfect specificity across all sites (1.00), while the third vendor's specificity varied from 0.83 to 1.00. To exemplify the degree of variation observed within a single software vendor, the sensitivity of the software of one of the most common vendors in this study ranged from 0.54 to 1.00.
Of the pharmacists on duty who were able to respond to more detailed questioning about their software systems, fewer than half (40%) confirmed the suppression of some DDI alerts based on severity. The majority of pharmacists on duty at the time of data collection were knowledgeable about the frequency with which their software was updated; monthly updates were most common (44%).
Research has demonstrated that without the use of a computer, DDI knowledge is poor among health professionals, including both physicians and pharmacists.2 3 5 Drug-interaction screening, a CDS tool, has been a mainstay of pharmacy software packages for many years and is now available in many computerized prescriber order-entry systems. These provider- and pharmacist-based DDI CDS systems share many similarities, including their source of drug information: the knowledge bases are created, maintained, and sold by a small number of firms (eg, First DataBank, Wolters Kluwer Health, Cerner, Thomson Reuters Healthcare). However, each firm typically relies on its own classification system for interactions.
Recently, much attention has focused on features that constitute a useful CDS tool, in part, due to the Office of the National Coordinator for Health Information Technology's task of advancing the adoption of health technology.31 The general consensus, from a recent CDS workshop, is that information presented by CDS must be pertinent and beneficial, and prevent overburdening the user with irrelevant alerts.31 For instance, drug-interaction screening was specifically mentioned as an area in which the sensitivity and specificity of software settings may have a significant impact on the quality and number of alerts presented. In the current study, the researchers observed a high level of variability in the performance of pharmacy software systems and, more importantly, the failure of some systems to detect well-documented DDIs.
This CDS software evaluation study provides insight into the poor performance of pharmacy systems in alerting pharmacists to clinically significant drug interactions. Additionally, the current results raise broader public safety concerns about the manner in which potential DDIs are detected within CDS systems.
Metzger et al evaluated the performance of electronic alerts arising from computerized prescriber order entry in hospitals, further exemplifying potential safety issues associated with the use of CDS systems.32 They also utilized fictitious patient profiles to examine the ability of software systems to detect various medication safety issues. However, Metzger et al examined a more comprehensive set of medication alerts, including but not limited to drug-allergy, drug-diagnosis, and therapeutic-duplication alerts. Despite differences in settings and types of alerts examined, both studies demonstrated significant variability in CDS system performance between and within vendors as well as the failure of some systems to detect clinically significant medication safety issues.
Suboptimal performance of the pharmacy DDI software systems in this study was confirmed, in part, by the failure of these systems to detect approximately one in seven clinically significant DDIs. The most poorly performing software system had a sensitivity of 0.23, meaning that approximately 77% of the DDIs evaluated would go undetected. Community pharmacies failed to detect approximately one in 12 clinically significant DDIs, while hospital pharmacy systems failed to detect approximately one in four DDIs. In addition, systems in other settings incorrectly categorized approximately one in seven of the DDIs evaluated. Based on the current study, it is evident that additional efforts are needed to improve the ability of pharmacy software systems to detect clinically significant DDIs.
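The miss rates quoted above follow directly from sensitivity (miss rate = 1 − sensitivity). A quick illustrative check, using the sensitivities reported in this study:

```python
def miss_rate(sensitivity):
    """Fraction of clinically significant DDIs that generate no alert."""
    return 1 - sensitivity

# Median system across all pharmacies (sensitivity 0.85):
print(f"{miss_rate(0.85):.0%} missed, about one in {round(1 / miss_rate(0.85))} DDIs")
# Worst-performing system (sensitivity 0.23):
print(f"{miss_rate(0.23):.0%} missed")
```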
Prior research on the accuracy and reliability of pharmacy software programs suggests that poor performance is due, in part, to the inability of these systems to warn pharmacists of potentially clinically significant DDIs.6 7 Cavuto et al determined the likelihood that pharmacists would fill prescriptions for two medications whose concurrent use was contraindicated (terfenadine and ketoconazole).33 Of the 48 pharmacies with computerized DDI screening, approximately one-third filled both prescriptions. Hazlet et al evaluated nine community-pharmacy software programs and found that one-third of DDIs went undetected.7 Another study conducted by this research group in Tucson, Arizona evaluated pharmacy information systems in eight community pharmacies and five hospital pharmacies.6 For community pharmacies, the median sensitivity and specificity were 0.88 and 0.91, respectively; hospital pharmacies had a median sensitivity and specificity of 0.38 and 0.95, respectively.
The current study confirms continued variability in system performance across and within pharmacy organizations. This variability in CDS software system performance may be due, in part, to software customization by its users, at the pharmacist, pharmacy, or corporate level. Specifically, clinicians may ‘customize’ the software by suppressing certain categories or tiers of drug-interaction warnings in an attempt to minimize alert fatigue, a phenomenon caused by excessive warnings including irrelevant, non-significant, or repetitious alerts.11 34 35 Alert fatigue may compromise patient safety, especially if the CDS program presents excessive warnings (ie, low signal-to-noise ratios), thus causing clinician desensitization to warnings and even over-riding clinically significant warnings.9 The literature contains many studies documenting widespread dissatisfaction with alerts perceived as inappropriate, inconsequential, disruptive, or redundant and high rates (up to 89%) of ‘over-ridden’ DDI alerts.2 4 8 14 15 36–39 Despite well-documented research on issues with CDS program alerts, including the current study results, clinicians continue to face challenges when using this type of software.
The drug knowledge database may also be a source of variability in software performance; the database is integrated into the software and serves as a basis of its drug information. Currently, no universal standard exists for classifying the severity of drug interactions.40 Furthermore, many drug combinations have not been thoroughly studied; case reports, in vitro studies, and retrospective reviews are common in the drug-interaction literature.41 Consequently, research has demonstrated substantial variation among the major drug compendia regarding inclusion of drug interactions and assignment of severity levels to known interactions.40 42 In a comparison study of four drug-interaction compendia, only 2% of the major DDIs were included in all four compendia.40 The DDI knowledge bases tend to be highly inclusive with respect to drug-interaction alerts, focusing on the scope of drug-interaction alert coverage rather than the clinical significance and estimated rate of occurrence.8 43–45 The tendency for DDI knowledge bases to be more inclusive may be due, in part, to perceived potential for legal liability.
Compared with previous research, the methodology employed in the current study imparted several advantages. With the unit of analysis at the individual pharmacy level, no central corporate locations were included in the analysis. This design feature enabled researchers to examine more closely the variability within pharmacy chains. The relatively large number and variety of participating sites improved the generalizability of the results. Furthermore, researchers recorded all data on site, thereby mitigating opportunities for participants to misrepresent results.
There are limitations to this study that need to be considered when interpreting the results. The fictitious patient profile reflected a limited set of available medications; results are likely to vary based on the set of interactions evaluated. In addition, using a single patient profile may have caused many pharmacy software systems to generate additional DDI alerts (eg, amiodarone and clarithromycin) as well as alerts for therapeutic duplication (eg, pravastatin and simvastatin). However, these alerts for non-targeted drug combinations were not documented or analyzed. Some variability in the systems' performance may be due, in part, to software updates for clinical evidence of interaction potential that may have occurred during the data-collection period (almost 1 year). For example, pharmacies whose site visits occurred earlier in the data collection period may have been using an older software version than those pharmacies visited in the latter part of the study. Updated software may account for some of the differences in the number of DDIs detected by the various software systems over the study period and, in particular, between the early and later site visits.
The generalizability of the results may be limited because a non-random process was used to recruit pharmacies. All participating pharmacies were affiliated with the university and located within the state of Arizona. Computer software systems installed in other pharmacies may not be comparable; however, many of the pharmacies evaluated were national retail chain pharmacies. In addition, future studies should include objective verification of the pharmacy software vendor, ascertainment of knowledge base vendor used by the pharmacy software, and the date of the last update.
We found that comprehensive system improvements are essential to improve the manner in which pharmacy information systems identify potential DDIs. Moreover, issues surrounding the reliability of pharmacy systems are likely to extend to other systems that utilize drug-interaction screening algorithms. Additional research is warranted to improve pharmacists' ability to detect DDIs, to prevent potential adverse events, and to protect patient health and safety.
The authors thank the participating pharmacists and pharmacies.
This study was presented as a poster at the American Medical Informatics Association AMIA Now! 2010 Conference, Phoenix, AZ, May 25–27, 2010.
Funding This study was funded by the Agency for Healthcare Research and Quality (# 5 U18 HS017001-02).
Ethics approval This study was conducted with the approval of the University of Arizona.
Provenance and peer review Not commissioned; externally peer reviewed.