J Am Med Inform Assoc 16:637-644 doi:10.1197/jamia.M3111
  • Research Paper

Clinical Decision Support Capabilities of Commercially-available Clinical Information Systems

  1. Adam Wright, PhD (a, b, c)
  2. Dean F Sittig, PhD (d)
  3. Joan S Ash, PhD (e)
  4. Sapna Sharma, MBI (e)
  5. Justine E Pang (a, b)
  6. Blackford Middleton, MD, MPH, MSc (a, b, c)
  1. (a) Partners HealthCare, Boston, MA
  2. (b) Brigham and Women's Hospital, Boston, MA
  3. (c) Harvard Medical School, Boston, MA
  4. (d) UT–Memorial Hermann Center for Healthcare Quality and Safety, University of Texas School of Health Information Sciences at Houston, Houston, TX
  5. (e) Oregon Health & Science University, Portland, OR
  1. Correspondence: Adam Wright, PhD, Partners HealthCare System, 93 Worcester St, Wellesley, MA 02481 (Email: awright5@partners.org).
  • Received 17 December 2008
  • Accepted 28 May 2009

Abstract

Background The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems.

Purpose The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems.

Methods The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features.

Results Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user's workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert.

Discussion The principal finding relates to system-by-system variability. The best system in our analysis had only a single missing feature (of 42 total), while the worst had eighteen. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern.

Conclusions These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems and certification bodies.

Introduction and Background

Clinical Decision Support

Clinical decision support (CDS) systems are a key part of clinical information systems designed to aid clinician decision making during the process of care. While CDS can be delivered via a variety of media, including paper, the term CDS is most widely used for computer-based interventions delivered through clinical information systems. Common types of clinical decision support include drug-interaction checking,1 preventive care reminders2 and adverse drug event detection.3 There is substantial evidence to suggest that clinical decision support systems, when well designed and effectively used, can be powerful tools for improving the quality of patient care and preventing errors and omissions.4 5 6 7 8 9 10 11 12

Challenges in Implementing Decision Support

Although the evidence for the potential effectiveness of well-designed clinical decision support is strong, adoption of clinical decision support has been somewhat limited outside of a relatively small number of academic medical centers and integrated healthcare delivery networks.13 14 A variety of causes for this limited adoption have been posited, including:

  • The significant resources required to develop, curate and maintain large knowledge bases of clinical decision support content.15

  • A lack of technical standards and approaches that facilitate effective sharing of clinical decision support content.16

  • The difficulty of integrating clinical decision support into clinical workflow effectively and unobtrusively while avoiding alert fatigue.17

  • Clinician fears of “cookbook” medicine.18

  • A lack of clear business case for use of clinical decision support.19 20

  • The relatively small number of hospitals and practices that have CPOE or EHRs.21

Clinical Decision Support Capabilities of Clinical Information Systems

In addition to challenges relating to decision support content and workflow, many sites have reported significant limitations in the ability of their clinical information systems to accommodate decision support. Although decision support systems can be standalone,22 the most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems.9 Such integrated systems allow for proactive, data-driven decision support;22 however, such integration makes significant feature demands on clinical information systems. Consider, for example, a decision support rule regarding monitoring patients for hypokalemia while they are taking digoxin. One might design the rule such that, when a new potassium value is stored in the electronic health record, it is checked against a reference range (to determine whether the patient is hypo-, hyper-, or normokalemic). If hypokalemia is detected, the rule would then check the medication list to determine whether the patient was on digoxin. The system might then page the responsible physician, notify him or her of the situation and offer therapeutic options, such as adding potassium supplementation or reducing or discontinuing the digoxin.
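
To make these feature demands concrete, the following is a minimal sketch of the digoxin rule as event-driven logic. It is illustrative only: the `ehr` object and its methods (`get_medication_list`, `notify`, `get_responsible_physician`) are hypothetical stand-ins for whatever interfaces a given clinical information system exposes, and the reference range is simplified.

```python
POTASSIUM_LOW = 3.5  # mmol/L; simplified lower bound of the reference range

def on_laboratory_result_stored(ehr, patient_id, result):
    """Trigger: runs whenever a new laboratory result is stored."""
    if result.test != "potassium" or result.value >= POTASSIUM_LOW:
        return  # not a potassium result, or not hypokalemic
    # Input data: check the medication list for digoxin.
    medications = ehr.get_medication_list(patient_id)
    if not any(med.name == "digoxin" for med in medications):
        return
    # Intervention: page the responsible physician, offering choices that
    # can be acted on directly from within the alert.
    ehr.notify(
        recipient=ehr.get_responsible_physician(patient_id),
        message=f"Hypokalemia (K+ = {result.value} mmol/L) in a patient on digoxin.",
        offered_choices=[
            "write order: potassium supplementation",
            "edit existing order: reduce digoxin dose",
            "cancel existing order: discontinue digoxin",
        ],
    )
```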

In 2006, we proposed a taxonomy of clinical decision support capabilities in clinical information systems.23 This taxonomy was based on a comprehensive analysis of the clinical decision support knowledge base in use at Partners HealthCare System. The taxonomy described functional capabilities that could be provided by a clinical information system along four axes:

  • “Triggers: The events that cause a decision support rule to be invoked. Examples of triggers include prescribing a drug, ordering a laboratory test, or entering a new problem on the problem list.”

  • “Input data: The data elements used by a rule to make inferences. Examples include laboratory results, patient demographics, or the patient's problem list.”

  • “Interventions: The possible actions a decision support module can take. These include such actions as sending a message to a clinician, showing a guideline, or simply logging that an event took place.”

  • “Offered choices: Many decision support events require users of a clinical system to make a choice. For example, a rule that fired because a physician entered an order for a drug the patient is allergic to might allow the clinician to cancel the new order, choose a safer alternative drug, or override the alert and keep the order as written but provide an explanation.”23

In addition to identifying the taxa, the taxonomy also indicated the number of rules in use at Partners that depended on each one. The taxa within these four axes are listed in Table 1. The digoxin example above uses the “laboratory result stored” trigger, the “laboratory result/observation” and “drug list” data elements, the “notify” intervention and the “write order”, “cancel existing order” and “edit existing order” offered choices.
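
Viewed through the taxonomy, any rule can be summarized as a record along the four axes. The sketch below encodes the digoxin example in this way; the taxon names follow Table 1, while the data structure itself is our own illustration, not part of the published taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class RuleRequirements:
    """The taxonomy elements a clinical information system must provide
    for a given rule to run as designed (illustrative structure only)."""
    triggers: list[str] = field(default_factory=list)
    input_data: list[str] = field(default_factory=list)
    interventions: list[str] = field(default_factory=list)
    offered_choices: list[str] = field(default_factory=list)

digoxin_rule = RuleRequirements(
    triggers=["laboratory result stored"],
    input_data=["laboratory result/observation", "drug list"],
    interventions=["notify"],
    offered_choices=["write order", "cancel existing order",
                     "edit existing order"],
)
```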

Table 1

Elements of the Taxonomy

Table 1 also shows the frequency of usage of each element of the taxonomy at Partners HealthCare System in the columns labeled “Rules” and “Rule Types”. The Partners knowledge base contains 181 rule types and 7,120 unique rules. An example of a rule type is “drug interaction checking”, while an example of a rule within that rule type would be “co-administration of sildenafil and nitroglycerin is contraindicated.”

If particular functional capabilities are not available in a particular EHR, the ability to carry out decision support is necessarily limited to rules that do not require the missing functionality. For example, if an EHR system did not support triggering based on new laboratory results, the digoxin alert described above could not run in real time. In many cases, CDS interventions can be modified (for example, the digoxin checking rule could be set to run on demand), but such remediation can yield rules that are less effective. For example, researchers at the University of Pittsburgh Medical Center (UPMC) developed a heart failure decision support intervention in a commercial clinical information system from Cerner (Cerner Corporation, Kansas City, MO) that alerted physicians to patients who might have heart failure.24 The alert asked physicians to review the patient's condition and order an angiotensin-converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB). However, Cerner's system had limited support for “offered choices”, so instead of allowing physicians to order the medication directly from within the alert, they were asked to simply acknowledge the alert and then enter the order separately. Only 62% of physicians who said they would start an ACEI or an ARB actually entered the required order.24

The decision support capabilities of commercial EHR systems have not been previously characterized. It is notable, however, that most of the reports of successful decision support systems come from sites that have self-developed rather than commercial EHR systems.14 In this paper, we describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems. The CCHIT is a United States-based nonprofit organization which tests and certifies ambulatory and inpatient electronic health record systems that adhere to CCHIT's functional requirements.

Methods

We identified the best-selling clinical information systems in the United States using figures from KLAS (Orem, UT) and HIMSS Analytics (Chicago, IL) and contacted the companies that developed the systems, as well as their customers, by e-mail or phone. Based on the responses to our initial inquiries (which were generally positive), we identified a purposive sample of nine clinical information systems.

Three of the authors (AW, DFS, SS) conducted a series of interviews and evaluations of these systems, evaluating their capabilities against the 42 elements of the taxonomy described in Table 1. In cases where the respondent was unsure about a particular capability, or where their answer suggested that the capability might be extremely limited, we consulted with other users or contacts within the vendor organization, referred to product manuals, and carried out hands-on evaluations of the information systems until the capability's presence or absence could be determined. This research was approved by the Oregon Health and Science University Institutional Review Board.
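
In effect, the evaluation reduces each system to a rating of present, absent, or not applicable on each of the 42 taxonomy elements; a system's gap count is simply its number of absent ratings. The sketch below shows this scoring model; the element names and ratings are placeholders for illustration, not our actual data.

```python
PRESENT, ABSENT, NOT_APPLICABLE = "present", "absent", "n/a"

def count_gaps(ratings):
    """Count the applicable capabilities a system lacks (its 'gaps')."""
    return sum(1 for rating in ratings.values() if rating == ABSENT)

# Hypothetical ratings for one system; a full evaluation has one entry
# per taxonomy element (42 in all).
example_system = {
    "trigger: laboratory result stored": PRESENT,
    "trigger: hospital admission": NOT_APPLICABLE,  # outpatient-only system
    "input datum: drug list": PRESENT,
    "intervention: notify": PRESENT,
    "offered choice: defer warning": ABSENT,
}

print(count_gaps(example_system))  # -> 1
```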

Results

We were able to successfully complete interviews with knowledgeable individuals for nine commercially available clinical information systems. The systems included are listed in Table 2. These systems represent a broad cross section of the inpatient and outpatient electronic medical record markets and include most of the major systems in both markets. All the systems included in the analysis were certified by the CCHIT. Based on data from HIMSS Analytics, these systems have a collective market share of 76% of the non-self-developed EMR market in the United States.25 To protect the confidentiality of vendors, many of whom consider their product capabilities to be sensitive, our results are presented pseudonymously. We have identified the vendors that are included in our study, but the results are presented using code numbers (the code numbers are consistent across sections and tables, so, for example, “System 3” in Table 3 is the same system as “System 3” in Table 4). Note that there are two systems from GE and two from McKesson; these are distinct systems that came from acquisitions (GE acquired IDX in early 2006 and McKesson acquired Practice Partner in early 2007).

Table 2

Systems Evaluated

Table 3

Availability of Triggers

Table 4

Availability of Input Data Elements

Triggers

Triggers are critical to providing event-driven, action-oriented, real-time clinical decision support and represent the initiating condition for a decision support intervention. Table 3 shows the results of our analysis for triggers. All the triggers in the taxonomy were widely supported, with many of them being supported by all nine systems. One system was scored N/A for the “outpatient encounter opened” trigger because it was an inpatient-only system. Four systems were scored N/A for the hospital admission trigger because they are outpatient-only systems. Two systems were unable to trigger decision support logic based on the entry of a new problem. In our earlier analysis, this trigger was mainly used to initiate care protocols and data entry forms (e.g., requesting information on severity or initiating a management plan when asthma is added to the problem list). Likewise, two systems were unable to trigger decision support based on the entry of weight, which is used for retrospective weight-based dosing checks (e.g., rechecking dose appropriateness when a new weight is entered for an infant). One system was unable to trigger based on storage of a new laboratory result; this was the second most commonly used trigger at Partners (responsible for triggering 998 rules) and is critical for panic laboratory value detection as well as detection of many adverse drug events. Likewise, one system was unable to trigger decision support based on the entry of an allergy (used for retrospective drug-allergy interaction and cross-sensitivity checks).

These omissions aside, six of the nine systems offered all the possible triggers (save for ones assessed as not applicable). System 2 missed a single trigger and System 3 missed two. System 8 offered only four of the nine triggers (with one not applicable and four missing).

Input Data Elements

Nearly all decision support rules require patient-specific data to make their inferences. Table 4 shows the availability of the various data elements in the taxonomy in the nine systems. As with triggers, the four outpatient-only systems (Systems 2, 4, 5, and 9) were not rated on the “hospital unit” or “reason for admission” data elements. Seven of the fourteen data elements (laboratory result/observation, drug list, hospital unit, age, gender, allergy list, and weight) were available in all the information systems to which they were applicable. The other input data elements (diagnosis/problem, nondrug orders, family history, surgical history, reason for admission and prior visit types) were each missing from two systems. Most of these data elements were rarely used in the Partners knowledge base, but the problem/diagnosis input data element was used by 1,587 rules (particularly preventive care reminders, which are often condition-specific, as in retinopathy screening for diabetic patients).

The system-by-system performance was quite variable. Five of the nine systems had no missing capabilities. Two systems missed only a single capability. However, System 3 missed five capabilities and System 8 missed six.

Interventions

Triggers and input data elements represent the afferent arm of decision support. Interventions, by contrast, are efferent. The best decision support systems tailor their interventions based on the severity of the clinical situation and the user's workflow,1 17 so offering a broad palette of interventions is important. Table 5 shows the availability of the various intervention types in the nine systems. As with triggers and input data elements, most systems supported most interventions. The most basic intervention type is notification (which might take the form of a pop-up alert, telephonic page, or e-mail, among other possibilities) and, not surprisingly, all nine systems support notification. The ability to collect free text in response to an alert or to show a data entry template was also universal. Only one system was unable to provide decision-support-informed defaults or pick lists. Likewise, only a single system was unable to perform logging in response to a decision support intervention. Two systems lacked the ability to show a guideline to a user and three were unable to seek approval in response to a decision (for, say, a high-cost therapy or restricted-use antibiotic).

Table 5

Availability of Interventions

Six of the nine systems offered all possible interventions, disregarding those assessed as not applicable. The same three systems that missed triggers also had missing interventions: Systems 2 and 8 each missed two interventions, while System 3 missed three.

Offered Choices

The final axis of the taxonomy is the offered choice, shown in Table 6. Such choices are usually offered alongside a notification, as in the digoxin example given in the background section. Performance on the offered choice axis was much lower than on the other three dimensions of the taxonomy. Of the twelve offered choices, only three (override rule/keep order, cancel current order, and enter weight, height, or age) were available in all nine systems. Three of the offered choices (defer warning, edit existing order, and set allergies) were available in fewer than half of the systems.

Table 6

Availability of Offered Choices

Only a single system (System 5) supported all of the offered choices. Six of the nine systems had at least three gaps, while the worst-performing system (System 3) had a total of eight missing choice capabilities.

System-by-System Performance

Table 7 shows the number of capability deficiencies for each system by category. No system had all forty-two capabilities, although a single system (System 5) was missing only a single capability. Two systems had three gaps and two had four. The sixth best system had six gaps and the seventh had nine. The two worst-performing systems had eighteen gaps each: in other words, they were each missing 43% (18 of 42) of the decision support capabilities in the taxonomy.

Table 7

Deficiencies by System (Count of Capabilities Lacking in Each System Across the Four Axes)

Discussion

Principal Findings

There are two principal findings of this analysis. First, the trigger, input data element, and intervention axes are generally well covered by the major clinical information systems. Offered choices, by contrast, are much less well covered. Offering choices is critical to creating actionable decision support, and evidence and experience suggest that decision support interventions that offer users tailored, clinically appropriate choices are more likely to be successful,9 24 so this gap is likely to be significant. Information system vendors should strongly consider improving support for offered choice where it is missing in their current offerings.

The second principal finding relates to system-by-system variability. The best system in our analysis had only a single gap while the worst had eighteen. This dramatic variability was unexpected and is a cause for concern. Vendors with a significant number of gaps should urgently remediate their systems and purchasers of information systems should exercise careful diligence to ensure that the systems they purchase will meet their decision support needs. In addition, we recommend that the CCHIT include all of these features in their future certification criteria.

Limitations

Our study has three principal limitations. The first is that our analysis is primarily binary (i.e., a system either provides a capability or it does not) but, in practice, the ability of an information system to provide decision support capabilities may be much more fluid. For example, vendors may add (or, in some cases, remove) capabilities when a new software version is released. Also, some capabilities required an add-on module; such systems were scored as providing the capability so long as the add-on was currently available from the vendor. Finally, a capability may exist but be difficult to use or inefficient (for example, complex queries might be needed to access certain input data elements); here, too, any system that had a capability was scored as having it, even if it was very difficult to use. In this analysis, we did not account for the usability of a clinical decision support function, which may dramatically attenuate its potential impact.

The second limitation of our study is actually a limitation of the underlying taxonomy that we employed. The taxonomy is based on a review of clinical decision support content in use at Partners HealthCare System, a single large health system in the Boston, MA area. There are other functional capabilities that might be useful for decision support but which are not included in the taxonomy because they are not used at Partners. For example, LDS Hospital in Salt Lake City, UT has described a decision support system for ventilator management that relies on a direct interface with ventilators to interrogate their settings.26 However, medical device interoperability is not a taxon in the “input data elements” axis of the taxonomy because no rules at Partners use such information. That said, the taxonomy we used is the only one of its kind in the informatics literature. If future extensions to the taxonomy are proposed, it would be useful to re-review the capabilities of these commercial systems to assess their ability to accommodate any new taxa.

The third limitation of our study is our reliance on self-reports from customers and vendors. We believe that the data, as collected, are fairly reliable: when we had doubts about an answer, we asked for a demonstration, spoke with another customer, or reviewed product manuals.

It is also worth noting that our purposive sample was weighted towards the most widely used EHRs. Although the systems we included command a 76% share of the market, there are also many less-widely used information systems which we did not include. As such, our data should be viewed as descriptive and we have therefore avoided extrapolating our findings to the entire market or testing hypotheses.

Implications

Our findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems and certification bodies.

System buyers, particularly those who intend to use their systems for clinical decision support, should carefully inspect systems they are considering to ensure that they have the needed functionality to enable the decision support interventions they wish to include. Buyers should also evaluate the relevance of each capability at their organization. None of the systems we reviewed for this paper had all the capabilities described in the taxonomy, so these decisions are likely to entail tradeoffs.

Decision support developers should also be aware of the capabilities of the information systems in which their decision support content will run. In many cases, rule logic may need to be scaled back or adjusted to accommodate the capabilities of the information system. This poses a special challenge for decision support developers who design interventions to be portable across multiple information systems. In this case, they must either design to the lowest common denominator across all their targeted systems, or they must develop contingencies. For example, a rule might be designed to offer a medication order if the EHR supports offered choices, or a text-only alert if choices are not supported.
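
As a sketch of this contingency approach, the fragment below degrades from an actionable alert to a text-only one based on a target system's declared capabilities. The capability flags and alert structure are invented for illustration and do not correspond to any vendor's API.

```python
def build_heart_failure_alert(ehr_capabilities):
    """Build the ACEI/ARB alert in the richest form the target EHR supports."""
    alert = {
        "message": ("This patient may have heart failure and is not on an "
                    "ACE inhibitor or ARB; please review and consider "
                    "starting one."),
    }
    if "offered choices" in ehr_capabilities:
        # Preferred form: the physician can order directly from the alert.
        alert["offered_choices"] = [
            "write order: ACE inhibitor",
            "write order: ARB",
            "override rule/keep order",
        ]
    else:
        # Fallback: text-only alert; the physician must enter the order
        # separately (the less effective pattern observed at UPMC).
        alert["acknowledge_only"] = True
    return alert
```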

Clinical information system vendors should also be aware of the decision support capabilities of their own products as well as those of their competitors. As more customers begin to prioritize decision support capabilities, features to support decision support may become an important differentiating factor in the marketplace.

Finally, certification bodies, particularly the Certification Commission for Health Information Technology (CCHIT; http://www.cchit.org), should consider certifying the decision support capabilities of clinical information systems. The CCHIT criteria development process takes into account the importance of particular features as well as their availability in the marketplace. The previously published taxonomy quantitatively describes the extent of use of various decision support-related features (a possible proxy for importance), and this paper surveys current marketplace capabilities. Since input data elements are widely available already, it may make sense to include them as certification criteria in the short term (indeed, many of them are already required by CCHIT27). Offered choices, while important, are much less widely available. As such, they may be candidates for the one- or two-year certification roadmaps, allowing vendors time to build these capabilities into their EHRs.

We should note that, just as we identified a dichotomy between availability and use of a feature, there also exists a dichotomy in the realm of certification. Like our binary classification (feature present or absent), CCHIT certifies products based on the presence or absence of required features in systems as marketed by vendors. The Leapfrog Group, a coalition of large employers in the United States focused on improving healthcare quality, has taken an alternative approach: Leapfrog tests implementations of clinical systems at clinical sites, looking for evidence of effective and correct implementation and use of the system rather than the raw presence or absence of features.28 We believe that these certification approaches are complementary and, in the future, hope to assess the extent of use of these functions, of which we have, thus far, assessed only availability.

Footnotes

  • The authors are grateful to James Carpenter, Brian Churchill, Sarah Corley, Melissa Honour, Michael Krall, James McCormack, Dolores Pratt, Sandi Rosenfeld, Eric Rose, and Nicole Vassar, who provided the information on system capabilities used in this work. Without their willingness to be interviewed, to conduct demonstrations, and to provide us with access to their information systems, the authors could not have completed the study.

  • This study was funded, in part, by AHRQ contract HHSA29020080010 and NLM Research Grant R56-LM006942-07A1.

  • The funding agencies had no role in the design of the study, analysis of the data, interpretation of the results, or the decision to publish.

References
