
Response to Applying Human Factors Principles to Alert Design

In this issue of JAMIA, Russ et al[1] present a very interesting study demonstrating that redesigning computerised alerts can improve their usability, efficiency and safety. Twenty prescribers completed six case scenarios (two sets of three) in which they were required to prescribe and respond to alerts generated in their Veterans Affairs system. In three scenarios, doctors were presented with the alerts they typically experienced while prescribing; in the other three, they were presented with alerts that had been redesigned using well-established human factors principles. The authors found that prescribers read the alerts and completed the scenarios faster, experienced less workload, and made fewer prescribing errors when presented with the redesigned alerts. Users were also more satisfied with the redesigned alerts than with the originals.

Although the study is well designed and thought out, the authors' conclusions are at times overstated. The demonstrated efficiency benefits are not very surprising when one considers that 1) the redesigned alerts contained less text than the original designs and 2) fewer alerts and dialogue boxes were presented in scenarios containing redesigned alerts. With less to read and fewer responses to make, it is no surprise that prescribers completed the tasks faster. The authors also claim that the new alerts 'foster learnability', but can this be concluded when no data are presented on the learnability of the original alert types and 15% of users encountered problems when trying to move past the redesigned alerts?

While the differences in user satisfaction appear robust, the differences in perceived workload between the two alert types are less convincing. One wonders whether a difference of 2.6 on the NASA-TLX scale (which ranges from 0 to 100) really equates to a meaningful difference in perceived workload.

Of greater concern are the prescribing error data presented. On closer inspection of the error data (Figure 4B), it appears that the alert redesign primarily reduced errors in a single scenario (Scenario 2), with little benefit observed in the others. The authors acknowledge that their study was not intended to examine the effect of any single design feature on error occurrence, but some discussion of why this scenario benefited from the redesign when the others did not (e.g. the original alerts in this scenario were not fully visible on screen until redesigned) would have strengthened their case.

Most importantly, the transferability of these findings to real-life contexts is questionable. The authors explain that they did not observe any prescriber click past an alert without reading it, a result in stark contrast to what we have observed in naturalistic settings[2]. Alert fatigue, recognised as a common problem for most users of electronic prescribing systems[3], results in prescribers dismissing alerts, sometimes even those that signal unsafe prescribing[4]. An obvious question is whether the benefits of alert redesign on efficiency, workload and prescribing errors persist when prescribers encounter alerts outside the simulation lab. And can good alert design compensate for the poor sensitivity and specificity characteristic of many alert systems[5]?

Overall, Russ et al.'s study represents one of the first systematic investigations of the impact of human factors design on alerts, and it provides an excellent starting point for further research on alert design. However, until this work moves out of the lab and into practice, we are left wondering whether human factors design, although intuitively beneficial, actually increases an alert's capacity to improve prescribing.

References

1. Russ AL, Zillich AJ, Melton BL, et al. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation. J Am Med Inform Assoc. Published online March 25, 2014.
2. Baysari MT, Westbrook JI, Richardson KL, Day RO. The influence of computerized decision support on prescribing during ward-rounds: are the decision-makers targeted? J Am Med Inform Assoc. 2011;18:754-759.
3. Jenders RA, Osheroff JA, Sittig DF, Pifer EA, Teich JM. Recommendations for clinical decision support deployment: synthesis of a roundtable of medical directors of information systems. AMIA Annu Symp Proc. 2007:359-363.
4. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138-147.
5. Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med. 2003;163:2625-2631.

Conflict of Interest: None declared
