Laboratory Simulation Studies Uncover Risks and Inform Evidence-Based Design
We appreciate the response letter from Baysari and Lehnbom, which allows us to highlight the value of laboratory-based research and the results of our recent study. The evidence for the effectiveness of the redesigned computerized medication alerts compared to the original alerts in our study was mixed, as Baysari and Lehnbom note. However, there was a clear positive overall impact of the redesigned alerts across several measures. We also took care to outline the limitations of our study so that the readers could take those limitations into consideration when interpreting our study findings.
We find the main criticism from Baysari and Lehnbom, of the transferability of laboratory findings to a real-life context, to be a well-worn one that has been argued for decades by many others and has even surfaced previously in JAMIA. Human factors methods include both laboratory-based and field-based approaches and each has inherent advantages and disadvantages. Field methods can capture the complexity and preserve the context of the work environment within which health information technology is implemented. In fact, our laboratory research stemmed from our previous work, in which we examined alerts in the context of care. We agree that the contextual factors we identified in that initial study are difficult to simulate in a laboratory.
Laboratory-based methods, on the other hand, allow for the manipulation of experimental conditions and measurement of dependent variables in a carefully controlled setting. For example, we can measure efficiency (time to complete tasks) and perceived mental workload. Imagine if we had only measured the same usability outcomes in the context of care. One could easily criticize those methods as being at risk for introducing extraneous variability that exists in the "naturalistic" setting, thus potentially contaminating the results for efficiency, perceived workload, and other human factors measures. Clearly, an ideal study plan often takes a multi-method approach that leverages the advantages of both field and laboratory human factors methods, which is what we intend to pursue.
A laboratory also provides a safe environment separate from patient care, where unintended consequences of information technology designs can be assessed prior to implementation.[7,8]
Baysari and Lehnbom note that: "...The demonstrated efficiency benefits are not very surprising when one considers that 1) redesigned alerts contained less text than the original designs and 2) fewer alerts and dialogue boxes were presented in scenarios containing redesigned alerts. With less to read and fewer responses to make, it is no surprise that prescribers were faster at completing the tasks." This is an oversimplified characterization of the redesigned alerts. Design enhancements were based on more than just minimalistic design; they were also based on human factors constructs and principles such as situation awareness, the proximity compatibility principle, chunking of information units, consistency, hierarchy of hazard control, function allocation, warning design guidelines, and other relevant human factors literature (see Table 1 in our article). Many of the human factors design enhancements may seem "intuitively beneficial", as Baysari and Lehnbom note. However, we still need evidence that they increase efficiency and do so without negatively affecting other measures, such as number of errors. We were able to demonstrate this in the simulation laboratory, which now provides us with an evidence-based foundation for continuing with piloting the redesigned alerts in the field. Testing these design changes initially in the field during real patient care tasks would be riskier, and perhaps unethical, without some initial evidence of their effectiveness. Our article identifies specific design modifications that should not be implemented because of risks that were identified in the laboratory simulation.
We have always subscribed to a multi-method approach to human factors research, and we encourage others to follow.
Acknowledgements: This work was supported by VA HSR&D grant #PPO 09-298 (PI: Alissa L Russ) and manuscript preparation was supported by the Center for Health Information and Communication, Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service, CIN 13-416. (Work occurred under the Center of Excellence on Implementing Evidence-Based Practice, HFP 04-148.) Drs. Russ and Saleem were supported by VA HSR&D Research Career Development Awards (CDA 11-214 and CDA 09-024-1, respectively). Views expressed in this letter are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs or the U.S. government.
1. Baysari MT, Lehnbom E. Response to applying human factors principles to alert design. J Am Med Inform Assoc [E-Letter] April 14, 2014.
2. Russ AL, Zillich AJ, Melton B, et al. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation. J Am Med Inform Assoc Published online ahead of print March 25, 2014.
3. Dipboye RL, Flanagan MF. Research settings in industrial and organizational psychology: Are findings in the field more generalizable than in the laboratory? American Psychologist 1979;34(2):141-50.
4. Falk A, Heckman JJ. Lab experiments are a major source of knowledge in the social sciences. Science 2009;326:535-8.
5. Staggers N, Brennan PF. Translating knowledge into practice: Passing the hot potato! J Am Med Inform Assoc 2007;14:684-5.
6. Russ AL, Zillich AJ, McManus MS, et al. Prescribers' interactions with medication alerts at the point of prescribing: a multi-method, in situ investigation of the human-computer interaction. Int J Med Inform 2012;81:232-43.
7. Russ AL, Weiner M, Russell SA, et al. Design and implementation of a hospital-based usability laboratory: Insights from a Veterans Health Administration laboratory for health information technology. Jt Comm J Qual Patient Saf 2012; 38(12):531-40.
8. Saleem JJ, Russ AL, Sanderson P, et al. Current challenges and opportunities for better integration of human factors research with development of clinical information systems. Yearb Med Inform 2009:48-58.
Conflict of Interest:
Practical implications of an Electronic Health Record inclusive of Gender Identity
As a health professional caring for transgender patients, I commend Dr. Deutsch et al for their recommendations in the article "Electronic medical records and the transgender patient: recommendations from the World Professional Association for Transgender Health EMR Working Group" (1). These recommendations to (a) include preferred name, gender identity, and pronoun preference as identified by patients as demographic variables; (b) create mechanisms to inventory patients' current anatomy and medical transition history; (c) smoothly transition from one listed name, anatomic inventory, and/or sex to another without affecting the remainder of the patient's record; and (d) notify health professionals and clinic staff of a patient's preferred name and/or pronoun are vital tools for providing high-quality care to transgender people and achieving the Triple Aim in health care (2). To review, the goals of the Triple Aim are to:
Improve the patient experience of care (including quality and satisfaction); improve the health of populations; and reduce the per capita cost of health care.
I serve as the Clinic Director of the PRIDE Clinic in Cleveland, OH. This hospital-based health service focuses on the health needs of Lesbian, Gay, Bisexual and Transgender (LGBT) patients, their friends, and families. Approximately 30% of our patients self-identify as transgender and are at various points in their gender transition. Some patients have had sex reassignment surgery and are well established on cross-gender hormonal care, while others are just exploring their gender identities. Some patients have legally changed their names and gender markers, while others have not yet had the opportunity to make those changes. Our patients represent a diverse spectrum of gender identities, and their gender identity and gender expression may at times be incongruent with a binary, sex-assigned electronic health record. The recommendations set forth by the Transgender EMR Working Group can improve health care delivery by improving patient safety and transgender population health and by reducing health care costs.
Patient Care Experience and Safety:
Transgender people face discrimination in many facets of their lives, including health care access. A 2011 national survey of transgender people found that nearly one out of five individuals was denied care due to their gender identity (3). About 30% of transgender people postponed care when sick or injured, and postponed preventive health care, due to discrimination and disrespect by providers. Female-to-male transgender people were the most likely to postpone care due to discrimination (4). Among transgender people who do succeed in accessing health care, many still face challenges with potentially stigmatizing diagnoses commonly used by health professionals: ICD-9 Gender Dysphoria; ICD-10 Transsexualism (F64.0), Gender Dysphoria in Childhood (F64.2), and Gender Dysphoria NOS (F64.9).
Moreover, binary, gender-specified diagnostic codes in the EHR add confusion when assessing and identifying the appropriate anatomy and procedures for a given patient. Such confusion is encountered, for example, when the need arises before a surgical procedure to place a Foley catheter for bladder drainage in a male-identified patient who is transgender and has a vagina. A binary EHR omits relevant anatomic information important to the care of this patient. Correct identification of a patient's anatomic inventory, independent of their gender identity, supports best practices in accordance with The Joint Commission's Universal Protocol to prevent wrong-site, wrong-procedure, and wrong-person surgeries (5).
Health Care Costs and Revenue Cycle:
In today's lean clinical environments with tight budgets, processes that allow for accurate diagnosis and billing, fewer audits and claims rejections, and prompt chart completion avoid loss of income and interruption of the revenue cycle. Alignment of anatomic procedures and diagnoses allows for prompt processing of health care claims. Gender-exclusionary binary algorithms, however, may delay this vital process and delay reimbursement.
To illustrate, I recently saw a young transman for a routine exam, and as part of his routine care we conducted a pelvic exam, Pap smear, and other health surveillance testing for his reproductive anatomy. A problem arose when I attempted to close the chart: I received an error message stating "the patient's diagnosis/procedure is not allowed based on patient gender." After several e-mails, phone calls, and interactions with our clinical site coordinator, health informatics specialist, and senior administrators, I received the help necessary to complete the patient's chart. The assistance provided was exemplary, but the chart was closed more than 90 days after the day the patient was seen.
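The gender-based check that blocked this chart could, in principle, be replaced by a check against the anatomic inventory recommended above. A minimal sketch of the idea follows; the procedure names, data structures, and function are hypothetical illustrations, not drawn from any real EHR system.

```python
# Hypothetical sketch: validating a procedure against the organs a
# patient actually has (an "anatomic inventory") rather than against a
# binary gender marker. All names and structures here are illustrative.

# Map each billable procedure to the anatomy it requires.
REQUIRED_ANATOMY = {
    "pap_smear": {"cervix"},
    "prostate_exam": {"prostate"},
}

def procedure_allowed(procedure: str, anatomy_inventory: set) -> bool:
    """Return True if the patient's inventoried anatomy includes
    everything the procedure requires; gender is never consulted."""
    required = REQUIRED_ANATOMY.get(procedure)
    if required is None:
        raise ValueError(f"unknown procedure: {procedure}")
    return required <= anatomy_inventory

# A transgender man with a cervix: the Pap smear clears validation
# instead of triggering "not allowed based on patient gender".
patient_anatomy = {"cervix", "uterus", "ovaries"}
assert procedure_allowed("pap_smear", patient_anatomy)
assert not procedure_allowed("prostate_exam", patient_anatomy)
```

Because the check keys on the anatomic inventory alone, the same record supports patients at any point in transition without chart-closing errors.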
Health care reform as part of the Affordable Care Act will bring more patients into the health care system, including those whose gender identities and anatomies do not fit a binary system of care. Our health record systems must develop methods to collect sexual orientation and gender identity data and adapt to meet the needs of these patients for better care outcomes and experiences.
1. Deutsch MB, Green J, Keatley JA, et al. Electronic medical records and the transgender patient: recommendations from the World Professional Association for Transgender Health EMR Working Group. J Am Med Inform Assoc 2013;20:700-703.
2. Institute for Healthcare Improvement website. http://www.ihi.org/offerings/Initiatives/TripleAIM/Pages/default.aspx (accessed 25 Nov 2013).
3. National Gay and Lesbian Task Force. Injustice at Every Turn: A Report of the National Transgender Discrimination Survey. 2011. http://www.thetaskforce.org/reports_and_research/ntds8b (accessed 25 Nov 2013).
4. Ibid.
5. Joint Commission website, Universal Protocol. http://www.jointcommission.org/assets/1/18/UP_Poster1.PDF (accessed 25 Nov 2013).
Conflict of Interest:
I am the President of GLMA: Health Professionals Advancing LGBT Equality and a member of the National Advisory Council to the Agency for Health Care Research and Quality.
Response to Applying Human Factors Principles to Alert Design
In this issue of JAMIA, Russ et al present a very interesting study demonstrating that redesigning computerised alerts can improve their effectiveness in terms of usability, efficiency, and safety. Twenty prescribers completed six case scenarios (two blocks of three) in which they were required to prescribe and respond to alerts generated in their Veterans Affairs system. In three scenarios, doctors were presented with the alerts they typically experienced while prescribing; in the other three, they were presented with alerts redesigned using well-established human factors principles. The authors found that prescribers read the alerts and completed the scenarios faster, experienced less workload, and made fewer prescribing errors in scenarios with the redesigned alerts. Users were also more satisfied with the redesigned alerts than with the original alerts.
Although a well-designed and thought-out study, the authors' conclusions are at times overstated. The demonstrated efficiency benefits are not very surprising when one considers that 1) redesigned alerts contained less text than the original designs and 2) fewer alerts and dialogue boxes were presented in scenarios containing redesigned alerts. With less to read and fewer responses to make, it is no surprise that prescribers were faster at completing the tasks. The authors also claim that the new alerts 'foster learnability' but can this be concluded when no data is presented on learnability of the original alert types and 15% of users encountered problems when trying to move past the redesigned alerts?
While the differences in user satisfaction appear robust, the differences that emerged in perceived workload when using the two alert types are not as convincing. One wonders whether a difference of 2.6 on the NASA TLX scale (which ranges from 0-100) really equates to a difference in perceived workload.
Of more concern are the prescribing error data. On closer inspection of the error data (Figure 4B), it appears that alert redesign primarily reduced errors in one scenario (Scenario 2), with little benefit observed in the other scenarios. The authors acknowledge that their study was not intended to examine the effect of a single design feature on error occurrence, but some discussion of why this scenario benefited from alert redesign (e.g., the original alerts were not all visible on the screen until redesign took place) while the other scenarios did not would have strengthened their case.
Most importantly, transferability of these findings to real-life contexts is questionable. The authors explain that they did not observe any prescriber click past an alert without reading it, a result which is in stark contrast to what we have observed to take place in naturalistic settings. Alert fatigue, recognised as a common problem for most users of electronic prescribing systems, results in prescribers dismissing alerts, sometimes even those that signal unsafe prescribing. An obvious question is whether the benefits of alert redesign on efficiency, workload and prescribing errors also occur when prescribers are presented with alerts outside of the simulation lab? And does good alert design overshadow poor alert sensitivity and specificity, a characteristic of many alert systems?
Overall, Russ et al's study represents one of the first systematic investigations of human factors design on alert impact and it provides us with an excellent starting point for further research on alert design. However, until this work moves out of the lab and into practice, we are left wondering whether human factors design, although intuitively beneficial, is actually able to increase an alert's capacity to improve prescribing.
1. Russ AL, Zillich AJ, Melton BL, et al. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation. J Am Med Inform Assoc Published online ahead of print March 25, 2014.
2. Baysari MT, Westbrook JI, Richardson KL, Day RO. The influence of computerized decision support on prescribing during ward-rounds: are the decision-makers targeted? J Am Med Inform Assoc 2011;18:754-759.
3. Jenders RA, Osheroff JA, Sittig DF, Pifer EA, Teich JM. Recommendations for clinical decision support deployment: synthesis of a roundtable of medical directors of information systems. AMIA Symposium Proceedings 2007:359-363.
4. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006;13(2):138-147.
5. Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians' decisions to override computerised drug alerts in primary care. Arch Intern Med 2003;163:2625-2631.
Conflict of Interest: