J Am Med Inform Assoc doi:10.1136/amiajnl-2013-001613
  • Research and applications

Formative evaluation of the accuracy of a clinical decision support system for cervical cancer screening

Open Access
  Rajeev Chaudhry^7
  1. Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota, USA
  2. Division of Family Medicine, Mayo Clinic, Rochester, Minnesota, USA
  3. Division of Obstetrics–Gynecology, Mayo Clinic, Rochester, Minnesota, USA
  4. Division of Anatomic Pathology, Mayo Clinic, Rochester, Minnesota, USA
  5. Department of Biomedical Informatics, Arizona State University, Phoenix, Arizona, USA
  6. Department of Health Science Research, Mayo Clinic, Scottsdale, Arizona, USA
  7. Division of Primary Care Internal Medicine, Center for Innovation, Mayo Clinic, Rochester, Minnesota, USA

  Correspondence to Dr Kavishwar Wagholikar, Division of Biomedical Statistics and Informatics, Mayo Clinic, 200 First Street SW, Rochester, MN 55901, USA; waghsk{at}
  • Received 1 January 2013
  • Revised 16 February 2013
  • Accepted 6 March 2013
  • Published Online First 5 April 2013


Objectives We previously developed and reported on a prototype clinical decision support system (CDSS) for cervical cancer screening. Because the system is based on multiple guidelines and relies on free-text processing, it is complex and therefore susceptible to failures. This report describes a formative evaluation of the system, a necessary step to ensure its readiness for deployment.

Materials and methods Care providers who are potential end-users of the CDSS were invited to provide their recommendations for a random set of patients that represented diverse decision scenarios. The recommendations of the care providers and those generated by the CDSS were compared. Mismatched recommendations were reviewed by two independent experts.
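The comparison step described above can be sketched as follows. This is an illustrative sketch only; the data model and field names are assumptions for exposition, not the authors' actual implementation:

```python
# Sketch of the evaluation workflow (assumed data model): compare the
# CDSS-generated recommendation against the care provider's recommendation
# for each case, compute accuracy over all cases, and collect mismatched
# cases for independent expert review.

def evaluate_cdss(cases):
    """Each case is a dict with 'cdss_rec' and 'provider_rec' strings."""
    mismatches = []
    matches = 0
    for case in cases:
        if case["cdss_rec"] == case["provider_rec"]:
            matches += 1
        else:
            mismatches.append(case)  # routed to the two independent experts
    accuracy = matches / len(cases) if cases else 0.0
    return accuracy, mismatches

# Hypothetical example with two cases, one agreement and one disagreement:
cases = [
    {"cdss_rec": "repeat screening in 3 years",
     "provider_rec": "repeat screening in 3 years"},
    {"cdss_rec": "repeat screening in 3 years",
     "provider_rec": "refer for colposcopy"},
]
accuracy, for_review = evaluate_cdss(cases)
print(accuracy)        # 0.5
print(len(for_review))  # 1
```

In the study itself, the mismatched cases surfaced by this kind of comparison were adjudicated by two independent experts rather than being treated automatically as CDSS errors.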

Results A total of 25 users participated in this study and provided recommendations for 175 cases. The CDSS achieved an accuracy of 87%, and 12 types of CDSS errors were identified, mainly attributable to deficiencies in the system's guideline rules. After the deficiencies were rectified, the CDSS generated optimal recommendations for all failure cases except one with incomplete documentation.

Discussion and conclusions The crowd-sourcing approach to constructing the reference set, coupled with expert review of mismatched recommendations, enabled an effective evaluation and enhancement of the system by identifying decision scenarios that the system's developers had missed. The described methodology will be useful for other researchers who seek to rapidly evaluate and enhance the deployment readiness of complex decision support systems.

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non-commercial, and is otherwise in compliance with the license.
