J Am Med Inform Assoc 13:61-66 doi:10.1197/jamia.M1780
  • Original Investigation
  • Research Paper

Acute Infections in Primary Care: Accuracy of Electronic Diagnoses and Electronic Antibiotic Prescribing

  1. Jeffrey A Linder,
  2. David W Bates,
  3. Deborah H Williams,
  4. Meghan A Connolly,
  5. Blackford Middleton
  1. Affiliations of the authors: From the Division of General Medicine, Brigham and Women's Hospital (JAL, DWB, DHW, BM); Harvard Medical School (JAL, DWB, BM); Clinical Informatics Research and Development, Partners HealthCare System (BM); Yale School of Nursing and Yale University School of Medicine (MAC)
  1. Correspondence and reprints: Jeffrey A. Linder, MD, MPH, Division of General Medicine, Brigham and Women's Hospital, 1620 Tremont Street, BC-3-2X, Boston, MA 02120; e-mail: jlinder@partners.org
  • Received 20 December 2004
  • Accepted 21 September 2005

Abstract

Objective To maximize effectiveness, clinical decision-support systems must have access to accurate diagnostic and prescribing information. We measured the accuracy of electronic claims diagnoses and electronic antibiotic prescribing for acute respiratory infections (ARIs) and urinary tract infections (UTIs) in primary care.

Design A retrospective, cross-sectional study of randomly selected visits to nine clinics in the Brigham and Women's Practice-Based Research Network between 2000 and 2003 with a principal claims diagnosis of an ARI or UTI (N = 827).

Measurements We compared electronic billing diagnoses and electronic antibiotic prescribing to the gold standard of blinded chart review.

Results Claims-derived, electronic ARI diagnoses had a sensitivity of 98%, specificity of 96%, and positive predictive value of 96%. Claims-derived, electronic UTI diagnoses had a sensitivity of 100%, specificity of 87%, and positive predictive value of 85%. According to the visit note, physicians prescribed antibiotics in 45% of ARI visits and 73% of UTI visits. Electronic antibiotic prescribing had a sensitivity of 43%, specificity of 93%, positive predictive value of 90%, and simple agreement of 64%. The sensitivity of electronic antibiotic prescribing increased over time from 22% in 2000 to 58% in 2003 (p for trend < 0.0001).

Conclusion Claims-derived, electronic diagnoses for ARIs and UTIs appear accurate. Although it is closing, a large gap persists between antibiotic prescribing documented in the visit note and the use of electronic antibiotic prescribing. Barriers to electronic antibiotic prescribing in primary care must be addressed to leverage the potential that computerized decision-support systems offer in reducing costs, improving quality, and improving patient safety.

To determine the accuracy of electronic diagnoses and electronic antibiotic prescribing for acute infections in primary care, we performed a study of acute respiratory infection (ARI) and urinary tract infection (UTI) visits in a practice-based research network.

Background

Accurate electronic health information is important for patient care, clinical operations, quality improvement, and research efforts. Increasingly, clinical decision support systems are being designed to assist clinicians in providing high-quality care, but such systems are also dependent on accurate electronic information. Electronic diagnoses and electronic medication information have been shown to be generally accurate for many chronic conditions.1 2 However, less is known about the accuracy of electronic health information for acute conditions in primary care. Two of the most common acute problems in primary care are ARIs and UTIs.

ARIs, including nonspecific upper respiratory tract infections, otitis media, sinusitis, pharyngitis, acute bronchitis, influenza, and pneumonia, are the most common symptomatic reason for seeking care in the United States, accounting for 7% of all ambulatory visits.3 ARIs are the number one reason for antibiotic prescribing in the United States and account for about 50% of all antibiotic prescriptions to adults.4 Although antibiotic prescribing for ARIs has decreased,4 much antibiotic prescribing for ARIs remains inappropriate. Inappropriate antibiotic prescribing does not improve outcomes, but exposes individual patients to the risk of adverse drug events,5 6 7 8 9 10 11 increases the prevalence of antibiotic-resistant bacteria,12 13 14 and increases costs. The total cost, direct and indirect, of ARIs is at least $40 billion per year, with unnecessary antibiotics accounting for over $1.1 billion of the total.15

UTIs represent another common condition in primary care for which physicians commonly prescribe antibiotics. In contrast to ARIs, antibiotics are generally indicated in the treatment of UTIs, and physicians prescribe antibiotics to 60% to 75% of patients diagnosed with UTIs.4 16 However, there has been concern about the type of antibiotic chosen for UTIs, with clinicians increasingly selecting broader spectrum antibiotics. Broader spectrum antibiotics made up 20% of antibiotic prescriptions for UTIs in 1991–1992 and over 40% of antibiotic prescriptions in 1998–1999.4

Electronic health records with integrated clinical decision support systems have the potential to improve antibiotic prescribing in primary care for ARIs and UTIs in a cost-effective, sustainable way.17 18 However, to be effective, clinical decision support systems for ARIs and UTIs must have access to accurate diagnostic and prescribing information. We performed a cross-sectional study of visits from 2000 to 2003 to measure the accuracy of electronic ARI and UTI diagnoses and the accuracy of electronic antibiotic prescribing in a primary care practice-based research network.19

Hypotheses

We had two main hypotheses: (1) that claims-derived, electronic ARI and UTI diagnoses would be generally accurate and (2) that the accuracy of electronic antibiotic prescribing in primary care would be less than previously shown for long-term medications.

Methods

Setting

The Brigham and Women's Primary Care (BWPC) Practice-Based Research Network (PBRN) includes nine primary care clinics in the greater Boston area. The BWPC clinics have approximately 95 practicing attending physicians, as well as internal medicine residents who provide longitudinal and urgent care. The BWPC clinics include two community health centers, four hospital-based clinics, and three community-based clinics. In 2002, the BWPC clinics provided primary care for over 72,000 adults and children and had over 230,000 patient visits.

Longitudinal Medical Record

The BWPC-PBRN is linked with a common Web-based electronic health record, the Longitudinal Medical Record (LMR), which is the official patient record for the BWPC-PBRN. The LMR is an internally developed, fully functioning electronic record that includes typed and dictated notes from primary care and subspecialty clinics; International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)–coded problem lists; medication lists; coded allergies; and laboratory test and radiographic results. Clinicians directly type 65% of BWPC-PBRN notes. Some clinicians have developed personal templates for various problems, but there are no systemwide templates.

BWPC clinicians use the LMR to write prescriptions that can be printed or transmitted to pharmacies electronically. The medication list in the LMR is meant to be both descriptive (a list of all medications the patient is taking or has taken) and prescriptive (new prescriptions are to be written using the LMR). Physicians are strongly encouraged to use the electronic prescribing functionality of the LMR for all prescriptions. To prescribe a medication using the LMR, clinicians select "medications" from a drop-down menu within a patient's electronic chart; type the first few letters of the medication to be prescribed; select the medication from a list; complete a "prescription pad" screen with the dose, frequency, number of pills to be dispensed, and number of refills; and electronically "sign" the prescription, which prints it in the examination room.

The LMR prescribing module has nine of 14 functional capabilities recently identified in a conceptual model of electronic prescribing: patient selection, medication selection menus, safety alerts, formulary alerts, dosage calculation, medication administration aids, patient education material, data transmission, and alerts for patients' failure to refill.20 The LMR prescribing module lacks five functional capabilities: diagnosis selection, in-office dispensing, refill and renewal reminders, corollary orders, and automated questionnaires.

During the study period, enhancements to the LMR prescribing module included improvements to make it easier to prescribe multiple medications, facilitate selecting the route of administration, facilitate printing or faxing prescriptions, and improvements in drug-allergy, drug-pregnancy, drug-drug, and drug-lab warnings.

Computers running the LMR are available in most examination rooms. The LMR was introduced in eight BWPC clinics in July 2000 and has been in use in all the BWPC clinics since June 2001. The LMR has a downtime of 0.7%.

Data Sources

Partners HealthCare, of which Brigham and Women's is a part, maintains the Research Patient Data Repository (RPDR), which pools inpatient and outpatient encounter data from all Partners HealthCare sites.21 The RPDR identifies claim diagnoses by ICD-9-CM codes and includes information about visit dates, site of care, visit notes, and patient demographics. Diagnosis codes are generated using clinic-specific “superbills,” with evaluation and management codes on the front and ICD-9 codes on the back. Clinicians select the appropriate diagnosis code, which is later entered into the system by administrative staff. Diagnosis codes are not derived from the LMR but are electronically available.

We linked data derived from the RPDR to data from the LMR, including medication prescribing and provider characteristics.

Data Extraction

We identified visits made to a BWPC clinic with a principal diagnosis of an ARI or UTI between January 1, 2000, and November 13, 2003, using the RPDR. ARI diagnoses included nonspecific URIs (ICD-9-CM 460, 464, and 465), otitis media (ICD-9-CM 381 and 382), sinusitis (ICD-9-CM 461 and 473), pharyngitis (ICD-9-CM 034.0, 462, and 463), acute bronchitis (ICD-9-CM 466 and 490), pneumonia (ICD-9-CM 481-486), and influenza (ICD-9-CM 487). UTI diagnoses included cystitis (ICD-9-CM 595) and UTI, site not specified (ICD-9-CM 599.0). From this pool of encounters, we randomly selected 1,000 visits, stratified by calendar year and ARI or UTI diagnosis. Because the LMR was introduced in some clinics early in the study period, we excluded visits for which the LMR was not in use during the study period. We only included visits for which we could find the corresponding visit note in the LMR.
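The diagnosis grouping described above can be sketched as a simple code lookup. The ICD-9-CM code groups are taken directly from the text; the function and mapping names are our own assumptions, not the authors' implementation.

```python
# ICD-9-CM code groups for ARI and UTI visits, as listed in the text.
ICD9_GROUPS = {
    "nonspecific URI": ("460", "464", "465"),
    "otitis media": ("381", "382"),
    "sinusitis": ("461", "473"),
    "pharyngitis": ("034.0", "462", "463"),
    "acute bronchitis": ("466", "490"),
    "pneumonia": ("481", "482", "483", "484", "485", "486"),
    "influenza": ("487",),
    "UTI": ("595", "599.0"),
}

def classify_icd9(code):
    """Map a principal ICD-9-CM code to its ARI category or UTI; None if neither.

    A three-digit category (e.g., "466") matches itself and any of its
    subdivided codes (e.g., "466.0").
    """
    for category, prefixes in ICD9_GROUPS.items():
        for p in prefixes:
            if code == p or code.startswith(p + "."):
                return category
    return None
```

A classifier like this, applied to the principal claims diagnosis of each encounter, would yield the pool from which the 1,000 stratified visits were sampled.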

Although ARIs seem heterogeneous, it is useful to consider them as a group because of significant overlap in pathophysiology, signs, and symptoms. In addition, clinicians may be inclined to use “diagnosis shifting” if they are aware that antibiotic prescribing practices are being monitored.22 For example, when treating a patient with sinus congestion but prescribing antibiotics, a clinician might be more inclined to diagnose a patient with sinusitis, an antibiotic-appropriate diagnosis, instead of nonspecific URI, an antibiotic-inappropriate diagnosis.

Data Analysis

One author reviewed the visit notes blinded to the encounter diagnosis and electronic antibiotic prescribing. From the notes, we extracted the primary diagnosis responsible for the visit into nine possible categories (nonspecific URIs, otitis media, sinusitis, pharyngitis, bronchitis, pneumonia, influenza, UTI, or other diagnosis) and whether an antibiotic was prescribed. We used the first listed diagnosis in the “assessment and plan” as the primary diagnosis. In our definition of antibiotic prescribing, we included “delayed prescriptions,” in which the physician instructed the patient to fill the prescription only if symptoms failed to resolve after a specific amount of time. For quality control, we randomly selected 100 visits for duplicate chart abstraction by a second author for the diagnosis and antibiotic prescribing. For the nine potential diagnoses, interobserver agreement was 94% and for antibiotic prescribing, interobserver agreement was 99%. We did not assess the appropriateness of antibiotic prescribing.

We considered data abstracted from the visit notes to be the gold standard. We considered any electronic antibiotic prescription within 30 days of the index visit to be attributable to that visit because (1) documentation can occur after the visit date, (2) prescribing may have occurred subsequent to the visit (e.g., as a result of telephone follow-up documented in an addendum to the original visit with the original visit date), and (3) we wanted to maximize the apparent sensitivity of electronic antibiotic prescribing.
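The 30-day attribution rule can be sketched as follows; the function and argument names are assumptions for illustration, not the study's actual implementation.

```python
from datetime import date, timedelta

def prescription_attributed(visit_date, rx_dates):
    """True if any electronic antibiotic prescription falls within the
    30 days following the index visit (inclusive of the visit date)."""
    window_end = visit_date + timedelta(days=30)
    return any(visit_date <= d <= window_end for d in rx_dates)
```

Under this rule, a prescription signed three weeks after the visit (e.g., after telephone follow-up) still counts toward that visit, which, as the authors note, can only raise the apparent sensitivity of electronic prescribing.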

To assess the accuracy of electronic ARI and UTI diagnoses, we calculated sensitivity, specificity, and positive predictive value.23 To assess the accuracy of electronic antibiotic prescribing, we calculated the sensitivity, specificity, positive predictive value, and simple agreement. We used sensitivity as a measure of the completeness and positive predictive value as a measure of the correctness of the electronic diagnoses and electronic antibiotic prescribing.1 24 25 We also examined how the antibiotic prescribing rate changed over time according to electronic antibiotic prescribing and according to the visit note.
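For illustration, the accuracy measures named above can be computed from a 2×2 table of electronic data against chart review. The helper below is a generic sketch with made-up counts, not the authors' code or data.

```python
def accuracy_measures(tp, fp, fn, tn):
    """Accuracy of an electronic record against a gold standard, from a
    2x2 table: tp/fp/fn/tn are true positives, false positives, false
    negatives, and true negatives. Sensitivity measures completeness and
    positive predictive value measures correctness, as in the text."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "agreement": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical example: of 200 visits, chart review finds 100 antibiotic
# prescriptions; the electronic record captures 80 of them and records
# 10 prescriptions the chart does not support.
m = accuracy_measures(tp=80, fp=10, fn=20, tn=90)
```

In this made-up example, sensitivity is 0.80, specificity 0.90, positive predictive value about 0.89, and simple agreement 0.85.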

Statistical Analysis

For sample characteristics, we used standard descriptive statistics. Because we hypothesized increasing adoption of electronic antibiotic prescribing, we used the linear trend test to assess changes in electronic antibiotic prescribing over time. To examine interclinician and interclinic effects for statistically significant trends over time, we evaluated models that adjusted for clustering by clinic and by provider using generalized estimating equations.26 We performed all statistical analyses using SAS version 8.02 (SAS Institute, Cary, NC). P values less than 0.05 were considered significant. The Institutional Review Board of Brigham and Women's Hospital approved the study protocol.
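One common form of a test for linear trend in proportions is the Cochran-Armitage test. The Methods do not specify the exact procedure used, so the sketch below is illustrative only, with made-up counts.

```python
import math

def trend_test(events, totals, scores=None):
    """Two-sided Cochran-Armitage test for linear trend in proportions.

    events[i] successes out of totals[i] trials at score scores[i]
    (default scores 0, 1, 2, ... for consecutive years).
    Returns (z, p): the trend statistic and its normal-approximation p-value.
    """
    if scores is None:
        scores = list(range(len(events)))
    n_total = sum(totals)
    pbar = sum(events) / n_total
    sx = sum(n * x for n, x in zip(totals, scores))
    sxx = sum(n * x * x for n, x in zip(totals, scores))
    t = sum(r * x for r, x in zip(events, scores)) - pbar * sx
    var = pbar * (1 - pbar) * (sxx - sx * sx / n_total)
    z = t / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability
    return z, p

# Hypothetical counts: electronic prescribing rising from 22% to 58%
# across four years of 100 visits each.
z, p = trend_test([22, 35, 45, 58], [100, 100, 100, 100])
```

With a rise of this magnitude, the trend statistic is large and the p-value falls well below 0.0001, consistent in spirit with the trend results reported below.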

Results

Sample Derivation and Characteristics

During the study period, we identified 65,285 visits with a primary diagnosis of an ARI or UTI. From these, we randomly selected 1,000 visits, stratified by calendar year and ARI or UTI diagnosis (Fig. 1). We excluded visits for which the LMR was not in use early in the study period (n = 102) and duplicate encounters (n = 3). We were unable to locate visit notes for 68 encounters and excluded these encounters from the analysis. Women (81%) with a diagnosis of UTI (78%) accounted for most of these missing encounters. This left 827 visits in the final sample for analysis.

Figure 1

Visit flow. ARI = acute respiratory infection; UTI = urinary tract infection; EHR = electronic health record. The random selection was stratified by calendar year and by acute respiratory infection visit or urinary tract infection visit. For encounters for which there were no corresponding notes, 55 (81%) were by women and 53 (78%) were for urinary tract infection.

The sample of 827 visits was 79% women and 7% patients younger than 18 years, with a mean patient age of 37 years (Table 1). The race/ethnicity of the sample was 43% white, 29% Hispanic, 10% black, and 18% other. Eighty percent of the sample patients spoke English and 16% spoke Spanish as their primary language. Most patients had insurance through a health maintenance organization, private insurance, or Medicaid or the Massachusetts Uncompensated Care Pool System ("free care"). Reflecting the stratified nature of the sampling, 51% of the visits had a primary diagnosis of an ARI and 49% had a primary diagnosis of UTI.

Table 1

Demographic Characteristics (N = 827)

Validity of Electronic Diagnoses

The sensitivity of ARI billing diagnoses ranged from 65% for nonspecific URI to 93% for pharyngitis (Table 2). The specificity of ARI diagnoses ranged from 96% for pharyngitis to 100% for pneumonia. The positive predictive value for ARI diagnoses ranged from 20% for influenza to 87% for otitis media and pneumonia. As a group, the ARI diagnoses had a sensitivity of 98%, specificity of 96%, and positive predictive value of 96%. An electronic diagnosis of UTI had a sensitivity of 100%, specificity of 87%, and positive predictive value of 85%.

Table 2

Diagnostic Accuracy of Encounter Data Compared to the Visit Note for Acute Respiratory Infection and Urinary Tract Infection Visits (N = 827)

Electronic Antibiotic Prescribing

According to the note, clinicians prescribed antibiotics in 59% of all visits. Clinicians prescribed antibiotics in 45% of ARI visits and 73% of UTI visits. Compared to this gold standard, electronic antibiotic prescribing had a sensitivity of 43%, specificity of 93%, and a positive predictive value of 90% (Table 3). Simple agreement between electronic antibiotic prescribing and antibiotic prescribing according to the visit note was 64%. Among the nine clinics, the number of visits varied from 16 to 210 and the agreement between electronic antibiotic prescribing and antibiotic prescribing according to visit note varied significantly, ranging from 25% to 79% (p < 0.0001).

Table 3

Antibiotic Prescribing for Acute Respiratory Infections and Urinary Tract Infections According to Electronic Antibiotic Prescribing and the Visit Note*

The sensitivity of electronic antibiotic prescribing increased from 22% in 2000 to 58% in 2003 (p < 0.0001; Fig. 2). The simple agreement between electronic antibiotic prescribing and the visit notes increased from 51% in 2000 to 73% in 2003. Adjustment for clustering by clinic or by clinician did not change these results.

Figure 2

Accuracy over time of electronic antibiotic prescribing compared to the visit note (N = 827). For agreement, p for trend over time < 0.0001. With adjustment for clustering by physician, p = 0.001. With adjustment for clustering by clinic, p = 0.002.

For ARI visits, according to the visit note, there was no significant change in antibiotic prescribing over time (p = 0.99; Fig. 3). Electronic antibiotic prescribing increased significantly over time for ARI visits (15% in 2000 to 25% in 2003; p = 0.03), but this became nonsignificant after adjusting for clustering by either clinic or clinician. The sensitivity of electronic antibiotic prescribing for ARIs increased from 26% in 2000 to 54% in 2003.

Figure 3

Antibiotic prescribing for acute respiratory infections in primary care over time according to the visit note or to electronic prescribing (n = 421). For visit note, p for trend = 0.99. For electronic prescribing, p for trend = 0.03 (adjusted for clustering by physician, p = 0.23; adjusted for clustering by clinic, p = 0.18).

For UTI visits, according to the visit notes, there was no significant change in antibiotic prescribing over time (p = 0.58; Fig. 4). Electronic antibiotic prescribing increased significantly over time for UTI visits (from 16% in 2000 to 48% in 2003; p < 0.0001). Adjustment for clustering by clinic or clinician did not change the significance of these results. The sensitivity of electronic antibiotic prescribing for UTIs increased from 20% in 2000 to 60% in 2003.

Figure 4

Antibiotic prescribing for urinary tract infections in primary care over time according to the visit note or to electronic prescribing (n = 406). For visit note, p for trend = 0.58. For electronic prescribing, p for trend < 0.0001 (adjusted for clustering by physician, p < 0.0001; adjusted for clustering by clinic, p = 0.0004).

Discussion

Clinical decision-support systems, to be effective, must have access to accurate diagnostic and prescribing information. We found that electronic, claims-derived diagnosis codes have good accuracy for identifying ARI and UTI visits in our practice-based research network. ARIs as a group would be expected to have better accuracy than individual ARI diagnoses: a wider diagnostic definition leads to increased sensitivity and positive predictive value and decreased specificity. These findings compare well to other studies that examined the accuracy of electronic diagnoses, mostly chronic, that had sensitivities from 40% to 100%, specificities from 91% to 100%, and positive predictive values from 85% to 100%.2 23 25 27 28 29

In contrast, we identified poor accuracy in electronic antibiotic prescribing for ARIs and UTIs. The specificity (93%) and positive predictive value (90%) of electronic antibiotic prescribing were good. However, the sensitivity was low (43% overall), reflecting a gap between note-documented antibiotic prescribing and electronic antibiotic prescribing. Although there was substantial improvement over time—perhaps because of increased clinician familiarity, improvements in the prescribing module, or encouragement from leadership—at the end of the study period, the sensitivity of electronic antibiotic prescribing was still only 58%.

Hogan and Wagner1 performed a systematic review of the accuracy of data in electronic health records and found that medications had a sensitivity (completeness) of between 93% and 100% and a positive predictive value (correctness) of 83%. Thiru et al.2 found that electronic prescribing information in primary care had a sensitivity of between 93% and 100% and a positive predictive value of 100%. A more recent study of the accuracy of medication lists for older Veterans Affairs patients found a sensitivity of 75% and a positive predictive value of 87%.30 These studies generally examined chronic medications and medication lists that were "descriptive," whereas our system is also "prescriptive." Future work should examine whether there are differences in the use of electronic prescribing between acute and chronic medications within a single network or health system.

Understanding the accuracy of electronic data is important for patient care.31 32 The low sensitivity of electronic antibiotic prescribing represents a safety problem.20 Electronic antibiotic prescribing avoids errors associated with handwritten prescriptions and provides medication interaction checking, allergy checking, medical problem interaction checking, laboratory checking, and prospective monitoring for potential adverse drug events.33 34 35 The failure of clinicians to use electronic prescribing also limits the potential benefits of clinical decision support; clinicians are not using a key “effector arm” of clinical decision support.36 For example, clinicians not using electronic prescribing will not interact with clinical decision support that recommends penicillin as the antibiotic of choice for group A β-hemolytic streptococcal pharyngitis or that recommends not prescribing antibiotics for acute bronchitis.37 38

Understanding the accuracy of electronic data is also important for quality improvement and research purposes.31 39 The apparent antibiotic prescribing rate differs depending on whether one examines chart-documented antibiotic prescribing or electronic antibiotic prescribing. In an analysis of the appropriateness of antibiotic prescribing for ARIs and UTIs, electronic antibiotic prescribing would presumably appear “better,” with a lower antibiotic prescribing rate. However, this is simply an artifact of the quality of data. Similarly, for intervention studies that seek to reduce antibiotic prescribing for ARIs and UTIs, the more easily accessible electronic antibiotic prescribing rate could be misleading. The baseline rate would be too low and the effectiveness of the intervention would be blunted by an artifactual “floor effect.”

This study has limitations that should be considered. First, the study was performed on a sample of visits with a primary billing diagnosis of an ARI or UTI, which constrains our ability to comment on the sensitivity and specificity of the diagnoses. Use of a data set with broader inclusion criteria would likely decrease specificity but presumably increase sensitivity. In addition, the claims diagnoses that we evaluated are used primarily for administrative, not clinical, purposes. However, claims diagnoses are electronically available and frequently used for quality improvement, profiling, and clinical operations. We are presently implementing an "end-of-visit" system in which the clinician enters the diagnostic code prospectively within the LMR.

Second, the visit notes are an imperfect gold standard. Even when the physician documented an antibiotic prescription in the visit note, we did not validate that the physician actually wrote or called in the prescription, that the patient had it filled at a pharmacy, or that the patient took the antibiotic. Although the visit note is only a proxy for "the truth," it is an accessible, economical measure of what the clinician is trying to do. Similarly, we did not assess whether the documentation supported the diagnosis. Third, we were unable to locate the visit note for some of our randomly selected visits. Most of the missing visits were for women with a diagnosis of UTI and may represent women who came to the clinic only to have a urinalysis performed. Considering that failure of documentation is a marker of poor quality, exclusion of these visits would probably bias the results toward higher sensitivity. Fourth, including electronic antibiotic prescriptions written within 30 days of the index visit likely inflated the sensitivity and decreased the specificity of electronic antibiotic prescribing. Despite these last two limitations, we still found a disturbingly low sensitivity for electronic antibiotic prescribing.

Finally, this analysis was performed in an urban and suburban PBRN with a single electronic health record and focused only on the prescribing of one class of medication for two acute conditions. Although these results may not generalize to other settings, electronic health records, conditions, and medications, they demonstrate that clinicians, researchers, and clinical leaders need to understand the accuracy of electronic information that they are using.

Conclusion

For clinical decision-support systems to be effective in improving care for patients with acute problems, systems need to have access to accurate diagnoses and engage clinicians at the time of prescribing. We found that claims-derived, electronic diagnoses for ARIs and UTIs appear accurate. While the specificity of electronic antibiotic prescribing was good, the sensitivity was poor. Although there is a trend toward improvement, a large gap persists between antibiotic prescribing documented in the visit notes and the use of electronic antibiotic prescribing. Barriers to electronic antibiotic prescribing in primary care must be addressed to leverage the potential that computerized decision-support systems offer in reducing costs, improving quality, and improving patient safety.

Footnotes

  • This project was supported by grant number R03 HS014420 from the Agency for Healthcare Research and Quality. Dr. Linder is supported by a Career Development Award (K08 HS014563) from the Agency for Healthcare Research and Quality.

  • We thank Joseph C. Chan for his editorial assistance in preparing the manuscript.

References
