Educational Instruction on a Hospital Information System for Medical Students During Their Surgical Rotations
- Correspondence and reprint requests: Robert Patterson, MD, Box 336, Squamish, BC, Canada V0N 3G0; e-mail: 〈 〉
- Received 30 June 2000
- Accepted 20 November 2000
Objective To evaluate the benefit, for medical students on their surgical rotations, of real-time educational instruction during order entry on a hospital information system.
Design Prospective controlled trial.
Intervention Access to educational information during computerized order entry.
Subjects Medical students in their final year at the University of Calgary.
Main outcomes Attainment of the surgery rotation educational objectives, as measured by performance on a multiple-choice examination.
Methods Before they began their surgical rotations, students at two hospitals took a multiple-choice examination to measure their knowledge of surgery. One hospital had an information system with computerized order entry; students at this hospital had access, while composing orders, to educational material on the system. The other hospital did not have an information system; students there wrote orders on a paper chart. At the end of the rotation, all students took another multiple-choice examination.
Results Of 50 eligible students, 45 agreed to participate in the project, 21 in the treatment group and 24 in the control group. Pre-rotation scores were similar for the two groups (43 percent in the treatment group and 40 percent in the control group; SD, 10 percent). Post-rotation scores were identical for the two groups (65 percent in the treatment group and 65 percent in the control group; SD, 12 percent). A t-test analysis revealed no significant difference in performance on the examinations between the two groups.
Conclusion This study did not demonstrate a learning advantage for medical students who have access to educational material on a hospital information system.
Over the past decade, many hospitals have acquired computerized patient care systems. An important part of the patient record has always been the physician's orders. An electronic record permits automated physician order entry, which eliminates transcription errors, reduces patients' charges, and shortens length of stay.1
Despite the benefits of the electronic patient record, physicians have been reluctant converts. With few exceptions, most physicians still record their orders using pen and paper. Reasons for their lack of enthusiasm for computerized order entry have been discussed in the literature and are both cultural and technical.2 3 4 5 In other words, physicians are averse to changing their method of practice, and the current state of patient care systems, which are often slow and unwieldy, offers little to entice them.
In their comprehensive review of computer-based physician order entry, Sittig and Stead6 cite the rationale for its use, including process improvement, support for economical decision making, support for clinical decision making, and better use of physicians' time. They note that the chief barrier to physician order entry is the natural resistance of people to change, including changes in established patterns of practice, roles of the care team, teaching routines, and institutional policies. The key ingredients for successful implementation of a physician order entry system include an intuitive and consistent system with quick response times, organized and committed leadership, and regular evaluation and problem solving.
Of the various “carrots” used to attract physicians to computerized order entry, none may be more alluring than readily available decision support and expert system capabilities. Not only do such systems offer online “advice,” but they are also associated with improved patient outcomes.7 8 With these enhancements, information systems will become sophisticated physician tools.
Medical students have been left behind in the rush to develop physician support. Expert systems presuppose that the user is a practicing physician; a certain knowledge base is assumed, and information exchanges tend to occur on an advanced professional level that may be inappropriate for novice physicians.
After three years of mostly classroom work, medical students spend their final year of medical school in a clinical hospital setting. Although hundreds of software programs have been designed to teach students the fundamentals of medicine, these stand-alone programs are found on PCs in the medical school or in students' homes and are not part of the hospital information system. Paradoxically, then, the student has no access to educational software when he or she signs on to the hospital's computerized patient care system. A logical innovation would allow medical students to access, on the hospital information system, information that is appropriate to their level of knowledge.
A review of the medical literature revealed many calls for making informatics part of the undergraduate curriculum in medical schools, along with descriptions of such efforts.9 10 Integration of informatics, medical education, and hospital information systems is, unfortunately, rare. An integrated system is sometimes referred to as IAIMS (Integrated Advanced Information Management Systems). Richard West was the IAIMS program officer at the National Library of Medicine for 13 years. In a 1995 interview,11 he outlined the scope of IAIMS and bemoaned its lack of penetration into academic centers.
IAIMS has always been a comprehensive information program. We wanted education, administration, research, and patient care applications.… I've been surprised that the IAIMS program hasn't been developed for general academic programs.… [We] could create a community of developers and make people think about issues they have sort of ignored to make computer-assisted instruction and educational applications work.
At the initiation of this project, in 1993, we were unaware of any work that integrated medical education with order entry on a hospital information system and demonstrated an educational benefit.
Does the presence of readily available educational material on a hospital's computerized patient care system augment student learning? The null hypothesis states that there will be no measurable difference in examination performance between students who have online access to educational resources during computerized order entry and students in a control group.
The Foothills Hospital in Calgary, Alberta, is a large Canadian tertiary-care university hospital with approximately 900 beds. In 1988, the institution purchased and installed the Technicon Data Systems HC4000 patient care system; implementation was largely completed by 1993. From the outset, administrative policy required physicians to enter their orders directly into the system. House staff (residents) enter 70 percent of all orders. The process of order entry has been expedited by the development of personal order sets, which provide pre-composed groups of commonly used orders.
During their last year of medical school, students rotate through various disciplines in a clinical role and are in continual contact with patients. The surgical rotation poses unique challenges to the student as he or she struggles to learn the various protocols used in surgery and the rationale for care of surgical patients. For example, students are often baffled by the subtleties of intravenous solutions or antibiotic therapy, when many choices are possible and more than one may be correct. To assist students in this daunting task, order sets and related educational materials were placed on the hospital information system. Access to information at the point of patient contact provided “learning-in-context,” a powerful educational tool.12
Learning objectives for the medical students were reviewed, and detailed explanations were developed to explain the rationale behind perioperative patient care. All information was reviewed by three surgeons involved in the education of the medical students. The next step was to place these educational materials on the hospital information system and link them to order sets designed specifically for medical students.
Fifty order sets containing a total of 244 individual orders were organized into four basic nodes—Preoperative Care, Postoperative Care, Ward Problems, and Miscellaneous. Under the Preoperative Care heading are order sets on patient management prior to the operation, such as appropriate diet, investigations, antibiotics, deep vein thrombosis prophylaxis, and bowel preparation. Postoperative order sets include analgesics, antiemetics, and nasogastric tubes. Ward Problems address complications such as fevers, wound infections, and fluid and electrolyte disturbances. The Miscellaneous section deals with issues like tetanus prophylaxis, post-splenectomy immunization, and patients with special needs, such as alcoholics.
The order sets were combined with educational information that explains basic surgical concepts. For example, hypertext links under Preoperative Antibiotics take the user to discussions of different types of surgical wounds, the spectrum of coverage of various agents, and prophylactic vs. therapeutic use of antibiotics.
The steps a student uses in selecting orders and reviewing educational content are illustrated in Figures 1 to 3. All orders entered by a medical student must be reviewed and approved by a resident or staff physician.
To determine how often the students accessed the educational materials, certain screens were tagged, and the computer recorded the number of hits. Written and verbal feedback were also solicited from the students at the end of their surgical rotations.
Measuring the Educational Effect
To test the hypothesis, the students at Foothills Hospital were compared with a control group at a second hospital, where there was no computerized patient care system. Students were free to choose either hospital for their surgical rotation; therefore, assignment of students into the treatment and control groups was not randomized.
Before beginning their surgical rotations, students at the two hospitals took a 50-question multiple-choice examination that measured their initial knowledge of surgery. The questions were developed by a senior surgical resident and were based on the written learning objectives for medical students during their surgical rotations. The examination content was reviewed by several staff surgeons, and question formats were reviewed and revised by a medical educator.
Students in the Foothills Hospital group were then oriented to computerized order entry and to the educational materials. During their surgical rotation, they entered all their patient orders through the computer, where they had the option of selecting hypertext links to the educational materials. The other hospital had paper records, and students manually wrote orders without any access to online educational materials. After two months, students in both groups took another 50-question multiple-choice examination to assess their knowledge of surgery at the end of their rotations.
Of 50 eligible students, 45 agreed to participate in the project, 21 in the treatment group at Foothills Hospital and 24 in the control group at another hospital. At the Foothills Hospital, the tagged educational screens were accessed an average of 20 times per student per week. Written and verbal feedback from Foothills students included comments such as “very useful,” “great,” and “couldn't get along without it.”
Pre-rotation scores were similar for the two groups (43 percent in the treatment group and 40 percent in the control group; SD, 10 percent). Post-rotation scores were identical for the two groups (65 percent in the treatment group and 65 percent in the control group; SD, 12 percent). A t-test analysis revealed no significant difference in performance on the examinations between the two groups.
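The paper does not give the details of the t-test; assuming a pooled two-sample test on the reported summary statistics (means, standard deviations, and group sizes), the result can be reproduced as a minimal sketch. The function name and its arguments are illustrative, not from the study.

```python
import math

def two_sample_t(m1, sd1, n1, m2, sd2, n2):
    """Pooled two-sample t statistic computed from summary statistics."""
    df = n1 + n2 - 2
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, df

# Post-rotation scores: 65 percent in both groups (SD 12), n = 21 and 24
t, df = two_sample_t(65, 12, 21, 65, 12, 24)
print(t, df)  # t = 0.0 on 43 degrees of freedom
```

With identical group means the statistic is exactly zero, so no choice of significance level could reject the null hypothesis; the two-sided critical value for 43 degrees of freedom at alpha = 0.05 is roughly 2.02.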
This study failed to demonstrate a learning advantage provided by readily accessible educational materials on a hospital's information system. Although it seems obvious that ready access to learning resources during times of patient contact would improve students' knowledge base, this apparent advantage was not reflected in their performance on examinations. Despite the investigators' considerable investment of time and energy, this is a negative study.
An immediate concern is the statistical validity of the study; specifically, were there enough subjects, and what was the likelihood of a type II error? For sample size analysis, the following assumptions were made: expected effect size, 10 percent; expected standard deviation of scores for both groups combined, 10 percent; significance level (alpha), 0.05; power (1 − beta), 0.80; and proportion of participants in each group, 0.50. Using these figures, the minimum number of subjects needed for the study would be 32 (16 in each group). The actual study had 45 participants, so sample size was adequate. With each group achieving an identical score of 65 percent on the post-rotation examination, the chance of a type II error (accepting the null hypothesis when it is, in fact, false) was small.
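The stated minimum of 16 subjects per group follows from the standard normal-approximation formula for a two-sample comparison of means, n per group = 2((z_alpha/2 + z_beta)·sigma/delta)^2. A minimal sketch, assuming that formula (the function name is illustrative, not from the study):

```python
import math
from statistics import NormalDist

def n_per_group(effect, sd, alpha=0.05, power=0.80):
    """Approximate per-group sample size for detecting a difference in
    means with a two-sample test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    return math.ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# Figures from the study: 10 percent effect size, 10 percent SD
print(n_per_group(effect=10, sd=10))  # 16 per group, 32 in total
```

The output matches the minimum of 32 subjects (16 per group) reported above.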
Another potential flaw in the study design was the lack of randomization of the subjects. Perhaps the students in the control group who chose the hospital without an information system were brighter or more independent learners. Also, prior computer experience was not measured in either group and may have been a source of bias.
Another concern was the reliability and validity of the instrumentation—in this study, the multiple-choice examinations. The two examinations used to measure the students' knowledge of surgery were developed a year prior to the study and tested on a different group of final-year medical students. These students had scores similar to those of the students participating in the study; the few questions that were determined to be too easy or too difficult were discarded to provide a more reliable examination.
Despite these precautions, the multiple-choice examinations may not have been an appropriate method of determining differences in knowledge. Book learning and examination performance do not always correlate with achievement in real-life situations, and the students in the treatment group may have developed into superior clinicians. However, this skill may not have been captured in the examination format used to assess student knowledge.
Perhaps the primary learning environment at the Foothills Hospital, even when supplemented by computer information, was inadequate to enhance learning over that of the control group. For example, it has been suggested that educators consider four critical tasks to enhance learning.13 To be effective, educators must first select situations that will engage the learner in complex, realistic, problem-centered activities that will support the desired knowledge to be acquired and applied. Perhaps the students did not have an opportunity to actually apply the newly acquired information, thus reducing its utility.
Second, educators must provide a scaffold for learners; that is, the educator must know the type and intensity of guidance necessary to help the learner master the knowledge and skills. Perhaps follow-up and challenge to the newly acquired knowledge were inadequate.
Third, as the computer was the main vehicle for the dissemination of this information to the treatment group, the educators needed to recast their roles from content transmitters to facilitators of learning by tracking progress, assessing students, providing appropriate challenges, and encouraging reflection. The last task is to model the appropriate behaviors and provide adequate support for the intellectual growth of the learner. Students should have an opportunity to observe how instructors solve problems and use the information in the appropriate treatment and care of their patients. In summary, the tools for enhancing student learning include discussion, reflection, evaluation, and validation from a medical learning perspective. Any one or all of these steps could have been insufficient at the Foothills Hospital to foster the desired growth.
As a comforting afterthought, we speculate that, although performance on the end-of-rotation examination was the same for the two groups, the online educational materials may have facilitated learning. The Foothills group may have been able to accumulate the same knowledge as the control group with the expenditure of far less time or energy. Students expressed a great deal of enthusiasm during the project, and the screens were left in place after completion of the study. Five years later, the same order sets and educational materials are still used by new groups of medical students.
Most hospital computers have humble roots in financial and accounting systems. Only relatively recently have clinical applications been developed to aid physicians in the care of patients. The next step in the evolution of the hospital information system will be to utilize the system for education of members of the health care team. The appeal of placing instructional materials on the hospital's computer system seems logical—clinicians can hone their craft and educate themselves with minimal effort during the execution of their daily duties.
Three trends in computing are converging. Personal computers and larger systems continue to offer improved performance at less cost, health organizations are constructing huge databases to serve as rich repositories of patient information, and medical education is becoming increasingly digitized. Medical educators, who have the responsibility for training the next generation of physicians, must recognize the changing role of the hospital information system as it emerges from its historical financial roots and matures into sophisticated clinical applications. The ready availability of such powerful information systems in our hospitals invites the creative educator to exploit their full educational potential. The failure of this study should not deter educators and informaticians from pursuing the inevitable merger of educational software and patient care systems.