Quantifying the impact of health IT implementations on clinical workflow: a new methodological perspective
- 1School of Public Health Department of Health Management and Policy & School of Information, The University of Michigan, Ann Arbor, Michigan, USA
- 2Center for Computational Medicine and Bioinformatics, The University of Michigan, Ann Arbor, Michigan, USA
- 3Department of Internal Medicine & Department of Pediatrics, The University of Michigan, Ann Arbor, Michigan, USA
- 4Department of Surgery, The University of Michigan, Ann Arbor, Michigan, USA
- 5Department of Anesthesiology, The University of Michigan, Ann Arbor, Michigan, USA
- 6Department of Pediatrics, The University of Michigan, Ann Arbor, Michigan, USA
- Correspondence to Kai Zheng, Information Systems & Health Informatics, School of Public Health, School of Information, The University of Michigan, M3531 SPH II, 109 Observatory St, Ann Arbor, MI 48109-2029, USA;
- Received 30 June 2009
- Accepted 30 April 2010
Health IT implementations often introduce radical changes to clinical work processes and workflow. Prior research investigating this effect has shown conflicting results: recent time and motion studies have consistently found that the impact is negligible, whereas qualitative studies have repeatedly revealed negative end-user perceptions suggesting decreased efficiency and disrupted workflow.
We speculate that this discrepancy may be due in part to the design of the time and motion studies, which is focused on measuring clinicians' ‘time expenditures’ among different clinical activities rather than inspecting clinical ‘workflow’ from the true ‘flow of the work’ perspective. In this paper, we present a set of new analytical methods consisting of workflow fragmentation assessments, pattern recognition, and data visualization, which are accordingly designed to uncover hidden regularities embedded in the flow of the work. Through an empirical study, we demonstrate the potential value of these new methods in enriching workflow analysis in clinical settings.
Adoption of health IT (HIT) applications, such as electronic health records (EHR) and computerized provider order entry (CPOE), often introduces radical changes to clinical work processes and workflow.1 These changes could have an undesirable impact on user satisfaction, time efficiency, quality of care, and patient safety.1 2 Qualitative studies investigating HIT-related ‘unintended consequences’ have amply demonstrated that disruption to established work processes and workflow introduced by HIT adoption is a principal cause of these suboptimal or adverse outcomes.3–5
Time and motion (T&M) is a commonly used approach for quantifying workflow to assess the potential impact associated with HIT. Recent T&M studies have consistently shown that this impact is either non-significant or only marginal, suggesting that HIT implementations do not adversely affect clinicians' time utilization or clinical workflow.6–10
This suggests a paradox: why have qualitative studies repeatedly reported negative end-user perceptions with suggestions of decreased efficiency and disrupted workflow whereas quantitative studies have consistently shown that the impact is negligible? We speculate that this discordance may be due in part to the oversimplified workflow quantifier used in the previous T&M studies: ‘average aggregated clinician time.’ While this ‘time expenditures’ measure can generate valuable insights into whether HIT adoption may cause a redistribution of clinician time spent in various clinical activities (eg, direct patient care vs documentation or ordering), it is not capable of uncovering the temporal dynamics embedded in workflow. In other words, this measure is useful for studying clinicians' ‘time utilization’ but not the ‘flow of the work.’
As defined in the workflow literature, a workflow process refers to ‘a predefined set of work steps, and partial ordering of these steps,’11 and workflow refers to ‘systems that help organizations to specify, execute, monitor, and coordinate flow of the work cases within a distributed office environment.’11 Inspired by this view, we developed a set of new analytical methods to quantify HIT's impact on workflow from the true ‘flow of the work’ perspective, through the lens of the sequential ordering among different clinical tasks.
Despite the great potential,12–19 deployment of HIT applications such as EHR and CPOE may not always lead to desirable outcomes.20–25 Further, a significant body of literature has shown that adoption of HIT is often associated with unintended adverse consequences (UACs), which could result in diminished quality of care and escalated risks to patient safety.3 4 26 27 Among the UACs reported to date, ‘more/new work’ and ‘unfavorable workflow change’ are most common and most disruptive.3–5 They are generally attributable to problematic human-machine interfaces,27 overly simplistic workflow models,1 and other unfavorable implementation characteristics, such as inconvenient locations of computer workstations,26 disrupted power structures among clinicians,28 and unexpected changes introduced to the patterns of team coordination.29
While these UAC studies have produced detailed user accounts regarding encountered problems and suspected causes, most of them are qualitative investigations soliciting end-users' self-reported perceptions of how HIT adoption may have influenced their work. Therefore, they are not adequate to quantify the impact to assess its magnitude and prevalence. Time and motion, which collects scrupulous details of how clinicians spend their time performing each of the clinical tasks (what, when, for how long), is a useful approach for quantifying workflow to inspect for pre-post nuances. As compared to other quantitative workflow assessment methods, such as work sampling and time efficiency questionnaires, T&M has been shown to yield the most accurate results.30–32
We reviewed prior T&M studies that have investigated the workflow impact introduced by implementing HIT applications such as EHR, CPOE, and ambulatory care e-prescribing modules. With a few exceptions, the results of this stream of work suggest a clear time divide: studies published before 2001 have generally reported that HIT implementations were associated with an increase in clinician time,33–36 whereas studies published after 2001 have consistently shown that HIT implementations do not adversely affect clinicians' time utilization in significant ways.6–10 For example, ‘little extra time, if any, was required for physicians to use (the POE system);’6 ‘(the EHR system) does not require more time than a paper-based system during a primary care session;’8 ‘(implementation of ambulatory e-prescribing) was not associated with an increase in combined computer and writing time;’9 and ‘following EHR implementation, the average adjusted total time spent per patient across all specialties increased slightly but not significantly.’10
The discordance between the qualitative findings and the quantitative results is thus evident. This discordance has, in fact, already been noted in some of the T&M studies. For example, Pizziferri et al8 surveyed physicians at the practice where their T&M data were collected and found that a majority of respondents (71%) reported a perception of increased time spent on patient documentation, even though the T&M data showed no significant differences.
What may account for this discordance? We speculate that the previous T&M studies have been overly focused on evaluating whether HIT adoption may affect how clinicians allocate their time among different clinical activities. This ‘time expenditures’ focus neglects that HIT's impact on clinical workflow may also originate from the changed sequence of task execution—that is, disruption to the flow of the work. In this paper, we present a set of new analytical methods consisting of workflow fragmentation assessments, pattern recognition, and data visualization, which are accordingly designed to uncover hidden regularities embedded in clinical workflow.
Workflow fragmentation assessments
First, we propose a new workflow quantifier, average continuous time (ACT), to assess the magnitude of workflow fragmentation. Average continuous time is herein defined as the average amount of time continuously spent performing a single clinical activity (or similar activities belonging to a single category or theme; see the study design and empirical setting section for category and theme definitions). Workflow fragmentation, also referred to as frequency of task switching, is defined as the rate at which clinicians switch between tasks: the shorter the continuous time spent on a single task, the higher the frequency of task switching. Note that ‘activity’ and ‘task’ are used interchangeably in this paper unless otherwise specified.
This new quantifier, and magnitude of workflow fragmentation it measures, is potentially important in several ways. First, it has been shown in the cognition literature that frequent task switching is often associated with increased extra mental burden on the performer (eg, task prioritizing and task activation).37–40 Second, in a clinical setting, frequent task switching may cause increased amounts of physical activities (eg, locating a nearby computer workstation) and, hence, more frequent interruptions. Third, switching between tasks that are of distinct natures could result in a higher likelihood of cognitive slips and mistakes; for example, the loss-of-activation error manifesting as forgetting what the preceding task was about in a task execution sequence.41 42 Therefore, using this new quantifier to study workflow disruption may provide insights as to why HIT users may perceive decreased efficiency and disrupted workflow even though the total amount of clinical time and its distribution among different tasks are not significantly affected.
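As an illustration, the ACT computation can be sketched in a few lines of Python. This is not the authors' CWAT implementation; the function name and data layout are hypothetical: a session is a chronological list of (category, duration-in-seconds) records, and consecutive records sharing a category are merged into one continuous episode before averaging.

```python
from statistics import mean

def average_continuous_time(events):
    """Compute overall ACT and the number of task switches for one session.

    `events` is a chronological list of (category, duration_seconds) records.
    Consecutive records with the same category are merged first, so ACT
    reflects uninterrupted time on a task rather than raw record length.
    """
    episodes = []
    for category, duration in events:
        if episodes and episodes[-1][0] == category:
            # Same task continues: extend the current episode.
            episodes[-1][1] += duration
        else:
            episodes.append([category, duration])
    overall_act = mean(duration for _, duration in episodes)
    switches = len(episodes) - 1  # each episode boundary is one task switch
    return overall_act, switches

# Hypothetical session: two computer-read records merge into one episode.
events = [("talking/rounding", 120), ("computer-read", 30),
          ("computer-read", 25), ("talking/rounding", 80)]
act, switches = average_continuous_time(events)
# act == 85 (mean of 120 s, 55 s, 80 s episodes); switches == 2
```

A lower ACT (or, equivalently, more switches per hour of observation) indicates a more fragmented workflow, which is the pre-post comparison the paper performs.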
Recognition of workflow patterns
We define workflow patterns as hidden regularities embedded in the sequential order of a series of clinical task execution. Workflow patterns are collectively determined by multiple factors, such as individual physicians' practice styles, regulatory requirements, team coordination needs, and even the physical layout of a medical facility. As a result, workflow patterns are sensitive to any new changes introduced into the environment such as adoption of novel HIT systems.
To uncover workflow patterns from time-stamped T&M data, we use two pattern recognition techniques: consecutive sequential pattern analysis (CSPA) and transition probability analysis (TPA). The CSPA searches for workflow segments that reoccur frequently both within and across observations—referred to as consecutive sequential patterns.43 Each consecutive sequential pattern is composed of a sequence of clinical activities carried out one after another in a given sequential order. Further, we define the support for a consecutive sequential pattern as its hourly occurrence rate. For example, if the sequence ‘talking/rounding’ → ‘paper—writing’ → ‘talking/rounding’ appears twice per hour in workflow data on average, we note that this is a plausible pattern with a support of 2.
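A minimal CSPA sketch follows; the function and parameter names are hypothetical rather than taken from the paper's tooling. It enumerates every consecutive subsequence up to a maximum length and keeps those whose hourly occurrence rate (support) reaches a threshold:

```python
from collections import Counter

def consecutive_patterns(sessions, hours_observed, min_support=1.0, max_len=3):
    """Count consecutive task subsequences across sessions and return
    {pattern: support} for patterns whose support >= min_support.

    `sessions` is a list of task sequences; support is occurrences per
    observed hour, matching the paper's definition.
    """
    counts = Counter()
    for seq in sessions:
        for n in range(2, max_len + 1):           # pattern lengths 2..max_len
            for i in range(len(seq) - n + 1):     # every consecutive window
                counts[tuple(seq[i:i + n])] += 1
    return {p: c / hours_observed for p, c in counts.items()
            if c / hours_observed >= min_support}

# Hypothetical one-hour session alternating between two tasks.
sessions = [["talk", "paper", "talk", "paper", "talk"]]
patterns = consecutive_patterns(sessions, hours_observed=1.0)
# e.g. the pattern ('talk', 'paper', 'talk') occurs twice -> support 2.0
```

With real T&M data, `sessions` would hold the category sequence of each observation session and `hours_observed` the total observation time in that study phase.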
The TPA, on the other hand, computes the probabilities of transitioning among pairs of tasks. The transition probabilities can be estimated using the maximum-likelihood estimation method based on empirical data. For example, the transition probability of ‘talking/rounding’ → ‘computer—writing’ is calculated as the number of times that this transition is observed in the field, divided by the total number of transitions observed originating from ‘talking/rounding.’ As compared to CSPA, the results of the TPA analysis provide an overall probabilistic view of the sequential relations among different clinical tasks.
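The maximum-likelihood estimate described above is simply a ratio of counts, which can be sketched as follows (hypothetical function name; not the CWAT source):

```python
from collections import Counter

def transition_probabilities(sessions):
    """MLE of first-order transition probabilities from observed sequences:
    P(b | a) = count(a -> b) / total transitions observed out of a."""
    out_counts = Counter()   # transitions originating from each task
    pair_counts = Counter()  # transitions for each ordered task pair
    for seq in sessions:
        for a, b in zip(seq, seq[1:]):
            out_counts[a] += 1
            pair_counts[(a, b)] += 1
    return {(a, b): c / out_counts[a] for (a, b), c in pair_counts.items()}

# Hypothetical session: two transitions leave 'talk', one goes to each target.
sessions = [["talk", "computer-read", "talk", "paper-write"]]
probs = transition_probabilities(sessions)
# probs[("talk", "computer-read")] == 0.5; probs[("computer-read", "talk")] == 1.0
```

For each originating task, the estimated probabilities sum to 1, which is what makes the row-wise reading of the heatmaps in figure 4 possible.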
Note that in this study we did not consider lagged sequential patterns (non-consecutive sequential patterns). Performing a useful lagged sequential pattern analysis requires fine-tuning of multiple parameters; for example, what lag constraints should be set to ensure that the resulting patterns are empirically meaningful (eg, whether a pattern A…B that receives sufficient support should be considered even if A and B occur many steps apart) and whether an observed lagged sequential pattern is simply due to chance (eg, in a randomly generated event sequence of infinite length, any event combination may be recognized as a plausible pattern). Without a priori knowledge of what settings are empirically meaningful, such decisions must be made arbitrarily yet can have a significant impact on pattern recognition results. Higher-order transition probabilities (predicting the likelihood of observing a clinical activity based on multiple preceding events) were not considered in this study for the same reason.
Data visualization and visual analytics provide a means for transforming large quantities of numeric or textual data into graphical formats to facilitate human exploration and hypothesis generation.44 They have been widely applied in many fields, such as the analysis of gene expressions,45 structure of biomedical databases,46 and, increasingly, temporal relations among patient records.47 48
In this study, we use three visualization techniques to turn complex clinical workflow data into more easily comprehensible and more informative visual representations: (1) A ‘timeline belt’ diagram using distinct colors to delineate the sequential execution of a series of clinical tasks (figure 1). In this representation, the ‘flow of the work’ becomes apparent and the magnitude of workflow fragmentation becomes readily observable. (2) A network plot exhibiting the transition frequencies between pairs of tasks (figure 3). In this visualization, the temporal relations among different activities and the pre-post nuances can be easily told. (3) Heatmap visualizations displaying transition probabilities between different tasks using varied density of colors (figure 4). In these heatmaps, higher transition probabilities and significant pre-post differences can be instantly recognized.
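The ‘timeline belt’ idea (technique 1 above) can be approximated with standard plotting libraries; the sketch below uses matplotlib's `broken_barh` and is an illustration of the technique, not the tool the authors built. Session data and color choices are hypothetical.

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display required
import matplotlib.pyplot as plt

def timeline_belt(sessions, colors):
    """Draw one horizontal 'belt' per observation session; each colored
    stripe spans one task episode, its width proportional to duration."""
    fig, ax = plt.subplots(figsize=(8, 0.6 * len(sessions) + 1))
    for row, episodes in enumerate(sessions):
        start = 0.0  # left-align every session, as in the paper's figure 1
        for category, duration in episodes:
            ax.broken_barh([(start, duration)], (row, 0.8),
                           facecolors=colors[category])
            start += duration
    ax.set_yticks([r + 0.4 for r in range(len(sessions))])
    ax.set_yticklabels(["session %d" % (r + 1) for r in range(len(sessions))])
    ax.set_xlabel("elapsed time (seconds)")
    return fig, ax

# Hypothetical sessions: (category, duration_seconds) episodes.
sessions = [[("talk", 120), ("computer-read", 55)],
            [("talk", 90), ("paper-write", 40), ("talk", 60)]]
colors = {"talk": "tab:blue", "computer-read": "tab:orange",
          "paper-write": "tab:green"}
fig, ax = timeline_belt(sessions, colors)
```

Denser color transitions within a belt correspond directly to more frequent task switching, which is what makes fragmentation visually apparent.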
The visualization algorithms, as well as the other methods presented in this paper, were implemented in an analytical tool called Clinical Workflow Analysis Tool (CWAT)—available online at http://sitemaker.umich.edu/workflow/. This tool was programmed using Microsoft ASP.net and C# 2.0 (Redmond, Washington, USA). The statistical procedures are based on Visual Numerics IMSL C# Numerical Library 2.0 (Houston, Texas, USA). Source code is available upon request.
Study design and empirical setting
To demonstrate the purpose and the value of these new analytical methods, we conducted an empirical validation study using the T&M approach to assess the workflow impact introduced by a recent CPOE implementation at our institution. The University of Michigan Institutional Review Board reviewed and approved the research protocol.
The study setting is a 16-bed level-1 pediatric intensive care unit (PICU) at the University of Michigan Health System (UMHS). A commercially sold CPOE system (Sunrise Clinical Manager, Eclipsys, Atlanta, Georgia, USA) was deployed in the unit in July 2007. Eight independent observers shadowed a convenience sample of second- and third-year resident physicians rotating through the study unit two months before and six months after the system implementation. These observers were paid medical students and graduate students enrolled in the Programs and Operations Analysis Department at the University of Michigan Hospitals. They were uniformly trained by the last author (DAH) and each conducted a few hours of training observations in the field before the actual study data collection took place.
The observation sessions started between 7:30 and 8:00 am or between 11:30 am and 12:00 pm; each lasted approximately three to four hours, with morning and afternoon sessions split roughly equally. At the beginning of a session, the observer randomly approached a resident physician who was working in the unit at the time. With the resident's consent, the observer shadowed the subject and recorded T&M data using a portable tablet computer equipped with a standard data acquisition tool. Elements of the data captured included date of observation, tasks performed, starting and ending times, and a unique study code assigned to each study participant.
The data acquisition tool, initially developed by Overhage et al and subsequently refined by Pizziferri et al,6 8 is recommended by the Agency for Healthcare Research and Quality for collecting T&M data in clinical workflow studies.49 In this tool, clinical work is characterized as 60 distinct activities, which are further grouped into 12 categories and 6 themes to allow for analysis at different levels of specificity. The 12 categories include: ‘computer—read,’ ‘computer—writing,’ ‘patient activity,’ ‘paper—read,’ ‘paper—looking for,’ ‘paper—writing,’ ‘other—looking for,’ ‘personal,’ ‘talking/rounding,’ ‘phone,’ ‘waiting,’ and ‘walking/moving’. The six themes are: ‘direct patient care,’ ‘indirect patient care—write,’ ‘indirect patient care—read,’ ‘indirect patient care—other,’ ‘administration,’ and ‘miscellaneous.’ In this study, we modified this classification schema slightly to reflect special data acquisition needs in inpatient settings (provided in Appendix 1 of the online supplementary data).
Results from the empirical validation study
The pre-implementation T&M data contain 67.8 hours of observations of a cohort of two second-year and two third-year resident physicians (all female) over 20 clinical sessions. The post-implementation data, consisting of 86.7 hours of observations over 22 clinical sessions, were collected from another cohort of ten second-year and two third-year residents (7 females and 5 males). The residents participating in the pre-implementation data collection were not part of the post-implementation observations because none of them rotated through the study unit during both study phases. The average patient census in the study unit did not change significantly before and after the implementation (15.3±0.1 pre and 14.6±0.9 post, p=0.13).
The ‘timeline belt’ visualization
Figure 1 depicts the ‘timeline belt’ visualization. Each row (belt) represents an observation session composed of colored stripes designating the execution of clinical activities belonging to different task categories. Length of a colored stripe is proportional to how long the task lasted. Note that several observations were right-truncated to fit the graph for print. Further, all observations are left-aligned regardless of their actual starting time. The online tool provides more alignment options.
In the ‘timeline belt’ visualization, it can be easily observed that the post-CPOE representation contains more densely populated color transitions each corresponding to a task switch. This suggests that the CPOE implementation might have caused an increased level of workflow fragmentation. We further quantified this visual observation using statistical analysis methods and pattern recognition techniques.
Workflow impact measured as ‘time expenditures’
Before performing the workflow fragmentation analysis, we first applied the traditional ‘average aggregated clinician time’ measure to study how the physician participants allocated their time among different clinical tasks before and after the CPOE implementation. Key findings at the activity level are presented in table 1. A full report of the results is provided in Appendix 2A of the online supplementary data.
As table 1 shows, the proportion of physician time spent using computers to read ‘chart, data, labs’ (2.47% pre vs 6.3% post, p<0.001) and to ‘write orders’ (0.03% pre vs 2.69% post, p<0.001) increased significantly after the CPOE implementation. These increases were, not surprisingly, compensated by a nearly 40-fold drop in the proportion of time allocated to ‘paper—writing orders’ (4.21% pre vs 0.11% post, p<0.001) as well as decreases in other paper-based activities (reported in Appendix 2A of the online supplementary data). With computer-based and paper-based ordering activities combined, the proportion of physician time spent writing orders decreased considerably after the CPOE implementation (4.24% pre vs 2.81% post), although this decrease did not reach statistical significance (p=0.115).
Figure 2A shows the results at the category level. Significant pre-post differences were found in four task categories. First, the proportion of physician time spent retrieving data at computer terminals more than doubled (‘computer—read’: 5.33% pre vs 12.87% post, p<0.01). Further, significant decreases were found in the proportion of time allocated to paper documentation activities and to finding paper forms (‘paper—writing’: 5% pre vs 1.89% post, p<0.01; ‘paper—looking for’: 0.32% pre vs 0.02% post, p<0.01). These changes were natural consequences of the transition from a paper-based operation to computerized order entry. The other significantly affected task category was ‘patient activities’: after the CPOE implementation, the physicians were able to spend more time interacting with patients (1.18% pre vs 4.05% post, p<0.05). At the theme level (figure 2B), the only significant change was a more than threefold increase in the proportion of physician time spent in ‘indirect patient care—read’ activities (3.99% pre vs 12.59% post, p<0.001). The proportion of time devoted to ‘direct patient care’ was barely affected (45.17% pre vs 48.01% post, p=0.34).
These findings based on the ‘time expenditures’ analysis are consistent with the results of the recent T&M studies.6–10 Therefore, the conclusions are similar—the CPOE implementation was neither associated with an increase in clinician time spent on writing orders nor did it cause a reduction in clinician time allocated to direct patient care activities.
Results of workflow fragmentation assessments
Overall, the average amount of time continuously spent performing a single task significantly decreased from 163 seconds before the CPOE implementation to 107 seconds after the implementation (p<0.001). Figures 2C, D display the workflow fragmentation analysis results at the category and the theme level, respectively.
Among the 12 task categories, significant decreases in ACT were found in three: ‘computer—read’ (138 seconds pre vs 55 seconds post, p<0.05), ‘personal’ (266 seconds pre vs 158 seconds post, p<0.05), and ‘talking/rounding’ (265 seconds pre vs 194 seconds post, p<0.01). The largest relative decrease occurred in the ‘computer—read’ category, where average task duration dropped by more than 60%. Similarly, significantly shorter continuous durations were observed for tasks in the ‘direct patient care’ theme (222 seconds pre vs 169 seconds post, p=0.001) and in ‘miscellaneous’ (177 seconds pre vs 70 seconds post, p=0.001). These findings confirm the visual observation from the ‘timeline belt’ diagram that the post-CPOE clinical workflow had become more fragmented.
Results of workflow pattern recognition
Table 2 reports the uncovered consecutive sequential patterns that received a support of 1 or above. Five CSPA patterns were identified in the pre-implementation phase of the study and eleven in the post-implementation phase. For example, after the CPOE system was implemented, the hourly rates of observing the transitions ‘talking/rounding’ → ‘walking/moving’ (1.19 pre vs 2.68 post), ‘walking/moving’ → ‘talking/rounding’ (1.24 pre vs 2.39 post), ‘talking/rounding’ → ‘computer—read’ (<1 pre vs 2.41 post), and ‘computer—read’ → ‘talking/rounding’ (<1 pre vs 2.21 post) increased nearly or more than twofold. These prominent transitions were all centered on the ‘talking/rounding’ activity (either from or to), which is the most essential clinical process in inpatient settings.50
We further plotted the bidirectional transition frequencies as two network graphs (figures 3A, B). The purpose was to exhibit the pre-post differences more effectively. In both graphs, the network nodes represent task categories; width of an edge is proportional to the bidirectional transition frequency (hourly occurrence rate) between the two task categories. Further, the network nodes are distributed using a circular layout and the ‘talking/rounding’ activity is placed in the middle because of its central role.
By contrasting figure 3A, B, representing the pre-CPOE and post-CPOE clinical environment, respectively, it can be easily observed that several task transitions had become much more frequent (thicker edges) after the CPOE system was implemented. These include ‘talking/rounding’ ↔ ‘computer—read,’ ‘talking/rounding’ ↔ ‘paper—writing,’ and ‘talking/rounding’ ↔ ‘walking/moving,’ as well as between ‘computer—read’ and ‘computer—writing.’
The network plots shown in figure 3 were produced using GUESS (v0.5-alpha), an open-source graph exploration system (http://graphexploration.cond.org). A full report of the transition frequencies among all pairs of task categories is provided in online Appendix 2C of the online supplementary data.
Next, we performed the TPA analysis to compute the transition probabilities between different pairs of clinical tasks. Figure 4 visualizes the results as three heatmaps (pre-CPOE, post-CPOE, and pre-post comparison). On these heatmaps, varied color density designates the transition probabilities estimated from the empirical data. The probabilities are also reported in each cell; for example, the second cell in the first row of figure 4A reads: before the CPOE implementation, the probability that the task context changed from ‘computer—read’ to ‘computer—writing’, out of all transitions originating from ‘computer—read,’ was 0.2472.
In these heatmap representations, it can be easily observed that the CPOE implementation might have recalibrated the probabilities of transitioning between different task pairs. For example, the likelihood of observing the ‘talking/rounding’ → ‘computer—read’ transition significantly increased from 0.077 pre-implementation to 0.32 post-implementation (p<0.05), compensated by a similar level of decrease in the likelihood of observing the transition of ‘talking/rounding’ → ‘paper—writing’ (0.31 pre vs 0.022 post, p<0.05). Depending on the implementation characteristics (eg, where the CPOE workstations were located, whether mobile computing devices were available), these changes may potentially introduce significant disruption to clinical workflow.
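The paper does not state which statistical test produced the p values for these pre-post probability differences. One straightforward possibility, sketched here purely as an assumption, is a pooled two-proportion z-test on the raw transition counts behind each estimated probability:

```python
from math import sqrt, erf

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided two-proportion z-test: compare pre (k1/n1) and post (k2/n2)
    transition probabilities. Returns (z, p_value) via a normal approximation."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts chosen only to mirror the reported proportions
# (0.077 pre vs 0.32 post for 'talking/rounding' -> 'computer-read').
z, p_value = two_proportion_z(k1=10, n1=130, k2=64, n2=200)
```

With counts of this size the difference is far beyond the 0.05 threshold, consistent with the significant shift the heatmaps display; the actual counts in the study data would of course determine the real p value.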
The limitations of the empirical validation study, such as the small sample size (hence unbalanced pre and post sample characteristics), idiosyncratic features of the CPOE system studied, and unique settings of the PICU site, constrain the power and generalizability of practical inferences that can be drawn. Therefore, the empirical study should only be interpreted as a demonstration of how the new methods presented in this paper may be used in future research to enrich workflow analysis in clinical settings.
The T&M data contain rich time-stamped information that can be used to examine the sequential ordering of distinct tasks in a task execution sequence. Analyzing T&M data from this ‘flow of the work’ perspective makes use of this information to allude to the actual impact HIT adoption may introduce to clinical ‘workflow.’ As shown in the empirical validation study, using the traditional ‘time expenditures’ measure—that is, lumping clinician time spent in different clinical activities to assess whether introduction of HIT may cause a redistribution—does not use this temporal information and, therefore, loses the ‘flow of the work’ insights derivable from T&M data. This fact may account for the discordance discussed earlier between the quantitative results and the qualitative findings. For example, our T&M data collected in the empirical study seem to suggest that the post-CPOE environment contained shorter, more fragmented task execution episodes. The direct consequence is a higher frequency of task switching, which may be associated with more rapid swapping of task rules in clinicians' memory, increased rates of running into distinct task contexts, extra physical activities (eg, locating a nearby computer terminal), and more frequent waiting and idling (eg, waiting for the computer system to respond). Consequently, clinicians may perceive decreased time efficiency and disrupted workflow even though the time utilization analysis does not suggest an adverse impact. To confirm this speculation, we encourage other researchers to consider using the new methods presented in this paper to analyze T&M data collected in future studies as well as those collected in prior efforts. We believe that this exercise may help illuminate the quantitative-qualitative paradox found in the literature.
It must be noted that, although these new methods may help enrich T&M analysis, several limitations intrinsic to the T&M approach (eg, observer bias and difficulty in observing multitasking activities) could undermine the validity or generalizability of T&M-based research findings. In addition to improving the accuracy and consistency of T&M observations, researchers have shown that using automated activity recognition tools, such as radio-frequency identification (RFID) tags, can greatly enhance the quality and efficiency of collecting workflow data.51 Further, quantitative methods such as T&M are not capable of revealing the root causes of HIT workflow impact or whether the impact exerts an actual influence on user satisfaction, time efficiency, clinician performance, and patient outcomes. Additional research is needed to relate HIT-associated clinical workflow changes to these outcome variables. Researchers have demonstrated that some of these facets can be studied using ethnographically based investigations,52 53 questionnaires and interviews,52–55 cognitive engineering56 57 and computer-supported cooperative work (CSCW) approaches,58 or even physiological devices that directly measure the level of clinicians' brain activity in different task situations.59 60
Recent quantitative studies applying the time and motion approach to assess HIT workflow impact have shown non-significant effects, conflicting with the guarded end-user perceptions reported in qualitative investigations. The workflow measure used in the recent T&M studies, ‘average aggregated clinician time,’ may be a factor accounting for this inconsistency. In this paper, we introduce a set of new analytical methods consisting of workflow fragmentation assessments, pattern recognition, and data visualization that are accordingly designed to address this limitation. Through an empirical validation study, we show that applying these new methods can enrich workflow analysis, which may point to potential workflow deficiencies and corresponding re-engineering insights. In this paper, we also demonstrate the value of using data visualization techniques to turn complex workflow data into more comprehensible and more informative graphical representations.
We would like to thank Mary Duck, John Schumacher, Heidi Reichert, Jackie Aeto, Richard Loomis, Samuel Clark, Barry DeCiccio, Michelle Morris, Amy Shanks, Frank Manion, Anwar Hussain, and the entire PICU team at the University of Michigan Mott Children's Hospital as well as the patients and their families for helping to make this study possible.
Funding This project was supported in part by Grant # UL1RR024986 received from the National Center for Research Resources (NCRR): a component of the National Institutes of Health (NIH) and NIH Roadmap for Medical Research.
Ethics approval This study was conducted with the approval of the Medical School Institutional Review Board, The University of Michigan.
Provenance and peer review Not commissioned; externally peer reviewed.