Appl Clin Inform 2019; 10(03): 446-453
DOI: 10.1055/s-0039-1692164
Research Article
Georg Thieme Verlag KG Stuttgart · New York

Challenges and Opportunities to Improve the Clinician Experience Reviewing Electronic Progress Notes

Gretchen M. Hultman (1), Jenna L. Marquard (2), Elizabeth Lindemann (3), Elliot Arsoniadis (1, 3), Serguei Pakhomov (1, 4), Genevieve B. Melton (1, 3)

1 Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
2 College of Engineering, University of Massachusetts, Amherst, Massachusetts, United States
3 Department of Surgery, University of Minnesota, Minneapolis, Minnesota, United States
4 College of Pharmacy, University of Minnesota, Minneapolis, Minnesota, United States
Funding: This work was supported by the Agency for Healthcare Research and Quality Award R01HS022085 (to G.M.) and National Science Foundation Award CMMI-1150057 (to J.M.).

Address for correspondence

Genevieve B. Melton, MD, PhD
Department of Surgery, Institute for Health Informatics
420 Delaware Street SE, Mayo Mail Code 450, Minneapolis, MN 55455
United States   

Publication History

Received: 25 January 2019
Accepted: 26 April 2019
Publication Date: 19 June 2019 (online)

Abstract

Background High-quality clinical notes are essential to effective clinical communication. However, electronic clinical notes are often long, difficult to review, and contain information that is potentially extraneous or out of date. Additionally, many clinicians write electronic clinical notes using customized templates, resulting in notes with significant variability in structure. There is a need to understand better how clinicians review electronic notes and how note structure variability may impact clinicians' note-reviewing experiences.

Objective This article aims to understand how physicians review electronic clinical notes and what impact section order has on note-reviewing patterns.

Materials and Methods We conducted an experiment utilizing an electronic health record (EHR) system prototype containing four anonymized patient cases, each composed of nine progress notes presented with note sections organized in different orders to different subjects: Subjective, Objective, Assessment, and Plan (SOAP); Assessment, Plan, Subjective, and Objective (APSO); Subjective, Assessment, Plan, and Objective (SAPO); and Mixed. Participants, who were mid-level residents and fellows, reviewed the cases and provided a brief summary after reviewing each case. Time-related data were collected and analyzed using descriptive statistics. Surveys were administered, and interviews regarding participants' experiences reviewing notes were collected and analyzed qualitatively.

Results Qualitatively, participants reported challenges related to reviewing electronic clinical notes. Experimentally, time spent reviewing notes varied based on the note section organization. Consistency in note section organization improved performance (e.g., less scrolling and searching) compared with Mixed section organization when reviewing progress notes.

Discussion Clinicians face significant challenges reviewing electronic clinical notes. Our findings support minimizing extraneous information in notes, removing information that can be found in other parts of the EHR, and standardizing the display and order of note sections to improve clinicians' note review experience.

Conclusion Our findings support the need to improve EHR note design and presentation to support optimal note review patterns for clinicians.



Background and Significance

Clinical notes within electronic health record (EHR) systems remain a key element of care documentation and communication. However, clinicians using EHRs consistently report poor system usability, time-consuming data entry, and degradation of clinical documentation as key challenges.[1] [2] In part, this is because EHRs enable the inclusion of large amounts of structured information into electronic notes (“notes”) via auto-filling or “carry-forward” capabilities, including “copy and paste” functionality. While these features can promote thoroughness[3] and can be efficient to use,[4] [5] they can hinder note readability and can result in inaccurate or irrelevant information in notes (e.g., copying and pasting text that says “yesterday” without updating the reference each day).[6] [7] Lengthy notes can also overload users cognitively and impede them from deciphering and retrieving key information.[6] [8] This problem is not isolated to physicians; it is an important challenge for a range of clinicians. For example, one study showed that pharmacists spend significant time searching for and reading information in notes,[9] highlighting the challenges of extracting information from notes. As such, notes are now considered by some to be data-rich and information-poor.[8] Challenges related to the use of clinical notes and EHR systems are not limited to the United States: Kaipio et al documented usability challenges with EHR systems in Finland,[10] and a Dutch study of a hospital-wide clinical notes application found that, despite some success, there was still room for improvement in the usability of such systems.[11]

Despite this, clinicians value the narrative expressivity of unstructured data in notes because notes can provide insight into clinical decision-making and point out salient patient case data.[3] However, unstructured data can be difficult to search and read.[12] Because of these concerns, there is a need to make the most clinically relevant data within notes easy to find and read.[8]

Progress notes, the most common type of clinical note used to document care management for established patients, most often follow the Subjective, Objective, Assessment, and Plan (SOAP) note format established by Dr. Lawrence Weed in the 1960s as part of the Problem-Oriented Medical Record framework.[13] Anecdotally, clinicians report that the Assessment and Plan sections are most important when reviewing a progress note and that it is common for providers to read these sections first, regardless of where they appear in the note.[6] Some have suggested that note reading could be improved by rearranging note sections so that the Assessment and Plan sections appear at the top of the note.

A recent study examined satisfaction among 13 outpatient clinicians after adoption of Assessment, Plan, Subjective, and Objective (APSO) notes. The study found that clinicians largely favored APSO notes despite the inconvenience of adopting a new method for creating progress notes.[14] Another study used eye-tracking data to determine what information clinicians focused on while reviewing electronic progress notes, and found that clinicians spent most of their time reading the Assessment and Plan section.[15] In most institutions, however, clinicians continue to write progress notes with customized section orders (and thus unpredictable section ordering for readers). This unpredictability may challenge note readers as they search for relevant information. It is well known in the user interface design community that user preference and task performance are often not aligned.[16] [17] For this reason, electronic clinical note formatting should not be changed purely based on user preference.



Objectives

This study sought to gain insight into how clinicians read and use electronic progress notes. Our objectives were to understand how section order impacts note-reading patterns and to gain qualitative insights into physician note-reading behaviors and preferences. We conducted an experiment in which clinicians read electronic progress notes with sections presented in randomized orders and provided case summaries for a realistic set of patients, allowing us to better understand how the ordering of information in progress notes impacted how clinicians read and synthesized these notes. We also analyzed closed-form and open-ended questionnaire responses and semistructured interview responses.



Materials and Methods

Study Design

This institutional review board-approved study was conducted at a large Midwestern academic health center. A previously described EHR system prototype[18] designed to look like the VistA Computerized Patient Record System (CPRS) was populated with four deidentified patient cases, each with nine progress notes.[19] The patient cases were designed to be of similar complexity, and to realistically represent patients being managed for chronic medical conditions frequently encountered in adult primary care clinics. These cases were used in previous experiments, which assessed their complexity and reading time, how time constraints changed patient case synthesis, and the effect of highlighting new information in the notes.[18] [19] [20]

Each of the nine progress notes within each case was arranged and presented in four different section orders: SOAP; APSO; Subjective, Assessment, Plan, and Objective (SAPO); and a Mixed format. In the Mixed format, the section orders were inconsistent across the nine notes, with three notes presented in each of the above orders. In addition to the classic SOAP sections, notes contained other sections that could be classified as subjective, objective, assessment, or plan. Examples of these other elements included History of Present Illness (classified as Subjective) and Visit Diagnosis (classified as Assessment).
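To illustrate this classification step, below is a minimal sketch that maps section headings onto the four SOAP categories. Only the two mappings named above come from the study; the remaining entries, and all names, are hypothetical.

```python
# Hypothetical sketch of the section classification described above. The two
# mappings named in the text (History of Present Illness -> Subjective,
# Visit Diagnosis -> Assessment) come from the study; the rest are assumed.
SECTION_TO_SOAP = {
    "History of Present Illness": "Subjective",  # stated in the study
    "Visit Diagnosis": "Assessment",             # stated in the study
    "Review of Systems": "Subjective",           # assumed
    "Vital Signs": "Objective",                  # assumed
    "Follow-up Instructions": "Plan",            # assumed
}

def classify_section(heading: str) -> str:
    """Map a note section heading onto one of the four SOAP categories."""
    return SECTION_TO_SOAP.get(heading, "Unclassified")

print(classify_section("Visit Diagnosis"))  # Assessment
```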

The four cases were presented in the same order, but each case was shown in a different section order, such that each participant saw all four orderings, with randomized ordering assignments across cases. For example, participant 1 viewed case 1 in the Mixed order, followed by the SAPO and APSO orders, while participant 2 viewed case 1 in the APSO order, followed by the SOAP, Mixed, and SAPO orders. Ordering assignments were randomized using a Latin squares design, as were the note orderings within the Mixed condition. Randomization accounted for the possibility that participant behavior changed as the experiment progressed; for example, participants may have systematically read notes in the first patient case for longer than in later cases, so if the same ordering always occurred first, the results might have been confounded.
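For intuition, the sketch below constructs a cyclic 4 × 4 Latin square over the four section orders, so each order appears exactly once per participant row and once per case column. This is an illustrative construction; the study's actual randomized squares may have differed.

```python
# A minimal sketch of a cyclic 4x4 Latin square for assigning section orders
# to (participant, case) pairs: each order appears exactly once per row
# (participant) and once per column (case position).
ORDERS = ["SOAP", "APSO", "SAPO", "Mixed"]

def assigned_order(participant: int, case: int) -> str:
    """Section order shown to a (0-indexed) participant for a given case."""
    return ORDERS[(participant + case) % len(ORDERS)]

for p in range(4):
    print([assigned_order(p, c) for c in range(4)])
# ['SOAP', 'APSO', 'SAPO', 'Mixed']
# ['APSO', 'SAPO', 'Mixed', 'SOAP']
# ['SAPO', 'Mixed', 'SOAP', 'APSO']
# ['Mixed', 'SOAP', 'APSO', 'SAPO']
```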

Participants (n = 23) were mid-level residents at a large Midwestern training program (internal medicine [n = 15], surgery [n = 8]). This population was chosen to allow recruitment of a larger sample while controlling for differences in experience level that might affect how users read and interpret clinical notes. Participants were recruited via emails and fliers placed in workrooms. Sessions were facilitated by two researchers (G.H. and O.I.). The study was pilot tested with two participants. Sessions lasted between 60 and 90 minutes.



Study Procedure

Participants were seated at a desktop computer with the EHR interface opened to the note list for the first patient case. They were asked to review the notes for the patient as they normally would and then provide a verbal summary of the case. Upon completing the verbal summary, participants rated their perceived workload for the case using the validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) instrument.[21] Participants repeated this process for the three other cases. We used screen capture software to record participants' navigation patterns, a typical approach in EHR usability studies.[22] At the end of the session, participants completed a questionnaire covering demographics; their experience with different EHR systems; when, and for how long, they typically review a set of electronic notes for a single patient; the importance they place on different information types within electronic notes; how well current electronic notes support the retrieval of those information types; and general barriers to accessing the information they need in electronic notes. The questionnaire was developed based on a previous experimental study in which hospitalists reviewed information in the EHR prior to admitting a patient,[23] and was pilot tested to ensure the questions were understandable. All sessions were audio recorded.
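As background on the instrument, unweighted ("raw") NASA-TLX scoring averages the six subscale ratings, as in the sketch below. How the ratings were scaled to the 0–50 range reported in the results below is an assumption here, and the example values are purely illustrative.

```python
# A minimal sketch of unweighted ("raw") NASA-TLX scoring: the overall
# workload score is the mean of the six subscale ratings. The subscale names
# come from the published instrument; the rating scale used here is assumed.
SUBSCALES = ("mental demand", "physical demand", "temporal demand",
             "performance", "effort", "frustration")

def raw_tlx(ratings: dict) -> float:
    """Return the mean of the six NASA-TLX subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

example = {"mental demand": 35, "physical demand": 10, "temporal demand": 30,
           "performance": 25, "effort": 40, "frustration": 45}
print(raw_tlx(example))  # 30.833...
```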



Analysis of Experimental Data

The analyses of the experimental data included participants' time to review notes, time spent verbally summarizing cases, and perceived workload. Each measure was calculated independently for the SOAP, APSO, SAPO, and Mixed note orders. Participant videos were reviewed and coded for scrolling time (screen moving), still time (screen not moving), and navigating time (using a selection box to move between patient notes). Times were tabulated and averages calculated. Two coders reviewed the videos of two participants to establish a standard coding process.
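The sketch below shows one way such coded segments could be tabulated into per-order totals; the data structure and values are illustrative assumptions, not the tooling actually used in the study.

```python
# An illustrative sketch (not the authors' actual tooling) of tabulating
# coded video segments into per-order totals for the three activity types
# described above: scrolling, still, and navigating.
from collections import defaultdict

# Each coded segment: (participant, section_order, activity, seconds).
segments = [
    (1, "APSO", "still", 40.5),
    (1, "APSO", "scrolling", 12.0),
    (1, "APSO", "navigating", 3.2),
    (1, "Mixed", "still", 38.0),
    (1, "Mixed", "scrolling", 25.0),
]

totals = defaultdict(float)
for _participant, order, activity, seconds in segments:
    totals[(order, activity)] += seconds

for (order, activity), seconds in sorted(totals.items()):
    print(f"{order:6s} {activity:11s} {seconds / 60:.2f} min")
```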



Analysis of Qualitative Data

After completing the four cases, we conducted a semistructured debriefing interview with each participant. Interviews were conducted by G.H., a researcher with experience in qualitative analysis, and O.I., a researcher with a background in medicine. Interviews were transcribed and coded for emerging themes by two coders with experience in qualitative analysis (G.H. and E.L.). One participant was excluded because the interview did not record properly. Interview answers were coded into categories and reviewed in collaboration with a physician expert. A subset of transcripts (n = 6; 26% of all transcripts) was coded by both coders to assess the reliability of coding. There was overall high agreement between coders (kappa = 0.81; percentage agreement = 98%).
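For reference, the sketch below computes Cohen's kappa from two coders' label sequences, using kappa = (p_o - p_e)/(1 - p_e) with observed agreement p_o and chance agreement p_e; the example labels are illustrative, not the study's actual codes.

```python
# A minimal sketch of Cohen's kappa, the inter-coder reliability statistic
# reported above. p_o is the observed proportion of agreement; p_e is the
# agreement expected by chance from each coder's label frequencies.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["barrier", "strategy", "barrier", "barrier", "preference"]
b = ["barrier", "strategy", "barrier", "strategy", "preference"]
print(round(cohens_kappa(a, b), 2))  # 0.69
```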

Below, we present our analysis of participants' perceived note reading patterns based on the interview and questionnaire data. We then present our analysis of participants' actual note reading patterns based on the experimental data.



Results

Participant Characteristics

Fourteen participants were male and nine were female. Participants were on average 29.9 years old (standard deviation [SD] = 2.48) and had graduated from medical school an average of 2.82 years prior (SD = 1.47). Participants reported their levels of experience with eight different EHRs. Self-rated experience was highest for Epic; all participants considered themselves average or expert Epic users. VistA CPRS, on which the prototype used in this experiment was based, had been used by 20 of the 22 participants. Most (n = 17, 74%) considered themselves average VistA CPRS users; two considered themselves expert users.



Timing of and Time Spent Reading

Participants were asked in a free response question when they typically read patient notes. Responses fell into one or more of five broad categories. Participants reported that they review notes prior to a patient encounter (n = 22, 96%), during an encounter (n = 1, 4%), after an encounter (n = 1, 4%), when writing a note (n = 2, 9%), or when trying to answer specific questions (n = 2, 9%).

Participants were asked to report three values related to how long they typically spend reviewing notes for a single patient: (1) minimum time, (2) average time, and (3) maximum time. [Fig. 1] shows these responses, with each participant's box located at their average time and whiskers marking their minimum and maximum values. The horizontal line shows the overall average across participants, though variability within and across participants was substantial.

Fig. 1 Self-reported note reading time. Participants reported their average, maximum, and minimum note reading times. The average time for each participant is plotted, and the whiskers represent the maximum and minimum reading times. The line represents the mean of participants' average reading times.


Reading Order

When asked in the semistructured interview how they typically read through a single patient note, 22 participants responded, and when asked how they typically read through a set of notes for one patient, 21 participants responded. When asked how they read a single note, 14 mentioned starting with Subjective, and 6 mentioned starting with the Assessment and Plan. When asked how they read a set of notes, participants discussed different strategies such as looking for notes from specific specialties or looking for the most recent notes. Responses are summarized in [Tables 1] and [2].

Table 1 How participants typically read through a single patient note

| Reading approach | Number | % | Representative quote |
| --- | --- | --- | --- |
| Start with Subjective | 14 | 64 | "Typically, when assessing a patient note for any given specialty, I'll look at their HPI or initial subjective assessment, then go and jump to the assessment and plan" |
| Start with A/P | 6 | 27 | "I read the assessment and plan and then scroll back up depending on what I need" |
| Start with something else | 2 | 9 | "I try to hone into something that person wrote, like, the actual things that are not auto-populated" |
| Depends on context | 1 | 5 | "It does depend a little bit on the setting that I'm reading it in" |
| Mentioned information skipped | 4 | 18 | "I usually skim through the lab results, imaging studies, and especially the physical exam I go through quickly" |

Table 2 How participants typically read through a set of notes for a single patient

| Reading approach | Number | % | Representative quote |
| --- | --- | --- | --- |
| Look for certain specialties | 3 | 14 | "I also tend to look for family practice or internal medicine notes, because they're usually the most complete, touch-all aspects of the patient" |
| Look for notes relevant to chief complaint | 8 | 36 | "Typically I'll read through the notes that are most relevant to what the patient is coming in for today" |
| Look for certain note types | 4 | 18 | "I'll typically bring up H&Ps, consult notes, and then DC summaries, pretty much in that order" |
| Start with most recent notes first | 6 | 27 | "Usually, I'll start with the most recent note and see what the pertinent issues are, or find the most recent note that addresses a lot of the chronic health problems" |
| Start with older notes first | 4 | 18 | "So I would have started doing that first thing, just to sort them by dates so I could read a story about the patient" |
| Mentioned notes skipped | 4 | 18 | "I'm going to look for consult notes rather than sifting through every progress note I have, unless I absolutely have to" |



Value of Information Types in Notes

Participants were asked to list the five information types they considered most valuable within notes and to rank them from 1 to 5, with 1 being the highest priority and 5 the lowest (of the top 5). As shown in [Fig. 2], the most recent Assessment and Plan, past medical history, and chief complaint were the three types of information most often ranked as a top-5 priority. Chief complaint and most recent Assessment and Plan were the two categories most frequently ranked as the top priority. Labs, imaging, and medications were frequently listed in the top five, but as lower priorities. Past surgical history was a priority for surgical but not medicine residents.

Fig. 2 Value of information types in notes. Participants were asked to list the five types of information they found most valuable and rank them 1–5. This bar chart shows the types of information mentioned; color denotes priority rank.

Participants were asked how well each section type provides the clinical information they need. Almost all participants (22/23, 96%) said that the Assessment section met their information needs very well, somewhat well, or neither well nor poorly. All participants (23/23, 100%) said the Plan section meets their information needs very well, somewhat well, or neither well nor poorly. The Objective section was rated as meeting information needs somewhat poorly more than any other section. Other sections that participants volunteered included Medications (very poorly), Imaging (very well), and “blown-in” (auto-populated) data (somewhat poorly).



Barriers to Reading Notes

Participants were asked about six potential barriers to accessing the information they need from notes. Many of these barriers were previously identified in other research,[23] including necessary information not being in notes, too much information in notes, and inaccurate information. Participants rated each barrier on a scale of 1 to 5, with 1 being not a barrier and 5 being a severe barrier. As summarized in [Fig. 3], most participants (n = 21) perceived information in notes being poorly displayed or difficult to interpret as a moderate, large, or severe barrier to accessing patient clinical information, while not having key information in notes was perceived as a moderate, large, or severe barrier by most participants (n = 16, 73%). Additionally, all participants (n = 22) reported that too much information in notes presented a barrier to note reading.

Fig. 3 Information barriers. Participants were asked to review six potential information barriers and rate how much of a barrier each is to note reading on a scale of 1–5, with 1 being not a barrier and 5 being a severe barrier.


Experiment Results

As summarized in [Table 3], differences in reading time were observed across the four presentation orders, with participants taking the shortest amount of time to read APSO-ordered notes and the longest to read Mixed-ordered notes. On average, participants took 1.9 minutes longer to read the Mixed-ordered notes than the APSO-ordered notes, and 1.0 minute less to read APSO-ordered notes compared with traditional SOAP notes. The observed times spent reading notes for the cases were within the range of participants' self-reported time spent reading patient notes. Participants also tended to spend a substantial amount of time scrolling (average = 4.23 minutes per case, or ∼39% of reading time for each case), regardless of how note sections were ordered. Differences in verbal summary times are also presented in [Table 3].

Table 3 Key times and NASA-TLX scores

| Note order | Reading time (min) | Proportion with screen still | Proportion scrolling | Verbal summary time (min) | Average workload score (SD) |
| --- | --- | --- | --- | --- | --- |
| SOAP | 11.6 | 61% | 38% | 2.1 | 30.6 (10.57) |
| APSO | 10.6 | 60% | 39% | 1.9 | 31.3 (8.75) |
| SAPO | 11.3 | 57% | 40% | 2.3 | 31.9 (7.04) |
| Mixed | 12.5 | 59% | 39% | 2.1 | 31.7 (7.78) |
| Average | 11.5 | 59% | 39% | 2.1 | 31.4 (8.52) |

Abbreviations: APSO, Assessment, Plan, Subjective, and Objective; NASA-TLX, National Aeronautics and Space Administration Task Load Index; SAPO, Subjective, Assessment, Plan, and Objective; SOAP, Subjective, Objective, Assessment, and Plan.
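As a quick arithmetic check, the sketch below recomputes, from the Table 3 averages, the reading-time differences cited in the preceding paragraph and in the Discussion.

```python
# Recompute the reading-time differences cited in the text from the
# per-order averages in Table 3.
reading_time_min = {"SOAP": 11.6, "APSO": 10.6, "SAPO": 11.3, "Mixed": 12.5}

print(round(reading_time_min["Mixed"] - reading_time_min["APSO"], 1))  # 1.9
print(round(reading_time_min["SOAP"] - reading_time_min["APSO"], 1))   # 1.0
print(round(reading_time_min["Mixed"] - reading_time_min["SOAP"], 1))  # 0.9
```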


NASA-TLX perceived workload scores (0–50 scale) were similar across the four note orders. The NASA-TLX score was lowest for SOAP notes, even though reading time was lowest for APSO notes. Scores are summarized in [Table 3]. Reading times, verbal summary times, and NASA-TLX scores were similar for medicine and surgery residents within each section order.

Participants were asked if the experimental patient cases were realistic. All participants (23/23, 100%) said the cases were realistic, stating that the cases represented common medical conditions, and that the flow of care from one provider to another seemed realistic.



Discussion

This study provides important insights into how clinicians read and review progress notes and the challenges they face, as well as into the impact that section order has on note reading. Many have documented concerns with electronic documentation,[24] [25] including outside the United States where EHRs are used.[10] [11] One study examined two potential avenues for improving documentation: having residents attend a lecture, or having them attend a lecture coupled with individual feedback on notes.[26] Neither intervention improved note quality, highlighting the challenges of improving progress notes.[26] Our findings suggest four strategies that may improve clinician efficiency and satisfaction while reading electronic clinical notes.

First, establishing a standard note presentation order appears to be essential: users performed worse with unpredictable section orders, taking 0.9 to 1.9 minutes longer per case to review notes. Others have reported on the challenges of variable documentation and the tension between the need for structure and expressivity.[3] Second, APSO note organization may be beneficial. While SOAP organization had the lowest perceived workload, APSO organization was more efficient for note reading, which aligns with other work demonstrating the advantages of this format.[14] [27] The perceived workload associated with APSO organization may decrease as users become accustomed to the format. Another benefit of APSO organization is that the Objective section (rated most poorly by users) appears at the end of the note. Third, removing information from notes that can easily be found elsewhere in the EHR may decrease note length, the associated “note bloat,” and scrolling; this suggestion is supported by work reporting the detrimental effects of such information.[6] Finally, our findings suggest a need to improve the display of notes, regardless of section order, to make them easier to read and to make important information easier to find.

Currently, the use of custom note templates means that the display order of notes tends to vary across clinicians, and we observed that mixed section orders negatively impacted participant performance. While there are some advantages to flexible documentation,[3] participants also reported frustrations with notes appearing in differing formats. This suggests that clinicians and organizations would benefit from a standard section order in which all notes are displayed. The display order of note sections could be uncoupled from the order in which sections are written (i.e., custom note entry with standard note presentation), allowing clinicians to create notes in their preferred style while notes are displayed in a consistent order optimized for reading. This approach could address both user preferences and the need for standardization of note display.
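The sketch below illustrates this entry/display uncoupling; the data structure and function names are hypothetical, not an existing EHR API.

```python
# A hypothetical sketch of uncoupling note entry from note display: sections
# are stored in whatever order the author wrote them and re-sorted into a
# standard display order (APSO here) at read time.
DISPLAY_RANK = {"Assessment": 0, "Plan": 1, "Subjective": 2, "Objective": 3}

def render_note(sections):
    """Render (heading, text) pairs in a standard order for reading."""
    ordered = sorted(sections, key=lambda s: DISPLAY_RANK.get(s[0], len(DISPLAY_RANK)))
    return "\n\n".join(f"{heading}\n{text}" for heading, text in ordered)

authored = [  # written in SOAP order by the author
    ("Subjective", "Reports improved pain."),
    ("Objective", "BP 128/76, afebrile."),
    ("Assessment", "Hypertension, well controlled."),
    ("Plan", "Continue lisinopril; follow up in 3 months."),
]
print(render_note(authored))  # displays Assessment and Plan first
```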

This study also provides confirmatory evidence supporting concerns about the negative impacts of auto-populated and copied-and-pasted data in notes.[24] [25] Not only did participants report frustrations and barriers due to auto-populated data and extraneous information, but moving this information to the end of the note decreased time spent reviewing note sets. This supports findings from other studies indicating that clinicians are generally more satisfied with notes in APSO order than in SOAP order.[14] Many participants suggested that auto-populated data can be found more easily, and in more readable form, elsewhere within EHRs. Further work should examine the impact of removing auto-populated data from notes, especially information that is not relevant to the chief complaint or current active problems, perhaps replacing it with links from the note to other areas of the EHR. The impact of regulations that require certain information to appear in notes, and of changing those regulations, could also be examined as part of this work.

Interestingly, we did not observe large differences in perceived cognitive load when notes were organized differently, and participants reported the lowest cognitive load score for SOAP notes. Similar to a study of another type of EHR functionality (i.e., ambulatory navigators), we observed that perceived cognitive load was unrelated to participant performance.[28]

This study has several limitations that reduce its generalizability, including its single-organization setting and relatively small sample size. Our convenience sample was limited to mid-level resident physicians, which may further limit generalizability. Participants' experience of note reading can be affected by previous experience, preferences, and training, and additional confounders such as specialty may be present.



Conclusion

Overall, participants reported that information in notes remains poorly displayed and difficult to interpret. Future work should identify in more detail the specific aspects of the note designs that led participants to consider the notes poorly designed or difficult to interpret. Future work should also validate and expand upon these findings with other user groups and EHR systems, and test alternative note designs.



Clinical Relevance Statement

This work represents a step toward understanding how clinicians review electronic clinical notes, the barriers they face, and how the ordering of information in notes impacts the clinician experience of reviewing notes.



Multiple Choice Questions

  1. Which note order was the most challenging for participants to read?

    • a. SOAP.

    • b. APSO.

    • c. SAPO.

    • d. Mixed.

    Correct Answer: The correct answer is option d. In this study, the Mixed order was the most challenging for participants to read, requiring the most reading time. Participants also commented on frustrations with inconsistently formatted notes in the qualitative portions of this study.

  2. Based on the findings of this study, what steps could improve the experience of reading notes?

    • a. Allow physicians to write notes in any order according to reading style.

    • b. Add additional options for auto-populating data in notes.

    • c. Have a standard note writing template.

    • d. Add additional data to notes.

    Correct Answer: The correct answer is option c. In this study, participants were negatively affected by notes with highly variable formats, particularly notes in the Mixed order. They also reported frustrations with long notes and notes containing large amounts of auto-populated data.



Conflict of Interest

None declared.

Acknowledgments

We would like to acknowledge Osadebamwen Ighile, MBBS, MS, and Oladimeji Farri, MBBS, PhD, for their contributions to this study.

Protection of Human and Animal Subjects

This study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by the University of Minnesota Institutional Review Board.


  • References

  • 1 Lowry SZ, Ramaiah M, Patterson ES. , et al. Integrating electronic health records into clinical workflow: an application of human factors modeling methods to ambulatory care. Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 2014; 3 (01) 170-177
  • 2 Friedberg MW, Chen PG, Aunon FM. , et al. Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy. Santa Monica, CA: Rand Corporation; 2013
  • 3 Rosenbloom ST, Denny JC, Xu H, Lorenzi N, Stead WW, Johnson KB. Data from clinical notes: a perspective on the tension between structure and flexible documentation. J Am Med Inform Assoc 2011; 18 (02) 181-186
  • 4 Cimino JJ, Patel VL, Kushniruk AW. Studying the human-computer-terminology interface. J Am Med Inform Assoc 2001; 8 (02) 163-173
  • 5 Poon AD, Fagan LM, Shortliffe EH. The PEN-Ivory project: exploring user-interface design for the selection of items from large controlled vocabularies of medicine. J Am Med Inform Assoc 1996; 3 (02) 168-183
  • 6 Koopman RJ, Steege LM, Moore JL. , et al. Physician information needs and electronic health records (EHRs): time to reengineer the clinic note. J Am Board Fam Med 2015; 28 (03) 316-323
  • 7 Han H, Lopp L. Writing and reading in the electronic health record: an entirely new world. Med Educ Online 2013; 18: 1-7
  • 8 Shoolin J, Ozeran L, Hamann C, Bria II W. Association of Medical Directors of Information Systems consensus on inpatient electronic health record documentation. Appl Clin Inform 2013; 4 (02) 293-303
  • 9 Nelson SD, LaFleur J, Del Fiol G, Weir CR. Reading and writing: qualitative analysis of pharmacists' use of the EHR when preparing for team rounds. In: AMIA Annual Symposium Proceedings. Vol. 2015. American Medical Informatics Association; 2015: 943
  • 10 Kaipio J, Lääveri T, Hyppönen H. , et al. Usability problems do not heal by themselves: national survey on physicians' experiences with EHRs in Finland. Int J Med Inform 2017; 97: 266-281
  • 11 Cillessen FHJM, de Vries Robbé PF, Biermans MCJ. A hospital-wide transition from paper to digital problem-oriented clinical notes. A descriptive history and cross-sectional survey of use, usability, and satisfaction. Appl Clin Inform 2017; 8 (02) 502-514
  • 12 Meystre SM, Savova GK, Kipper-Schuler KC, Hurdle JF. Extracting information from textual documents in the electronic health record: a review of recent research. Yearb Med Inform 2008; 35: 128-144
  • 13 Weed LL. The problem oriented record as a basic tool in medical education, patient care and clinical research. Ann Clin Res 1971; 3 (03) 131-134
  • 14 Lin C-T, McKenzie M, Pell J, Caplan L. Health care provider satisfaction with a new electronic progress note format: SOAP vs APSO format. JAMA Intern Med 2013; 173 (02) 160-162
  • 15 Brown PJ, Marquard JL, Amster B. , et al. What do physicians read (and ignore) in electronic progress notes?. Appl Clin Inform 2014; 5 (02) 430-444
  • 16 Andre AD, Wickens CD. When users want what's not best for them. Ergon Des 1995; 3 (04) 10-14
  • 17 Wickens CD. Engineering Psychology and Human Performance. 4th ed. Boston: Pearson; 2013
  • 18 Farri O, Rahman A, Monsen KA. , et al. Impact of a prototype visualization tool for new information in EHR clinical documents. Appl Clin Inform 2012; 3 (04) 404-418
  • 19 Farri O, Pieckiewicz DS, Rahman AS, Adam TJ, Pakhomov SV, Melton GB. A qualitative analysis of EHR clinical document synthesis by clinicians. AMIA Annu Symp Proc 2012; 2012: 1211-1220
  • 20 Farri O, Monsen KA, Pakhomov SV, Pieczkiewicz DS, Speedie SM, Melton GB. Effects of time constraints on clinician-computer interaction: a study on information synthesis from EHR clinical notes. J Biomed Inform 2013; 46 (06) 1136-1144
  • 21 Hart SG. NASA-Task Load Index (NASA-TLX); 20 years later. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Vol. 50. Sage Publications; 2006: 904-908
  • 22 TURF Usability Software [Computer Program]. Version 4.0. Houston, Texas: University of Texas Health Science Center at Houston; 2015
  • 23 Doberne JW, He Z, Mohan V, Gold JA, Marquard J, Chiang MF. Using high-fidelity simulation and eye tracking to characterize EHR workflow patterns among hospital physicians. In: AMIA Annual Symposium Proceedings. Vol. 2015. American Medical Informatics Association; 2015: 1881
  • 24 Thornton JD, Schold JD, Venkateshaiah L, Lander B. Prevalence of copied information by attendings and residents in critical care progress notes. Crit Care Med 2013; 41 (02) 382-388
  • 25 Weis JM, Levy PC. Copy, paste, and cloned notes in electronic health records. Chest 2014; 145 (03) 632-638
  • 26 Fanucchi L, Yan D, Conigliaro RL. Duly noted: lessons from a two-site intervention to assess and improve the quality of clinical documentation in the electronic health record. Appl Clin Inform 2016; 7 (03) 653-659
  • 27 Belden JL, Koopman RJ, Patil SJ, Lowrance NJ, Petroski GF, Smith JB. Dynamic electronic health record note prototype: seeing more by showing less. J Am Board Fam Med 2017; 30 (06) 691-700
  • 28 Hultman G, Marquard J, Arsoniadis E. , et al. Usability testing of two ambulatory EHR navigators. Appl Clin Inform 2016; 7 (02) 502-515
