Appl Clin Inform 2022; 13(02): 456-467
DOI: 10.1055/s-0042-1745829
Research Article

Usability and Acceptability of Clinical Decision Support Based on the KIIDS-TBI Tool for Children with Mild Traumatic Brain Injuries and Intracranial Injuries

Jacob K. Greenberg
1   Department of Neurological Surgery, Washington University School of Medicine in St. Louis, St. Louis, Missouri, United States
,
Ayodamola Otun
1   Department of Neurological Surgery, Washington University School of Medicine in St. Louis, St. Louis, Missouri, United States
,
Pyi Theim Kyaw
2   McKelvey School of Engineering, Washington University in St. Louis, St. Louis, Missouri, United States
,
Christopher R. Carpenter
3   Department of Emergency Medicine, Washington University School of Medicine in St. Louis, St. Louis, Missouri, United States
,
Ross C. Brownson
4   Brown School of Social Work, Washington University in St. Louis, St. Louis, Missouri, United States
,
Nathan Kuppermann
5   Department of Emergency Medicine, University of California Davis, Davis, California, United States
,
David D. Limbrick Jr.
1   Department of Neurological Surgery, Washington University School of Medicine in St. Louis, St. Louis, Missouri, United States
,
Randi E. Foraker*
6   Institute for Informatics, Washington University School of Medicine in St. Louis, St. Louis, Missouri, United States
,
Po-Yin Yen*
6   Institute for Informatics, Washington University School of Medicine in St. Louis, St. Louis, Missouri, United States
Funding This study was supported by the U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality (1F32HS027075-01A1), and Thrasher Research Fund (#15024).
 

Abstract

Background The Kids Intracranial Injury Decision Support tool for Traumatic Brain Injury (KIIDS-TBI) is a validated risk prediction model for managing children with mild traumatic brain injuries (mTBI) and intracranial injuries. Electronic clinical decision support (CDS) may facilitate the clinical implementation of this evidence-based guidance.

Objective Our objective was to evaluate the acceptability and usability of an electronic CDS tool for managing children with mTBI and intracranial injuries.

Methods Emergency medicine and neurosurgery physicians (10 each) from 10 hospitals in the United States were recruited to participate in usability testing of a novel CDS prototype in a simulated electronic health record environment. Testing included a think-aloud protocol, an acceptability and usability survey, and a semi-structured interview. The prototype was updated twice during testing to reflect user feedback. Usability problems recorded in the videos were categorized using content analysis. Interview transcripts were analyzed using thematic analysis.

Results Among the 20 participants, most worked at teaching hospitals (80%), freestanding children's hospitals (95%), and level-1 trauma centers (75%). During the two prototype updates, problems with clarity of terminology and navigating through the CDS interface were identified and corrected. Corresponding to these changes, the number of usability problems decreased from 35 in phase 1 to 8 in phase 3, and the number of mistakes made decreased from 18 (phase 1) to 2 (phase 3). Through the survey, participants found the tool easy to use (90%), useful for determining a patient's level of care (95%), and likely to improve resource use (90%) and patient safety (79%). Interview themes related to the CDS's ability to support evidence-based decision-making and improve clinical workflow, proposed implementation strategies, and potential pitfalls.

Conclusion After iterative evaluation and refinement, the KIIDS-TBI CDS tool was found to be highly usable and useful for aiding the management of children with mTBI and intracranial injuries.



Background and Significance

Traumatic brain injury (TBI) is one of the most common and costly health problems in children.[1] [2] [3] Among children with TBI, injuries characterized as “mild TBI” (mTBI) constitute more than 90% of new diagnoses and about one-third of hospitalizations.[4] [5] While mTBI can have damaging long-term sequelae,[6] the acute evaluation is focused on appropriately identifying and managing the 4 to 14% of children with mTBI who show evidence of intracranial injuries (ICIs) and may be at risk of neurological decline.[7] [8]

There is growing evidence that post-neuroimaging practice is not evidence-based and may place some children at risk of harm.[9] In particular, insufficient attention given to high-risk patients may delay early recognition of neurological decline and the need for neurosurgical intervention, while excessive reliance on intensive care unit (ICU) monitoring is impractical and compromises limited resources. Reflecting this need for evidence-based guidance, several risk models have been proposed to help guide level-of-care decisions in this population, particularly related to the need for ICU admission.[10] [11] [12]

Most recently, the Kids Intracranial Injury Decision Support tool for TBI (KIIDS-TBI) model was externally validated in a large, multicenter pediatric population.[13] For children with mTBI and ICI, the KIIDS-TBI prediction model considers seven clinical/imaging findings (e.g., mental status and type of intracranial hematoma) to stratify the risk of neurosurgery, prolonged intubation, or death from TBI. When tailored to each institution's practices and capabilities, these risk predictions can be used to guide level-of-care recommendations (e.g., the need for ICU admission). An overview of this decision tool and associated recommendations is shown in [E-Fig. 1C] ([Supplementary Appendix A], available in the online version). Nonetheless, even validated clinical decision support (CDS) often fails to be incorporated into routine practice, reflecting the complex considerations that impact successful use.[14] [15] Electronic CDS offers the potential to present evidence-based guidance at the point of care, but clinical use remains dependent on interconnected human, organizational, and technical factors.[16] [17] To understand the context for implementing electronic CDS for children with mTBI and ICI, our group recently conducted a sociotechnical analysis among neurotrauma physicians.[18] Through multidisciplinary focus groups, we found strong interest in using evidence-based CDS to guide level-of-care decisions and also obtained feedback on wireframes (i.e., simple mockups of prototype layouts) used to inform prototype design.
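To make the mapping from clinical/imaging findings to a level-of-care recommendation concrete, the sketch below shows one way such a model could be wired to institution-specific guidance. The input fields are drawn from findings mentioned in this article (Glasgow Coma Scale score, hematoma type, midline shift, depressed skull fracture), but the exact predictor set, point weights, thresholds, tier labels, and recommendation text are illustrative assumptions, not the validated KIIDS-TBI coefficients.

```typescript
// Illustrative sketch only: findings mirror those discussed in the article,
// but the scoring, thresholds, and recommendations are hypothetical.
interface PatientFindings {
  gcsScore: number;                 // Glasgow Coma Scale (13-15 for mTBI)
  alteredMentalStatus: boolean;
  epiduralHematoma: boolean;
  subduralHematoma: boolean;
  otherExtraAxialHematoma: boolean;
  midlineShift: boolean;
  depressedSkullFracture: boolean;  // depressed >= width of the skull
}

type RiskTier = "low" | "intermediate" | "high";

// Hypothetical point weights; the real model uses validated coefficients.
function riskTier(f: PatientFindings): RiskTier {
  let points = 0;
  if (f.gcsScore < 15) points += 2;
  if (f.alteredMentalStatus) points += 2;
  if (f.epiduralHematoma) points += 3;
  if (f.subduralHematoma) points += 2;
  if (f.otherExtraAxialHematoma) points += 1;
  if (f.midlineShift) points += 3;
  if (f.depressedSkullFracture) points += 2;
  if (points >= 5) return "high";
  if (points >= 2) return "intermediate";
  return "low";
}

// Each institution would tailor the level-of-care text attached to a tier.
const institutionalRecommendation: Record<RiskTier, string> = {
  high: "Admit to ICU with frequent neurological checks",
  intermediate: "Admit to a monitored ward bed",
  low: "Consider ED observation or general ward admission",
};

const example: PatientFindings = {
  gcsScore: 15,
  alteredMentalStatus: false,
  epiduralHematoma: false,
  subduralHematoma: true,
  otherExtraAxialHematoma: false,
  midlineShift: false,
  depressedSkullFracture: false,
};
const tier = riskTier(example);
console.log(tier, "->", institutionalRecommendation[tier]);
```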



Objectives

Building on that sociotechnical analysis,[18] we recently developed a functioning prototype electronic CDS tool based on the validated KIIDS-TBI prediction model. Incorporating mixed methods from the human–computer interaction literature,[19] [20] our objective in this investigation was to evaluate the usability and usefulness of this novel electronic CDS tool for managing children with mTBI and ICI.



Methods

Prototype Design

The electronic CDS tool evaluated in this study was developed based on the principles of human-centered design,[21] [22] which involve direct input from the end-users who will use the tool. In this context, end-users included emergency medicine (EM) and neurosurgery physicians, the clinicians likely to have the greatest interaction with the tool. The CDS content was based on the validated KIIDS-TBI prediction model.[13] The initial prototype design was based on feedback obtained from wireframe testing conducted with focus groups in a previous study.[18] That prior testing used wireframe mockups (i.e., schematic layouts of possible designs/features), rather than a full electronic prototype, to obtain feedback on the recommended CDS layout and tool content (e.g., users recommended removing cost data). In the current study, that feedback served as the foundation for the initial electronic prototype that was presented to users in a simulated electronic health record (EHR) environment. The electronic prototype requires users to enter relevant clinical/imaging findings and then displays a summary of their patient's findings, predicted neurological risk, and a hypothetical recommendation for an appropriate level of care. A screenshot of the electronic prototype interface is shown in [Fig. 1].

Fig. 1 The initial prototype used in testing phase 1. (A) The simulated electronic health record environment and CDS tool. (B) The CDS output screen.


Participant Recruitment

We included a convenience sample of participants who were either neurosurgery or EM physicians practicing at one of ten institutions in eight states in the United States. Both attending and resident physicians were recruited from one institution, and only faculty physicians were recruited from the remaining nine institutions. Only one participant had previously been a part of the focus groups involved in the earlier wireframe testing.[18] Due to the coronavirus disease 2019 pandemic, testing sessions were conducted and recorded using Zoom (Zoom Video Communications, San Jose, CA, United States). Participants were offered $50 compensation for their time. User testing was conducted in January and February of 2021. The authors' institutional review board reviewed and approved the study procedures with a waiver of documentation of consent. Participants were therefore provided with a consent document, a verbal study description, and an opportunity to ask questions before verbally agreeing to proceed with the study.



User Testing

We used a mixed-methods design to solicit both quantitative and qualitative feedback from participants on the CDS prototype. To begin each session, participants were given an overview of the evidence underlying the CDS tool ([Supplementary Appendix A], available in the online version). They were then introduced to three clinical case scenarios and given minimal instructions to use the CDS prototype. The cases included children with a range of injuries (case 1: epidural hematoma and highest risk; case 2: subdural hematoma and moderate risk; and case 3: subarachnoid hemorrhage and lowest risk). Testing then proceeded in four parts. First, participants were asked to interact with the CDS prototype using the three clinical case simulations. In practice, clinicians would likely be prompted with the tool after a neurosurgery consult is initiated. For the usability testing, participants reviewed each case, entered relevant clinical/imaging findings into the tool, were asked to provide their own recommended level of care, and then selected “view risk score” to receive model-predicted risk estimates. Second, participants completed a think-aloud protocol, which asks users to verbalize the cognitive processes required to complete a task (i.e., “thinking aloud”).[23] For the first session, the participant was asked to follow a concurrent think-aloud protocol and describe their thoughts and feelings while interacting with the tool.[20] [23] [24] However, we recognized that this participant had difficulty with those instructions and instead simply read the case description verbatim. Anticipating that this problem would persist, we switched to a retrospective think-aloud protocol, in which users were asked to describe their thoughts about using the tool immediately after reviewing the cases. Unlike a concurrent think-aloud protocol, which solicits user thoughts and feelings while they complete a task, the retrospective approach asks users to think back to how they interacted with the tool during the testing session and also to address areas of apparent confusion detected by the moderator.[25]

After the think-aloud protocol, participants typically completed an acceptability and usability survey ([Table 1]). This instrument was designed to assess dimensions such as ease of use and clinical usefulness, and it incorporated and adapted content from three validated measures: the Ottawa Acceptability of Decision Rules Instrument[26]; the Health-Information Technology Usability Evaluation Scale[27]; and the System Usability Scale.[28] Finally, at the end of each session, an individual semi-structured interview was conducted to solicit feedback regarding the tool's usefulness, anticipated impact on patient care, and approaches to clinical practice integration. The interview guide is provided in [Supplementary Appendix B] (available in the online version). The prototype was updated twice during the testing process (three phases), and the order in which cases 1 and 3 were presented was reversed after the first 10 participants. A flowchart describing the iterative tool development process is shown in [E-Fig. 2] ([Supplementary Appendix A], available in the online version).

Table 1 Responses to the acceptability and usability survey

| Statement | Strongly disagree | Disagree | Neutral | Agree | Strongly agree |
| --- | --- | --- | --- | --- | --- |
| Usability | | | | | |
| The tool is easy to use | 0 (0%) | 0 (0%) | 0 (0%) | 7 (35%) | 13 (65%) |
| The presentation of the tool is clear and unambiguous | 0 (0%) | 1 (5%) | 0 (0%) | 9 (45%) | 10 (50%) |
| The tool is useful in determining a patient's level-of-care | 0 (0%) | 0 (0%) | 1 (5%) | 12 (60%) | 7 (35%) |
| I am satisfied with this tool's ability to help guide level-of-care recommendations | 0 (0%) | 0 (0%) | 3 (15%) | 11 (55%) | 6 (30%) |
| Using this tool will improve patient safety at my hospital | 0 (0%) | 1 (5.3%) | 3 (16%) | 7 (37%) | 8 (42%) |
| Using this tool will improve communication with other specialties | 0 (0%) | 1 (5%) | 1 (5%) | 10 (50%) | 8 (40%) |
| Using this tool results in improved use of resources | 0 (0%) | 0 (0%) | 2 (10%) | 10 (50%) | 8 (40%) |
| Acceptability | | | | | |
| Using the tool would increase the chance of lawsuits | 4 (20%) | 13 (65%) | 3 (15%) | 0 (0%) | 0 (0%) |
| The evidence supporting the tool is flawed | 5 (26%) | 13 (68%) | 1 (5.3%) | 0 (0%) | 0 (0%) |
| The tool fails to account for important clinical information | 2 (10%) | 12 (60%) | 1 (5%) | 4 (20%) | 1 (5%) |
| The environment I work in makes it hard to use the tool | 9 (45%) | 7 (35%) | 3 (15%) | 1 (5%) | 0 (0%) |

Note: Questions are grouped into those that assessed usability versus those that evaluated acceptability of the clinical decision support.




Analysis

Descriptive statistics were reported for participant demographic characteristics and structured survey responses. Usability problems identified through the simulated cases were summarized using content analysis.[29] [30] [31] Each video recording was reviewed by two members of the research team and a previously reported coding scheme was used to categorize the types of problems encountered ([Supplementary Appendix C], available in the online version).[31] For example, one code was for “mistakes” in entering patient clinical/imaging findings, including failure to select the appropriate clinical finding and/or incorrectly selecting a finding that was not present. Usability problems were summarized by each of the three phases of prototype updating. In addition, each usability problem was categorized as an interpretational problem (i.e., failing to understand the terminology or wording of instructions) or an operational problem (i.e., failing to follow instructions or navigate through the tool as anticipated). Finally, we also compared user experience by prototype iteration, including the number and types of usability problems, along with user completion times.
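As a rough illustration of how video-coded usability problems can be tallied by prototype phase and problem category (the counts reported in [Table 3]), a minimal sketch follows; the record shape, code labels, and helper names are ours and are not part of the published coding scheme.

```typescript
// Minimal tallying sketch; field names and codes are illustrative only.
interface CodedProblem {
  participantId: number;
  phase: 1 | 2 | 3;                         // prototype iteration
  code: "mistake" | "slip" | "navigation" | "instructions" | "layout" | "consistency";
  category: "interpretational" | "operational";
}

// Count coded problems per (phase, category) pair.
function tally(problems: CodedProblem[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const p of problems) {
    const key = `phase ${p.phase} / ${p.category}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}

const coded: CodedProblem[] = [
  { participantId: 1, phase: 1, code: "mistake", category: "interpretational" },
  { participantId: 2, phase: 1, code: "navigation", category: "operational" },
  { participantId: 11, phase: 2, code: "instructions", category: "interpretational" },
];
console.log(tally(coded)); // e.g., Map { 'phase 1 / interpretational' => 1, ... }
```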

Results from the semi-structured interviews were analyzed using inductive thematic analysis.[32] First, the audio recording from each interview was professionally transcribed (Landmark Associates, Inc., Phoenix, AZ, United States). Next, two authors separately analyzed and independently assigned codes to the first three interview transcripts using Dedoose software version 8.3.35 (Dedoose, Hermosa Beach, CA, United States). The codebook was further modified based on input from a qualitative methods expert. Using the final coding scheme, each remaining transcript was independently coded by two reviewers and discrepancies were then reconciled by consensus of the two reviewers. The full list of codes is shown in [Supplementary Appendix C] (available in the online version). Next, major themes and sub-themes were inductively assigned to represent the main insights from the interviews, reflecting the comments that were most novel, relevant to future implementation efforts, and common among participants. The final themes were decided based on the input of both coders and two qualitative methods experts. User testing was completed after reaching thematic saturation, when no substantially new ideas emerged from further interviews.[33]



Results

Twenty physicians participated in testing sessions, with an equal number of neurosurgery and EM participants. Most participants were male (60%), 30 to 39 years old (55%), faculty physicians (70%), and affiliated with teaching hospitals (80%). Participant demographic characteristics are summarized in [Table 2]. The usability testing sessions lasted a mean of 22 minutes (range 12–33 minutes).

Table 2 Participant demographic characteristics

| Characteristic | Frequency (%) |
| --- | --- |
| Specialty | |
| Emergency medicine | 10 (50) |
| Neurosurgery | 10 (50) |
| Gender | |
| Male | 12 (60) |
| Female | 8 (40) |
| Age | |
| Younger than 30 years | 1 (5) |
| 30–39 years | 11 (55) |
| 40–49 years | 4 (20) |
| 50–59 years | 2 (10) |
| 60 years or older | 1 (5) |
| Years practicing as an attending | |
| Resident/fellow | 6 (30) |
| 0–4 years | 6 (30) |
| 5–9 years | 2 (10) |
| 10 years or longer | 6 (30) |
| Teaching hospital affiliation | |
| Yes | 16 (80) |
| No | 4 (20) |
| Freestanding children's hospital | |
| Yes | 19 (95) |
| No | 1 (5) |
| Hospital trauma level | |
| Level 1 trauma center | 15 (75) |
| Level 2 trauma center | 3 (15) |
| Non-trauma center | 2 (10) |

Usability Testing Summary

Overall, 88% of recommendations given by participants were concordant with those provided by the KIIDS-TBI tool. Among discordant recommendations, 57% recommended a lower level of care than the KIIDS-TBI tool and 43% recommended a higher level of care. As shown in [Table 3], the more common and impactful usability problems were interpretational. For example, several participants were unsure when to select the response option for “extra-axial hematoma” because the wording of that prompt was unclear. For instance, one participant stated,

Table 3 Results by prototype development stage

| | Category | Phase 1 | Phase 2 | Phase 3 |
| --- | --- | --- | --- | --- |
| Number of usability problems | NA | 35 | 4 | 8 |
| Number of mistakes made | NA | 18 | 2 | 2 |
| Mean time for case review and data entry (s) | NA | 91 | 76 | 79 |
| Most common usability problems (average per participant) | | | | |
| Mistake | | 18 | 2 | 2 |
| Mistake examples: selecting extra-axial hematoma in addition to subdural hematoma; selecting cerebral contusion when not given an option for subarachnoid hemorrhage | Interpretational | 15 | 2 | 1 |
| Mistake examples: selecting the incorrect GCS score; incorrectly selecting depressed skull fracture | Operational | 3 | 0 | 1 |
| Slip | Operational | 6 | 0 | 3 |
| Navigation | Operational | 4 | 0 | 1 |
| Understanding instructions | Interpretational | 5 | 2 | 2 |
| Layout | Operational | 1 | 0 | 0 |
| Consistency | Operational | 1 | 0 | 0 |
| Most common data entry mistakes[a] | | | | |
| Extra-axial hematoma | Interpretational | 5 | 1 | 0 |
| Cerebral contusion | Interpretational | 8 | 0 | 1 |
| GCS score | Operational | 0 | 0 | 1 |
| Subdural hematoma | Interpretational | 1 | 0 | 0 |
| Epidural hematoma | Interpretational | 1 | 1 | 0 |
| Midline shift | Operational | 2 | 0 | 0 |
| Fracture depressed ≥ skull width | Operational | 1 | 0 | 0 |

Abbreviation: GCS, Glasgow Coma Scale.

Note: NA, not applicable to that row.

Note: Phase 1 included 10 participants, phase 2 included 4 participants, and phase 3 included 6 participants. The problem category distinguishes problems related to difficulty interpreting the terminology or wording of instructions (interpretational) from those related to not correctly extracting information from the cases or navigating through the tool (operational).

a Mistakes included selecting a finding when it was not present and/or failing to select a finding that was present.


“I think question 4 could be tricky…just it's not 100% clear what you're trying to get at…”

Another common interpretational problem reflected participants' uncertainty regarding how to distinguish subarachnoid hemorrhage from cerebral contusions. These interpretational problems accounted for 18 of 22 mistakes. Operational problems more commonly involved accidentally selecting the wrong hemorrhage type or having difficulty navigating from the input to the output screens in the CDS.

Testing Phase 1

The initial CDS version tested by the first 10 participants is shown in [Fig. 1]. As shown in [Table 3], these participants encountered 35 usability problems, the most common of which were mistakes (18 total) related to mislabeling extra-axial hematomas (5) or cerebral contusions (8). Confusion about when to select “extra-axial hematoma” as a tool input also manifested as difficulty understanding the instructions (five problems). Based on participant suggestions, after phase 1 we changed the wording of the question prompt related to the presence of extra-axial hematomas.

From discussions with participants, we also learned that many had incorrectly selected a cerebral contusion as being present in case 3 because there was no response option for subarachnoid hemorrhage. Although we initially omitted that response option because it did not influence predicted risk, based on participant feedback, an input option for subarachnoid hemorrhage was added in phase 2.

Another common problem in phase 1 was “slips” (i.e., mistakes that users corrected themselves). These typically involved participants believing they had to click the response option of “no,” which in fact changed the default answer from “no” to “yes.” Although these slips were quickly corrected, based on participant suggestions we changed the input to require manually selecting yes/no for each question. Finally, there were four navigation problems in phase 1, which typically involved being unable to select “View Risk Score” without first completing all input prompts. To address this problem, we added a prompt reading “Please complete all the fields” that appeared when users tried to view the risk score without completing all inputs. Similarly, some participants noted uncertainty regarding how to interpret the “institutional recommendations,” which indicated an appropriate level of care based on the opinions of leaders at each participant's institution. Consequently, in phase 2 we added an information icon that offered an additional explanation when hovered over with the mouse.
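A minimal sketch of the validation behavior added after phase 1 is shown below: every question starts unanswered rather than defaulting to “no,” and the risk score is shown only once each field has an explicit yes/no answer, otherwise displaying the “Please complete all the fields” prompt. The type and function names are ours for illustration and are not taken from the prototype's source code.

```typescript
// Illustrative sketch of the phase-2 validation behavior described above:
// inputs default to "unanswered" (no pre-selected "no"), and the risk score
// is displayed only when every question has an explicit yes/no answer.
type Answer = "yes" | "no" | "unanswered";

interface CdsForm {
  [question: string]: Answer;
}

function tryViewRiskScore(form: CdsForm): string {
  const incomplete = Object.values(form).some((a) => a === "unanswered");
  if (incomplete) {
    return "Please complete all the fields"; // prompt added after phase 1
  }
  return "Showing risk score...";
}

const form: CdsForm = {
  "Epidural hematoma": "no",
  "Subdural hematoma": "yes",
  "Midline shift": "unanswered",
};
console.log(tryViewRiskScore(form)); // "Please complete all the fields"
```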



Testing Phase 2

The version of the tool used in testing phase 2 is shown in [Fig. 2]. The number of usability problems (4) and, specifically, the number of mistakes (2) decreased substantially compared with phase 1. However, most of the decline resulted from fewer participants mislabeling cerebral contusions. Based on both the mistakes made and participant feedback, there remained confusion related to the question prompt for extra-axial hematomas. After reviewing a variety of proposed solutions with participants, we added a question with branching logic to better distinguish epidural, subdural, and other extra-axial hematomas. Based on a participant suggestion, we also modified the wording related to fracture depression. For patients lacking any risk factors, we also replaced the “0%” predicted risk in the output screen with “< 0.10%” based on feedback that the former output implied no possible risk of a negative outcome.
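The sketch below illustrates the two refinements made after phase 2, a branching follow-up question to classify extra-axial hematomas and a floor on the displayed risk for the lowest-risk patients. The question structure and the 0.10% floor come from the text above; the function names and exact formatting are hypothetical.

```typescript
// Illustrative sketch of the phase-3 refinements described above.
type HematomaType = "epidural" | "subdural" | "other extra-axial" | "none";

// Branching follow-up: the subtype question is only asked when
// extra-axial blood is reported as present.
function classifyHematoma(
  anyExtraAxialBlood: boolean,
  subtype?: "epidural" | "subdural"
): HematomaType {
  if (!anyExtraAxialBlood) return "none";
  return subtype ?? "other extra-axial";
}

// Avoid implying zero possible risk for the lowest-risk patients.
function displayRisk(predictedRiskPercent: number): string {
  return predictedRiskPercent < 0.10
    ? "< 0.10%"
    : `${predictedRiskPercent.toFixed(1)}%`;
}

console.log(classifyHematoma(true, "subdural")); // "subdural"
console.log(displayRisk(0));                     // "< 0.10%"
console.log(displayRisk(4.2));                   // "4.2%"
```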

Fig. 2 The input screen of the CDS prototype used in testing phase 2. Arrows identify changes made in the first update, including adding a response option about the presence of subarachnoid hemorrhage, changing the wording regarding extra-axial hematoma, and adding an explanatory prompt when some inputs were left unanswered. The yes/no response options were also changed from a switch with default “no” to manual selection icons.


Testing Phase 3

For the final six participants, who were shown the third iteration of the CDS tool ([Fig. 3], online at https://head-injury-risk-predictor.web.app/case-1.html), the number of usability problems (8) and mistakes (2) were similar to phase 2. However, problems related to the interpretation of the question prompts were nearly eliminated. The remaining problems participants encountered were typically minor, most often involving “slips” (i.e., temporary mistakes the user corrected) and two data entry mistakes.

Fig. 3 The final input (A) and output (B) screens of the CDS prototype used in testing phase 3. Arrows identify changes added in the second update, including branching logic to classify extra-axial hematomas; updated wording to describe skull fracture depression; and a modified predicted risk display for the lowest risk patients. The information icon in the output was added during phase 2.


Acceptability and Usability Survey Results

Results of the structured survey are shown in [Table 1]. Nearly all (≥ 95%) respondents indicated that the tool was clear, easy to use, and helpful for determining a patient's level of care. Most respondents also felt that the tool was likely to improve patient safety (79%), use of resources (90%), and communication across specialties (90%). While a minority (25%) of respondents indicated that the CDS failed to account for important clinical information (e.g., missing relevant data points), respondents otherwise reported few major flaws.



Thematic Analysis

We identified four primary themes resulting from the semi-structured interviews, which are summarized in [Table 4].

Table 4

Major themes and sub-themes identified in the thematic analysis

Support evidence-based decision-making

 • CDS tool could increase confidence in decision-making

 • CDS tool is useful for facilitating level-of-care decisions

 • CDS tool may be useful for influencing community hospital transfer decisions

 • CDS tool could change existing practice patterns

 • The CDS tool can help address inconsistencies and support standardization

 • The CDS tool might change neurosurgery consultation practices

 • Need to understand the underlying evidence

Improves clinical workflow

 • CDS enhances communication between specialties

 • CDS may speed up routine workflow

 • CDS can help address inconsistencies and support standardization

Maximizing CDS Impact

 • CDS may have particular value in private hospital settings

 • ED physicians can effectively use the CDS and would benefit from its guidance

 • CDS use by ED physicians depends on rapid availability of radiology reads

 • CDS would be particularly useful to trainees and mid-level providers

Potential pitfalls

 • Unintended effects of the tool on patient care

 • Neurosurgery needs to retain disposition decision-making autonomy to avoid conflicts

 • Medicolegal considerations related to using the CDS tool

Abbreviations: CDS, clinical decision support; ED, emergency department.


Theme 1: Support Evidence-Based Decision-Making

This theme reflected the broad responses indicating that using the CDS tool would standardize and facilitate evidence-based decision-making. This included increasing confidence in physician decision-making and helping determine an appropriate level of care. For example, one participant explained,

“I think it would be valuable to have that, either backup support for what I was thinking or, ‘Oh, well, that's a good point if it—the chance is that big. Let's do that. Or if the chance is that low, sure. That seems appropriate.’” (Participant 2, Emergency Medicine)

Although not agreed on by all participants, some physicians also explained that the CDS could potentially change existing practice patterns, including some hospital transfer decisions and neurosurgery consultation protocols. For example, one physician noted,

“If I'm in a community hospital and we don't have neurosurgery, for example, then that tool will help me, you know, talk to a neurosurgeon and decide if it's safe to keep him at my hospital, or they need to be sent over.” (Participant 14, Emergency Medicine)

While not finding flaws with the evidence presented, some participants noted that they would want to spend more time assessing the source literature.



Theme 2: Improve Clinical Workflow

Aside from the potential to improve evidence-based decision-making, participants also noted ways that the CDS could improve the efficiency and quality of clinical workflow. Multiple participants explained, for example, that the CDS could improve communication between specialties. One EM physician said,

“That could be useful just to know which details to communicate to [neurosurgery]….” (Participant 17, Emergency Medicine)

Similarly, participants noted that using CDS could remove inconsistencies in clinical practice, and in doing so, also speed up workflow. For example,

“… if we have a better idea of what the disposition's going to be, we can start the process for that.” (Participant 2, Emergency Medicine)



Theme 3: Maximizing CDS Impact

The participants discussed several specific approaches that would maximize the effectiveness of CDS implementation. For example, participants generally said that having EM physicians serve as the initial users would be feasible and effective. One neurosurgeon explained,

“I do believe this is very helpful for an arrival. The ER physician has an idea in which direction this is going.” (Participant 18, Neurosurgery)

At the same time, several participants noted that timely radiology interpretations would be needed for use by EM physicians. Some participants also noted that CDS would be particularly valuable to trainees (i.e., resident/fellow physicians) and mid-level providers (e.g., nurse practitioners), as well as in private hospital settings that typically lack in-house neurosurgical coverage. For example, one neurosurgeon said,

“Most…number one trauma centers are not actually an academic setting …Most settings are private actually… This can be helpful for private settings, very helpful actually, for neurosurgeons that you look at the scan from the home, and say, ‘I’ll see the kid in the morning...” (Participant 18, Neurosurgery)



Theme 4: Potential Pitfalls

Participants noted occasional concerns related to unintended consequences, such as potential conflicts regarding CDS interpretation across specialties. In addition, several physicians noted that medicolegal considerations may impact use. For example,

“[P]eople may be reluctant to document a prediction tool in the medical record…If you went against the prediction model and something bad happened to the kid, they might be afraid of some legal repercussions.” (Participant 15, Neurosurgery)

Likewise, some neurosurgeons expressed concern that the CDS recommendations could be restrictive if interpreted too rigidly. For example, one neurosurgeon explained,

“Once it falls on the lap of the neurosurgeon, I'd want to not be forced to walk down this aisle and no other alternative…In other words, I want to have the option of using my own judgment as well.” (Participant 1, Neurosurgery)



Discussion

Using data from a clinically diverse, multicenter cohort of EM and neurosurgery physicians, we found broad support for using a novel electronic CDS tool to aid with the management of children with mTBI and ICI. End-user input informed two design iterations, which substantially reduced the number of usability problems encountered. Using both structured and semi-structured feedback, we found that participants believed the CDS was clear, easy to use, and likely to improve the efficiency and safety of patient care, supporting the viability of future clinical use.

This study builds on a strong foundation of using electronic CDS in pediatric trauma and critical care, which has highlighted the importance of iterative development and refinement.[17] [34] [35] During previous focus group interviews, we identified the output data most useful for clinicians, such as the anticipated risk of neurological decline, and features, such as anticipated costs, that clinicians found problematic. That feedback in the wireframe stage allowed us to avoid making substantive changes to the CDS content, and instead enabled us to focus on the clarity of presentation and human-computer interaction. Through two separate design updates, we addressed confusion related to terminology and wording, navigation problems, and the clarity of the output display. This iterative process highlighted the value of using participatory design,[21] [22] [36] which helped identify shortcomings not anticipated by our multidisciplinary design team.

As highlighted in previous research, a key factor driving CDS adoption is the extent to which clinicians believe that the information provided is clinically valuable.[37] [38] In this study, nearly all participants believed that the CDS provided clinically useful information that would likely improve patient safety and decrease resource utilization. In semi-structured interviews, physicians explained that the tool may be particularly helpful to mid-level providers, trainee physicians, and physicians in community hospitals that lack access to in-house neurosurgery. These comments are consistent with the notion that using the CDS tool engages a more deliberate analytic strategy (so-called type 2 reasoning in the dual-process model).[39] Such a process may be particularly important for less experienced clinicians, who may be more error-prone when relying on more automated, gestalt judgment (i.e., “type 1” reasoning). Similarly, previous studies have shown that CDS can be particularly valuable when clinicians are busy or fatigued and their cognitive resources are strained.[40] [41]

Particularly relevant to future implementation efforts, participants from both specialties felt that having the CDS completed by EM providers prior to initiating a neurosurgery consult would be both feasible and clinically helpful. EM physicians consistently stated that rapid radiology interpretations combined with their baseline knowledge would allow them to use the electronic CDS to anticipate management decisions and inform discussions prior to interacting with consulting neurosurgeons.

While not wanting to forgo neurosurgery consultation, EM physicians felt that having access to CDS early in a patient encounter would provide useful information about likely patient outcomes. Although not wanting to relinquish final decision-making autonomy, neurosurgeons generally agreed that initiating CDS use with EM providers would likely improve the overall quality and efficiency of communication and care. As confidence grows in the underlying CDS evidence, providers may adopt expanded uses, such as guiding transfer practices and discharging selected patients home after emergency department observation.

This study has several limitations. First, although we included participants from 10 institutions, there were relatively few cases and most participants came from academic children's hospitals that were level-one trauma centers. While participants did provide valuable perspectives from non-teaching and community hospitals, additional experience in diverse settings and clinical contexts will be needed to substantiate some conclusions. Second, while the EHR environment was intended to simulate real-world use, more prolonged real-world testing is needed to identify both technical barriers and more diverse patient presentations that could be encountered during real encounters. Third, although no participants felt that using the KIIDS-TBI tool would increase the risk of lawsuits, some did note medicolegal concerns regarding CDS-discordant recommendations.[42] While previous studies have addressed such concerns in similar populations,[43] this topic should be explored in future work with legal and regulatory experts. Finally, although we attempted to enroll participants across a range of practice settings and career levels, our dependence on voluntary participation may limit the generalizability of some conclusions.



Conclusion

This multicenter study of EM and neurosurgery physicians supported the acceptability and usability of a prototype CDS tool for children with mTBI and ICI. Next steps should include the development of a mobile application to broaden the tool's availability, and most importantly, real-world testing. These results provide a strong foundation for a larger implementation/effectiveness trial to evaluate both the feasibility of implementing the KIIDS-TBI tool and its effects on patient outcomes in diverse health care settings.[37]



Clinical Relevance Statement

Electronic CDS has the potential to improve the safe, resource-efficient management of children with mTBI and ICI. The usability and acceptability testing described in the manuscript provides a strong foundation for implementing the prototype CDS tested. These results have direct implications for clinicians seeking to use this CDS tool and also provide support for a larger implementation trial of this electronic CDS.



Multiple Choice Questions

  1. Which of the following is a potential pitfall noted of electronic clinical decision support for children with mild traumatic brain injuries and intracranial injuries?

    • CDS recommendations could be restrictive if interpreted too rigidly.

    • CDS could improve patient safety.

    • CDS could improve communication across specialties.

    • CDS could increase costs of care.

    Correct Answer: The correct answer is option a. Overall, participants noted a variety of benefits of CDS in this population. However, some neurosurgeons noted that the CDS could become restrictive if the results were interpreted too rigidly. Instead, the participants wanted to retain the right to use their own judgment in final decision-making.

  2. Which of the following describes the most likely workflow proposed for implementing electronic CDS for children with mTBI and ICI?

    • Artificial intelligence will be used to populate all information.

    • Emergency department nurses will complete the CDS and share the results with the emergency physician/advanced practice provider.

    • Emergency department physicians/advanced practice providers will complete the CDS and share the results with the neurosurgery team.

    • Only the neurosurgery team will interact with the CDS.

    Correct Answer: The correct answer is option c. Both EM and neurosurgery participants agreed that the most efficient workflow would be for the emergency department physician or advanced practice provider to initially complete the electronic CDS. That clinician would then share the results with the consulting neurosurgery team. Participants felt this approach was practical, efficient, and would improve communication and workflow between both teams.



Conflict of Interest

The authors have a patent pending for the electronic clinical decision support described in this manuscript. The authors declare no other conflicts of interest.

Acknowledgment

We thank our institution's programming team for their work on the early phase of the prototype programming and design.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects. The authors' institutional review board reviewed and approved the study procedures with a waiver of documentation of consent (IRB #201902091). Participants were therefore provided with a consent document, a verbal study description, and an opportunity to ask questions before verbally agreeing to proceed with the study.


* P.Y. and R.E.F. shared equal responsibility for study supervision.


Supplementary Material

  • References

  • 1 National Center for Injury Prevention and Control (US). Report to Congress on mild traumatic brain injury in the United States: steps to prevent a serious public health problem. Centers for Disease Control and Prevention; 2003
  • 2 McKinlay A, Grace RC, Horwood LJ, Fergusson DM, Ridder EM, MacFarlane MR. Prevalence of traumatic brain injury among children, adolescents and young adults: prospective evidence from a birth cohort. Brain Inj 2008; 22 (02) 175-181
  • 3 Schneier AJ, Shields BJ, Hostetler SG, Xiang H, Smith GA. Incidence of pediatric traumatic brain injury and associated hospital resource utilization in the United States. Pediatrics 2006; 118 (02) 483-492
  • 4 Koepsell TD, Rivara FP, Vavilala MS. et al. Incidence and descriptive epidemiologic features of traumatic brain injury in King County, Washington. Pediatrics 2011; 128 (05) 946-954
  • 5 Bowman SM, Bird TM, Aitken ME, Tilford JM. Trends in hospitalizations associated with pediatric traumatic brain injuries. Pediatrics 2008; 122 (05) 988-993
  • 6 Lumba-Brown A, Yeates KO, Sarmiento K. et al. Centers for Disease Control and Prevention Guideline on the Diagnosis and Management of Mild Traumatic Brain Injury Among Children. JAMA Pediatr 2018; 172 (11) e182853
  • 7 Kuppermann N, Holmes JF, Dayan PS. et al; Pediatric Emergency Care Applied Research Network (PECARN). Identification of children at very low risk of clinically-important brain injuries after head trauma: a prospective cohort study. Lancet 2009; 374 (9696): 1160-1170
  • 8 Babl FE, Borland ML, Phillips N. et al; Paediatric Research in Emergency Departments International Collaborative (PREDICT). Accuracy of PECARN, CATCH, and CHALICE head injury decision rules in children: a prospective cohort study. Lancet 2017; 389 (10087): 2393-2402
  • 9 Greenberg JK, Jeffe DB, Carpenter CR. et al. North American survey on the post-neuroimaging management of children with mild head injuries. J Neurosurg Pediatr 2018; 23 (02) 227-235
  • 10 Greenberg JK, Stoev IT, Park TS. et al. Management of children with mild traumatic brain injury and intracranial hemorrhage. J Trauma Acute Care Surg 2014; 76 (04) 1089-1095
  • 11 Greenberg JK, Yan Y, Carpenter CR. et al. Development and internal validation of a clinical risk score for treating children with mild head trauma and intracranial injury. JAMA Pediatr 2017; 171 (04) 342-349
  • 12 Neumayer KE, Sweney J, Fenton SJ, Keenan HT, Flaherty BF. Validation of the “CHIIDA” and application for PICU triage in children with complicated mild traumatic brain injury. J Pediatr Surg 2020; 55 (07) 1255-1259
  • 13 Greenberg JK, Ahluwalia R, Hill M. et al. Development and external validation of the KIIDS-TBI tool for managing children with mild traumatic brain injury and intracranial injuries. Acad Emerg Med 2021; 28 (12) 1409-1420
  • 14 Stiell IG, Bennett C. Implementation of clinical decision rules in the emergency department. Acad Emerg Med 2007; 14 (11) 955-959
  • 15 Green SM. When do clinical decision rules improve patient care?. Ann Emerg Med 2013; 62 (02) 132-135
  • 16 Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010; 19 (Suppl. 03) i68-i74
  • 17 Kiatchai T, Colletti AA, Lyons VH, Grant RM, Vavilala MS, Nair BG. Development and feasibility of a real-time clinical decision support system for traumatic brain injury anesthesia care. Appl Clin Inform 2017; 8 (01) 80-96
  • 18 Greenberg JK, Otun A, Nasraddin A. et al. Electronic clinical decision support for children with minor head trauma and intracranial injuries: a sociotechnical analysis. BMC Med Inform Decis Mak 2021; 21 (01) 161
  • 19 Nielsen J, Clemmensen T, Yssing C. Getting access to what goes on in people's heads? reflections on the think-aloud technique. Proceedings of the Second Nordic Conference on Human-Computer Interaction; Aarhus, Denmark; 2002
  • 20 Yen PY, Bakken S. A comparison of usability evaluation methods: heuristic evaluation versus end-user think-aloud protocol—an example from a web-based communication tool for nurse scheduling. AMIA Annu Symp Proc 2009; 2009: 714-718
  • 21 Hartzler AL, Chaudhuri S, Fey BC, Flum DR, Lavallee D. Integrating patient-reported outcomes into spine surgical care through visual dashboards: lessons learned from human-centered design. EGEMS (Wash DC) 2015; 3 (02) 1133
  • 22 Horsky J, Schiff GD, Johnston D, Mercincavage L, Bell D, Middleton B. Interface design principles for usable decision support: a targeted review of best practices for clinical prescribing interventions. J Biomed Inform 2012; 45 (06) 1202-1216
  • 23 Jaspers MW, Steen T, van den Bos C, Geenen M. The think aloud method: a guide to user interface design. Int J Med Inform 2004; 73 (11-12): 781-795
  • 24 Jaspers MW. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inform 2009; 78 (05) 340-353
  • 25 van den Haak M, De Jong M, Jan Schellens P. Retrospective vs. concurrent think-aloud protocols: testing the usability of an online library catalogue. Behav Inf Technol 2003; 22 (05) 339-351
  • 26 Brehaut JC, Graham ID, Wood TJ. et al. Measuring acceptability of clinical decision rules: validation of the Ottawa acceptability of decision rules instrument (OADRI) in four countries. Med Decis Making 2010; 30 (03) 398-408
  • 27 Yen P-Y, Sousa KH, Bakken S. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results. J Am Med Inform Assoc 2014; 21 (e2): e241-e248
  • 28 Peres SC, Pham T, Phillips R. Validation of the System Usability Scale (SUS): SUS in the Wild. Proc Hum Factors Ergon Soc Annu Meet 2013; 57 (01) 192-196
  • 29 Yen PY, Bakken S. Review of health information technology usability study methodologies. J Am Med Inform Assoc 2012; 19 (03) 413-422
  • 30 Yen PY, Walker DM, Smith JMG, Zhou MP, Menser TL, McAlearney AS. Usability evaluation of a commercial inpatient portal. Int J Med Inform 2018; 110: 10-18
  • 31 Kushniruk AW, Borycki EM. Development of a video coding scheme for analyzing the usability and usefulness of health information systems. Stud Health Technol Inform 2015; 218: 68-73
  • 32 Nowell LS, Norris JM, White DE, Moules NJ. Thematic analysis: striving to meet the trustworthiness criteria. Int J Qual Methods 2017; 16 (01) 1609406917733847
  • 33 Curry LA, Nembhard IM, Bradley EH. Qualitative and mixed methods provide unique contributions to outcomes research. Circulation 2009; 119 (10) 1442-1452
  • 34 Tham E, Swietlik M, Deakyne S. et al; Pediatric Emergency Care Applied Research Network (PECARN). Clinical decision support for a multicenter trial of pediatric head trauma: development, implementation, and lessons learned. Appl Clin Inform 2016; 7 (02) 534-542
  • 35 Grinspan ZM, Eldar YC, Gopher D. et al. Guiding Principles for a Pediatric Neurology ICU (neuroPICU) Bedside Multimodal Monitor: findings from an International Working Group. Appl Clin Inform 2016; 7 (02) 380-398
  • 36 Gasson S. Human-centered vs. user-centered approaches to information system design. J Inf Technol Theory Appl 2003; 5 (02) 5 (JITTA)
  • 37 Masterson Creber RM, Dayan PS, Kuppermann N. et al; Pediatric Emergency Care Applied Research Network (PECARN) and the Clinical Research on Emergency Services and Treatments (CREST) Network. Applying the RE-AIM framework for the evaluation of a clinical decision support tool for pediatric head trauma: a mixed-methods study. Appl Clin Inform 2018; 9 (03) 693-703
  • 38 Cabana MD, Rand CS, Powe NR. et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999; 282 (15) 1458-1465
  • 39 Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract 2009; 14 (1, Suppl 1): 27-35
  • 40 Ozkaynak M, Metcalf N, Cohen DM, May LS, Dayan PS, Mistry RD. Considerations for designing EHR-embedded clinical decision support systems for antimicrobial stewardship in pediatric emergency departments. Appl Clin Inform 2020; 11 (04) 589-597
  • 41 Kane B, Carpenter C. Cognition and decision making. In: The Washington Manual of Patient Safety and Quality Improvement; Philadelphia, PA: Wolters Kluwer; 2016: 195-209
  • 42 Stark DE, Kumar RB, Longhurst CA, Wall DP. The quantified brain: a framework for mobile device-based assessment of behavior and neurological function. Appl Clin Inform 2016; 7 (02) 290-298
  • 43 Gimbel RW, Pirrallo RG, Lowe SC. et al. Effect of clinical decision rules, patient cost and malpractice information on clinician brain CT image ordering: a randomized controlled trial. BMC Med Inform Decis Mak 2018; 18 (01) 20

Address for correspondence

Jacob K. Greenberg, MD, M.S.C.I.
Department of Neurosurgery, Washington University School of Medicine
660 S. Euclid Ave., Box 8057, St. Louis, MO 63110
United States   

Publication History

Received: 14 August 2021

Accepted: 18 February 2022

Article published online:
27 April 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
