Appl Clin Inform 2024; 15(04): 798-807
DOI: 10.1055/s-0044-1788979
Research Article

An Advanced Cardiac Life Support Application Improves Performance during Simulated Cardiac Arrest

Authors

  • Michael Senter-Zapata*

    1   Harvard Medical School, Boston, Massachusetts, United States
    2   Brigham and Women's Hospital, Boston, Massachusetts, United States
    3   Healthcare Transformation Lab, Massachusetts General Hospital, Boston, Massachusetts, United States
  • Dylan V. Neel*

    1   Harvard Medical School, Boston, Massachusetts, United States
  • Isabella Colocci

    1   Harvard Medical School, Boston, Massachusetts, United States
  • Afaf Alblooshi

    4   STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, Massachusetts, United States
    5   Department of Medical Education, United Arab Emirates University College of Medicine and Health Sciences, Al Ain, Abu Dhabi, United Arab Emirates
  • Faten Abdullah M. AlRadini

    4   STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, Massachusetts, United States
    6   Department of Clinical Sciences, College of Medicine, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
  • Brian Quach

    2   Brigham and Women's Hospital, Boston, Massachusetts, United States
    4   STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Samuel Lyon

    1   Harvard Medical School, Boston, Massachusetts, United States
  • Maxwell Coll

    1   Harvard Medical School, Boston, Massachusetts, United States
    2   Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Andrew Chu

    3   Healthcare Transformation Lab, Massachusetts General Hospital, Boston, Massachusetts, United States
    8   Massachusetts General Hospital, Boston, Massachusetts, United States
  • Katharine W. Rainer

    9   Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States
  • Beth Waters

    10   Brigham and Women's Faulkner Hospital, Jamaica Plain, Massachusetts, United States
  • Christopher W. Baugh

    1   Harvard Medical School, Boston, Massachusetts, United States
    2   Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Roger D. Dias

    1   Harvard Medical School, Boston, Massachusetts, United States
    2   Brigham and Women's Hospital, Boston, Massachusetts, United States
    4   STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Haipeng Zhang

    1   Harvard Medical School, Boston, Massachusetts, United States
    2   Brigham and Women's Hospital, Boston, Massachusetts, United States
    11   Brigham Digital Innovation Hub, Brigham and Women's Hospital, Hale Building for Transformative Medicine, Boston, Massachusetts, United States
  • Andrew Eyre

    1   Harvard Medical School, Boston, Massachusetts, United States
    2   Brigham and Women's Hospital, Boston, Massachusetts, United States
    4   STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Eric Isselbacher

    1   Harvard Medical School, Boston, Massachusetts, United States
    3   Healthcare Transformation Lab, Massachusetts General Hospital, Boston, Massachusetts, United States
    8   Massachusetts General Hospital, Boston, Massachusetts, United States
  • Jared Conley

    1   Harvard Medical School, Boston, Massachusetts, United States
    3   Healthcare Transformation Lab, Massachusetts General Hospital, Boston, Massachusetts, United States
    8   Massachusetts General Hospital, Boston, Massachusetts, United States
  • Narath Carlile

    1   Harvard Medical School, Boston, Massachusetts, United States
    2   Brigham and Women's Hospital, Boston, Massachusetts, United States

Funding This study was funded by the Massachusetts General Hospital Healthcare Transformation Lab, Brigham Education Institute (BEI), the Brigham and Women's Internal Medicine Residency Program Office, and the Mass General Brigham Office of Graduate Medical Education Center of Expertise (COE) in MedEd.
 

Abstract

Objectives Variability in cardiopulmonary arrest training and management leads to inconsistent outcomes during in-hospital cardiac arrest. Existing clinical decision aids, such as American Heart Association (AHA) advanced cardiovascular life support (ACLS) pocket cards and third-party mobile apps, often lack comprehensive management guidance. We developed a novel, guided ACLS mobile app and, in a randomized controlled trial, evaluated user performance against the 2020 AHA ACLS guidelines during simulated cardiac arrest.

Methods Forty-six resident physicians were randomized to lead a simulated code team using the AHA pocket cards (N = 22) or the guided app (N = 24). The primary outcome was successful return of spontaneous circulation (ROSC). Secondary outcomes included code leader stress and confidence, AHA ACLS guideline adherence, and errors. A focus group of 22 residents provided feedback. Statistical analysis included two-sided t-tests and Fisher's exact tests.

Results App users showed a significantly higher ROSC rate (50 vs. 18%; p = 0.024), more frequent correct thrombolytic administration (54 vs. 23%; p = 0.029), backboard use (96 vs. 27%; p < 0.001), and end-tidal CO2 monitoring (58 vs. 27%; p = 0.033), and a greater increase in confidence from baseline (1.0 vs. 0.3; p = 0.005) than controls. A focus group of 22 residents indicated unanimous willingness to use the app, with 82% preferring it over the AHA pocket cards.

Conclusion Our guided ACLS app shows potential to improve user confidence and adherence to the AHA ACLS guidelines and may help to standardize in-hospital cardiac arrest management. Further validation studies are essential to confirm its efficacy in clinical practice.


Background and Significance

The position of “code” (cardiopulmonary arrest) team leader during cardiac arrest events in the hospital is one of the most stressful responsibilities that resident physicians face during training,[1] [2] a burden that was exacerbated during the coronavirus disease 2019 pandemic.[3] Residents receive advanced cardiovascular life support (ACLS) training prior to starting residency and are provided with the American Heart Association (AHA) pocket cards containing algorithms for common code scenarios, which represent the current gold standard for ACLS clinical decision support (CDS). While the AHA ACLS pocket cards have been an important advancement in ACLS care over the last several decades, advances in digital CDS technologies have made their standalone use increasingly limited. Furthermore, there can be months to years of lag time between ACLS training and a resident's first code event.[4] Although many medical schools and residency programs in the United States employ simulation-based ACLS curricula,[5] their implementation is variable, and not all domestic and international medical schools have access to state-of-the-art simulation training facilities, which can cost millions of dollars to build and hundreds of thousands of dollars per year to maintain.[6]

Inconsistent training opportunities lead to significant anxiety among residents, contribute to resident burnout, and lead to worse patient outcomes.[4] [7] Digital CDS tools have emerged as a potential solution, showing promising results in both simulated and real-world high-acuity clinical situations. CDS use has been associated with higher simulation performance scores,[8] improved adherence to guidelines,[9] [10] [11] [12] a higher number of correct ACLS interventions and fewer mistakes,[13] [14] reduced time to drug delivery,[12] [15] [16] subjectively decreased cognitive load,[17] improved user experience,[18] and increased likelihood of recommendation to colleagues.[19] However, some studies have demonstrated worse outcomes in simulated ACLS care when using bedside CDS tools,[20] due to a delay in initiating chest compressions, highlighting the need for careful design.

Although several ACLS iOS mobile CDS applications currently exist, most serve as single-screen organizational tools that either display event timers or a digitized version of the AHA ACLS pocket card. They often lack reminder functionality (e.g., on high-quality cardiopulmonary resuscitation [CPR] technique), internal timers, direct management guidance according to the AHA guidelines, or the ability to log key code events. We believed that the ideal ACLS CDS bedside aid would combine organizational and reminder tools with a step-by-step, checklist-based approach to guideline-based ACLS instruction, similar to the preflight and preoperative checklists used in the aviation and surgical fields.[21]

Therefore, our team designed an innovative, standalone iOS mobile app using the current AHA ACLS algorithms to help code leaders run effective, guideline-driven hospital codes. The app breaks down existing AHA ACLS algorithms into multiple, simple screens with yes/no decision buttons and checklists that guide users through appropriate code management at each branch point in the algorithm. This combination of organizational functionality and point-of-care clinical guidance has the potential to increase adherence to guideline-based practice and to combat code leader anxiety, cognitive load, and decision fatigue.


Objective

This study aimed to assess the efficacy of this ACLS mobile app in improving (1) subjective code leader experience and (2) objective performance according to AHA ACLS guidelines during simulated cardiac arrest.


Methods

App Development and Features

Using the Swift programming language, we developed a mobile iOS application to guide users through appropriate steps in cardiac arrest management according to a licensed copy of the 2020 AHA cardiac arrest algorithm. The app was designed for trainees in real-world clinical practice to help solidify the algorithmic steps of appropriate ACLS management by easing code leader cognitive load and allowing greater focus on diagnostics and therapeutics rather than on logistical organization and resource management. However, the app can help clinicians of all experience levels adhere to best-practice ACLS guidelines.

Our development team consisted of medical students, internal medicine resident physicians, attending physicians board certified in internal medicine and emergency medicine, a software engineer, and a UI/UX designer.

The app is designed to be opened by the code team leader upon arrival to the resuscitation. Once the user presses the start button, the app guides the user through appropriate cardiac arrest management steps using simple checklists and reminders to perform critical steps over multiple screens. The app walks users through the AHA cardiac arrest algorithm according to user inputs, such as the presence of a shockable versus nonshockable rhythm.
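This branch-point navigation can be pictured as a small state machine. The sketch below is a simplified, hypothetical Swift model of that idea; the type names, case names, and dosing cadence are illustrative assumptions, since the app's actual source is not public:

```swift
import Foundation

// Hypothetical, simplified model of the app's branch-point navigation;
// the shipped app's real logic and names are not public.
enum Rhythm {
    case shockable      // VF / pulseless VT
    case nonshockable   // asystole / PEA
}

enum CodeStep {
    case resumeCPR
    case deliverShock
    case giveEpinephrine
    case considerAmiodarone
    case reviewHsAndTs   // reversible causes ("H's & T's")
}

struct CodeState {
    var cprCycles = 0
    var shocksDelivered = 0
    var epinephrineDoses = 0
}

// Given the user's yes/no input at a pulse/rhythm check, return the
// checklist for the next screen and update the running counters.
func nextSteps(after rhythm: Rhythm, state: inout CodeState) -> [CodeStep] {
    state.cprCycles += 1
    switch rhythm {
    case .shockable:
        state.shocksDelivered += 1
        var steps: [CodeStep] = [.deliverShock, .resumeCPR]
        // Simplified cadence: epinephrine every other 2-minute cycle;
        // amiodarone considered after the third shock.
        if state.cprCycles % 2 == 0 {
            state.epinephrineDoses += 1
            steps.append(.giveEpinephrine)
        }
        if state.shocksDelivered >= 3 {
            steps.append(.considerAmiodarone)
        }
        return steps
    case .nonshockable:
        state.epinephrineDoses += 1
        return [.resumeCPR, .giveEpinephrine, .reviewHsAndTs]
    }
}
```

In this model, tapping “shockable” or “nonshockable” at a rhythm check returns the step list that drives the next checklist screen.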

In line with an ideal cardiac arrest app as outlined by Muller et al,[22] our app offers timers that track total code duration, CPR cycles, and epinephrine and amiodarone dosing. On-screen notifications coupled with vibrational (haptic) alerts after each pulse/rhythm check remind users to deliver high-quality CPR (i.e., compression rate of 100–120 per minute, compression depth of 2.5″, and full chest recoil).[23] The app causes the phone to vibrate in its user's hand at a rate of 100 vibrations per minute so that code leaders can track compression rate and relay immediate CPR feedback to the code team. Other haptic notifications alert users when pulse/rhythm checks and medication doses are due according to AHA guidelines and remind them to follow end-tidal CO2 (EtCO2) to assess compression quality and monitor for return of spontaneous circulation (ROSC). The app also features an “H&Ts” section (a list of common causes of cardiac arrest beginning with the letters “H” and “T”) with diagnostic and treatment information, where users can rule in and rule out common diagnoses and maintain an evolving differential diagnosis over multiple screens ([Fig. 1]). Within the H&Ts section, specific diagnostic and therapeutic recommendations are highlighted for each condition listed. The app also has a “Roles” section where trainees can ensure that all code roles are filled by appropriate staff. Lastly, all code data, including the number of medication doses/shocks administered, duration of the code, total CPR cycles, and timing of interventions, are stored locally on the user's iOS device and can be exported for documentation or research purposes. The app currently has no communication with any electronic medical record and does not utilize or store any identifiable patient data.
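Two of these features, the 100-per-minute haptic compression metronome and the locally stored, exportable event log, can be sketched in a few lines of UIKit Swift. The class and method names below are illustrative assumptions, not the app's actual code:

```swift
import UIKit

// Minimal sketch, assuming UIKit haptics: one tap per target compression
// at a configurable beats-per-minute rate. Names are hypothetical.
final class CompressionMetronome {
    private var timer: Timer?
    private let haptics = UIImpactFeedbackGenerator(style: .heavy)

    func start(beatsPerMinute: Double = 100) {
        haptics.prepare()
        timer = Timer.scheduledTimer(withTimeInterval: 60.0 / beatsPerMinute,
                                     repeats: true) { [haptics] _ in
            haptics.impactOccurred()   // haptic pulse to pace compressions
        }
    }

    func stop() { timer?.invalidate() }
}

// A locally stored, exportable code event log (no patient identifiers).
struct CodeEvent: Codable {
    let timestamp: Date
    let label: String        // e.g., "Epinephrine 1 mg", "Shock delivered"
}

final class EventLog {
    private(set) var events: [CodeEvent] = []

    func record(_ label: String) {
        events.append(CodeEvent(timestamp: Date(), label: label))
    }

    // Serialize the log as JSON for documentation or research export.
    func exportJSON() throws -> Data {
        let encoder = JSONEncoder()
        encoder.dateEncodingStrategy = .iso8601
        return try encoder.encode(events)
    }
}
```

A caller might start the metronome when CPR begins (`metronome.start()`), call `log.record("Epinephrine 1 mg")` as interventions occur, and export the JSON at debrief.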

Fig. 1 Design of the guided ACLS iOS mobile application. The app guides trainees through appropriate steps in management according to the 2020 American Heart Association cardiac arrest algorithm and includes easy to access timers, haptic reminders for correct CPR technique and medication administration, an “H&Ts” differential diagnosis section, and an exportable event log. ACLS, advanced cardiovascular life support; CPR, cardiopulmonary resuscitation.

Study Sample and Setting

Residents at the Brigham and Women's Hospital were invited to participate in a simulated cardiac arrest resuscitation at the STRATUS Center for Medical Simulation, located on campus. The sessions were designed in conjunction with the Emergency Medicine Residency program leadership and the STRATUS Simulation Team to serve primarily as an educational opportunity for residents to practice code leadership skills outside of their regular residency curriculum and to receive immediate feedback from a board-certified faculty member. Secondarily, the sessions served as testing environments to assess the utility of the guided ACLS mobile app in this randomized controlled trial. Given the focus on educational training, emergency medicine program leadership selected massive pulmonary embolism as the underlying etiology of cardiac arrest because its diagnostic complexity often makes it a diagnosis of exclusion in clinical practice, encouraging participants to rely on complex problem solving and data synthesis to reach an appropriate management plan. The case is outlined in [Supplementary File 1] (available in the online version) for ease of reproducibility.


Recruitment and Randomization of Study Participants

A total of 300 residents from various specialties including internal medicine, emergency medicine, general surgery, and anesthesia (postgraduate year [PGY] 1–3) were invited via email between September 2022 and April 2023 to partake in the educational sessions and were offered $50 USD Amazon gift cards for study participation. Inclusion criteria included internal medicine, emergency medicine, general surgery, and anesthesia resident physicians at the Brigham and Women's Hospital at any stage of residency training who had received previous ACLS certification according to the AHA standards and medical licensing regulations. Given the minimal risk of harm to subjects, the Institutional Review Board (IRB) waived the requirement for written documentation of informed consent according to IRB protocol number 2021P002066. Instead, participants gave verbal consent to study participation at the time of their scheduled simulation session.

Participants were then randomized by the primary authors, using the rand function in Excel v16.70 (Microsoft Corporation, Redmond, WA), to use the app (experimental group) or the AHA ACLS pocket card (control group) during the simulated code in a 1:1 allocation ratio ([Fig. 2]). Participants' characteristics (e.g., residency program and PGY-year) were balanced between groups.
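For illustration, the spreadsheet step can be mirrored in code. The Swift sketch below shows one plausible reading of a RAND()-based 1:1 assignment, in which each participant is allocated by an independent uniform draw (which, as in this study's 22 vs. 24 split, can leave arms slightly uneven); the authors' exact worksheet procedure is not described:

```swift
import Foundation

// One plausible re-creation of the Excel RAND()-based 1:1 randomization.
// Illustrative only; not the authors' actual spreadsheet procedure.
func randomize(participants: [String]) -> (app: [String], pocketCard: [String]) {
    var app: [String] = []
    var pocketCard: [String] = []
    for participant in participants {
        if Double.random(in: 0..<1) < 0.5 {   // analogous to =RAND() < 0.5
            app.append(participant)
        } else {
            pocketCard.append(participant)
        }
    }
    return (app, pocketCard)
}
```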

Fig. 2 Trial design. Randomized controlled trial design to evaluate the efficacy of our ACLS iOS mobile app. ACLS, advanced cardiovascular life support; AHA, American Heart Association.

A power calculation was performed according to current literature that shows baseline in-hospital cardiac arrest ROSC rates between 45 and 70%,[24] [25] [26] which have been shown to improve to 80 to 85% after simulation-based or educational interventions.[27] [28] Using these data, a target sample size of 36 participants was calculated to detect an effect size of 0.88 at α-level 0.05 and 80% power.
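For context, one standard two-group form of this calculation (the authors' exact formula and software are not reported) is

\[ n_{\text{per group}} = \frac{2\,\bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)^{2}}{d^{2}}, \qquad z_{0.975} \approx 1.96, \quad z_{0.80} \approx 0.84, \]

where d is the standardized effect size (here 0.88).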


Simulation Scenario

Following randomization, participants were given 3 minutes to familiarize themselves with either the iOS app or the pocket card. iOS app users were provided an iPhone 12 with the app preinstalled. Participants were all similarly briefed about the clinical scenario, various actor roles in the room, and clinical tools at their disposal. Both groups received clarification that the simulation team would be performing its duties true to life (as if resuscitating a real patient).

Once the scenario began, participants were given 15 minutes to function as code leader. The simulation room contained standard equipment, including a defibrillator and code cart with intravenous (IV) fluids, calcium, sodium bicarbonate, magnesium, epinephrine, amiodarone, and atropine. Emergency airway supplies were not provided given that a full intubation sequence was beyond the scope of the trial and not an endpoint.

Three simulation actors were present during sessions: one performing chest compressions, one bag-masking the mannequin, and one managing the defibrillator and administering IV medications. The actor performing chest compressions purposefully delivered poor-quality CPR at a low compression rate (e.g., 75 compressions/min) and an inadequate depth of 1 inch, while the actor performing bag-mask oxygenation purposefully administered breaths too frequently (one breath every second). Actors rehearsed these improper techniques as a team before simulation cases to help maintain consistency. Backboard placement, bag-mask initiation, defibrillation pad placement, and EtCO2 attachment, all interventions indicated by the 2020 AHA ACLS guidelines, were not performed unless explicitly requested by the study participant leading the resuscitation. Patient ROSC was achieved at the next pulse/rhythm check if the participant administered tissue plasminogen activator (tPA) as treatment for massive pulmonary embolism.


Data Collection

Presimulation surveys were used to assess demographics and baseline perceived code-related stress and confidence (low 1 to high 4) ([Supplementary File 2], available in the online version) followed by postsimulation surveys to assess stress and confidence experienced during the simulation ([Supplementary Files 3–4], available in the online version).

Video was reviewed to assess whether a checklist of guideline-based critical code steps was performed (e.g., placing a backboard under the patient, ensuring that high-quality chest compressions were administered at the correct rate/depth, ensuring EtCO2 was monitored, and ensuring breaths via bag-mask were administered at the correct rate)[29] ([Supplementary File 5], available in the online version). Data from videos were reviewed by one Harvard Medical School student and two STRATUS Simulation Center simulation fellows. Two training sessions with all three reviewers present were used to standardize data collection techniques on sample videos and ensure consistency and accuracy between reviewers. Given the number and length of videos, each was reviewed by a single reviewer. Reviewers could not be blinded, since the position of the wall-mounted camera in the simulation suite meant that the video clearly showed whether the study participant was using a pocket card or the mobile app. Video reviewers understood the study's purpose but had no role in app development or trial design.

Internal medicine residents from the same study cohort in both control and experimental arms were also invited to participate in a focus group to share first impressions on user interface and experience. After a brief overview of the app's functionality, they were given independent access to the app for up to 5 minutes and then asked to complete a private, anonymous feedback survey ([Supplementary File 6], available in the online version).


Outcome Measures

The study's primary outcome was return of spontaneous circulation (ROSC), achieved after correct administration of tPA. ROSC was chosen as the primary outcome, since it represents the ultimate marker of a successful resuscitation effort (saving a patient's life). Secondary outcomes included code leader stress and confidence, AHA ACLS guideline adherence, diagnosis, treatment, time to diagnosis and interventions, and errors made. We hypothesized that use of our guided app would lead to self-reported improvements in user code-related stress and anxiety in addition to improvements in code leader clinical performance, with a null hypothesis that subjective and objective performance metrics would be equivalent in both groups.


Statistical Analysis

Binary study outcome measurements were assessed using a chi-square or Fisher's exact test in R software v4.0.5 (significance level: 0.05). For continuous outcome measures (e.g., time to ROSC), a Shapiro–Wilk test and visual inspection by histogram were performed to assess normality. For continuous, normally distributed outcomes (time to diagnosis, time to tPA, time to ROSC), mean values for the experimental and control groups were compared using a two-sided t-test at a significance level of 0.05. For continuous, non-normal data (time to CPR correction, number of errors made during simulation), a nonparametric Mann–Whitney–Wilcoxon test was chosen. Values are reported as mean ± standard deviation. The co-first authors of this manuscript had full access to all the data in the study and take responsibility for its integrity.



Results

Demographics

Forty-six of the 300 invited residents (15.3%) agreed to participate in the study. Among participants, 21 (45.7%) were PGY-1s, 20 (43.5%) were PGY-2s, and 5 (10.9%) were PGY-3s. The following specialties were represented: anesthesia, 3 (6.5%); emergency medicine, 4 (8.7%); internal medicine, 36 (78.3%); and general surgery, 3 (6.5%). Twenty-two participants (47.8%) were randomized to the ACLS pocket card group and 24 (52.2%) to the guided app group ([Table 1]). Three (13.6%) pocket card participants reported previous code leadership experience compared with 6 (25%) guided app users (p = 0.48). Among these nine participants with previous code leadership experience, 8 (88.9%) preferred existing mobile app ACLS decision aids, while 1 (11.1%) preferred the AHA ACLS pocket cards. All participants were able to successfully initiate and navigate the app during presimulation orientation.

Table 1

Participant characteristics and outcomes

| Demographics | Control group | App group | p-Value |
|---|---|---|---|
| Total participants | 22 | 24 | – |
| PGY-1 | 12 (52%) | 9 (38%) | 0.38 |
| PGY-2 | 8 (39%) | 12 (50%) | 0.65 |
| PGY-3 | 2 (9%) | 3 (12%) | 1 |
| Internal medicine | 17 | 19 | 1 |
| Anesthesia | 2 | 1 | 0.6 |
| Emergency medicine | 2 | 2 | 1 |
| Surgery | 1 | 2 | 1 |
| Previous code experience | 3 (13.6%) | 6 (25%) | 0.48 |
| Precode stress | 3.6 ± 0.12 | 3.6 ± 0.12 | 0.685 |
| Precode confidence | 1.5 ± 0.15 | 1.5 ± 0.15 | 0.928 |

| Code performance | Control group | App group | p-Value |
|---|---|---|---|
| Backboard placement, N (%) | 6 (27.3%) | 23 (95.8%) | <0.0001 |
| CPR rate correction, N (%) | 16 (72.7%) | 16 (66.7%) | 0.655 |
| Time to CPR correction, sec (mean ± SD) | 119 ± 94 | 135 ± 168 | 0.94 |
| CPR depth correction, N (%) | 11 (57.9%) | 9 (45.0%) | 0.527 |
| EtCO2 use, N (%) | 6 (27.3%) | 14 (58.3%) | 0.033 |
| Bag-mask rate correction, N (%) | 11 (50.0%) | 15 (62.5%) | 0.393 |
| Defibrillator pad placement, N (%) | 19 (86.4%) | 23 (95.8%) | 0.336 |
| Correct diagnosis, N (%) | 7 (31.8%) | 14 (58.3%) | 0.071 |
| Time to correct diagnosis, sec (mean ± SD) | 584 ± 165 | 498 ± 176 | 0.605 |
| Correct intervention (tPA), N (%) | 5 (22.7%) | 13 (54.2%) | 0.029 |
| Time to tPA administration, sec (mean ± SD) | 664 ± 138 | 603 ± 132 | 0.369 |
| ROSC, N (%) | 4 (18.2%) | 12 (50.0%) | 0.024 |
| Time to ROSC, sec (mean ± SD) | 689.5 ± 58.2 | 705.9 ± 113.6 | 0.715 |
| Verbalized H&Ts, N (%) | 16 (72.7%) | 18 (75.0%) | 0.861 |
| Errors per person (mean ± SE) | 0.95 ± 0.30 | 0.38 ± 0.13 | 0.175 |

| Subjective experience (low 1 to high 4) | Control group | App group | p-Value |
|---|---|---|---|
| Stress reduction (mean ± SE) | 0.56 ± 0.19 | 0.83 ± 0.12 | 0.224 |
| Confidence increase (mean ± SE) | 0.30 ± 0.19 | 1.00 ± 0.14 | 0.005 |

Abbreviations: CPR, cardiopulmonary resuscitation; EtCO2, end-tidal CO2; ROSC, return of spontaneous circulation; SD, standard deviation; SE, standard error.



Objective Simulated Outcomes

ROSC, the study's primary outcome, was achieved by 4 (18.2%) pocket card users compared with 12 (50%) app users (p = 0.024), with an effect size of 0.67. ROSC was driven by correct thrombolytic treatment, administered by 5 (22.7%) pocket card users compared with 13 (54.2%) app users (p = 0.029). A mismatch between correct treatment and ROSC was observed for two participants whose simulations ended before the next pulse check, at which ROSC would have been demonstrated ([Table 1]).

Statistically significant improvements in guideline adherence (a secondary outcome) were demonstrated in the app group through greater backboard use (96 vs. 27%; p < 0.001) and EtCO2 monitoring (58 vs. 27%; p = 0.033). Other guideline adherence outcomes did not differ significantly, including correction of inappropriate CPR technique, time to CPR correction, correction of inappropriate bag-mask technique, time to bag-mask technique correction, and correct defibrillator pad placement. Times to diagnosis, intervention, and ROSC were also similar between groups, as detailed in [Table 1].

There was also a trend toward app users making fewer errors per person (0.38 ± 0.13) than pocket card users (0.95 ± 0.30) (p = 0.175). In total, 9 errors were made in the app group compared with 17 in the pocket card group. The most common errors were administration of a heparin drip rather than tPA (n = 8), administration of atropine (n = 5), activation of the cardiac catheterization laboratory (n = 3), and administration of amiodarone (n = 3).


Physician Stress and Confidence

Baseline levels of stress were similar between pocket card (mean = 3.6 ± 0.12) and app users (3.6 ± 0.12), indicating a relatively high baseline perceived stress surrounding in-hospital cardiac arrest management (4 = highest stress). Baseline confidence between pocket card (1.5 ± 0.15) and app users (1.5 ± 0.16) was similarly low ([Table 1]).

Postsurvey data revealed that use of both the AHA ACLS pocket card and the iOS app reduced code stress and increased confidence. Among pocket card users, there was an average decrease in stress of 0.56 points (±0.19), while app users displayed a larger mean stress reduction of 0.83 points (±0.12), although this difference was not significant (p = 0.22). In contrast, app users displayed a greater increase in confidence (1.0 ± 0.14) than pocket card users (0.3 ± 0.19) (p = 0.005) ([Table 1]).


User Feedback

A separate focus group with 22 internal medicine residents (PGY1–3) was conducted to elicit feedback on user interface and experience. Out of these 22 residents, 100% endorsed that they would use this app in real-world clinical practice. Eighteen of the residents (82%) preferred the guided version of the app compared with the AHA pocket cards. All 8 PGY-1s (100%) preferred the guided app compared with 83% of PGY-2s and 63% of PGY-3s.



Discussion

This randomized controlled trial comparing a novel, guided iOS ACLS app to AHA pocket cards demonstrated improvements in both user experience and simulated clinical outcomes across several important metrics. From a clinical performance perspective, critical cardiac arrest care steps, including backboard placement, use of EtCO2 monitoring, correct tPA administration, and ROSC, were all statistically significantly improved in the guided app group compared with the pocket card group.

From a user perspective, improvements in confidence were also displayed among app users compared with pocket card users. At baseline, residents at the PGY-1–3 levels exhibited high anxiety and low confidence around their ability to lead resuscitation after in-hospital cardiac arrest. Comparisons of pre- and postsimulation survey scores showed a positive trend in user code-related stress reduction after using the guided app and a statistically significant improvement in confidence, which has the potential to enhance clear communication with the team and elevate performance.

Focus group data and feedback from postsimulation surveys also indicated that participants overwhelmingly preferred the guided algorithm to the AHA pocket cards. Positive feedback from postsimulation surveys centered on the app's “H&Ts” section and its easily accessible timers and alerts for CPR cycles and medication administration. Many users commented on how these functions decreased their cognitive load during the simulation and enabled them to focus primarily on diagnosis and management. User preference for the guided app may also correlate with PGY level or prior code exposure, since focus group participants earlier in training exhibited a higher preference for the guided app than more experienced participants later in training. Overall, focus group data showed that 82% of residents across all PGY years preferred the guided app to the existing AHA pocket cards, and 100% reported that they would use the app in clinical practice.

Interestingly, CPR quality, including chest compression rate and depth, did not differ significantly between the two groups. We suspect the overall similarity in outcomes was primarily driven by limitations of the simulation testing environment, the learning curve associated with using a new app in a cognitively demanding scenario, and possible in-app notification fatigue.

In debrief sessions following the simulation, many participants stated that they assumed the simulation team's poor CPR technique was purposeful, intended to mitigate the actors' physical exhaustion over multiple simulations in 1 day, despite clear presimulation instructions to treat everything occurring in the simulated environment as real life ([Supplementary File 1], available in the online version). As a result, many participants explained that they did not verbally correct chest compression rate or depth despite recognizing incorrect form during the resuscitation. Sterz et al discussed similar limitations of mannequin-based simulation training after discovering that participants using mannequins scored lower on objective structured clinical examination evaluations than participants using simulated patients.[30] Given the challenge of maintaining a high level of realism with mannequin-based simulation, suspending disbelief through high-quality presimulation briefing[31] has been suggested as a way to improve training conditions and will inform follow-up studies.

iOS app users may also have displayed greater difficulty in correcting CPR technique compared with pocket card users (though nonsignificantly) due to the learning curve associated with navigating a novel ACLS management tool in real time at the bedside in a stressful simulated clinical environment. Implementing a new workflow change often requires a brief learning and adjustment period, which may affect performance and outcomes. We suspect that with more hands-on time with the app outside the cognitive stress of the cardiac arrest scenario, users would have felt more comfortable using the app and could therefore have spent more time engaging the team and assessing proper chest compression technique. Accordingly, we recommend that all future users familiarize themselves with the app's functionality and limitations prior to clinical use.

We also suspect that “notification fatigue” may have presented a challenge to app users. Clinicians have been shown to become less likely to pay attention to alerts within a CDS platform, particularly when those alerts are repeated.[32] We therefore theorize that repeated pop-up notifications and haptic vibrations reminding users of correct CPR technique could have contributed to notification fatigue and, in turn, to the lack of improvement in CPR-related outcomes. As a result, careful design that limits notifications should be emphasized in future versions of this guided app and in similar CDS tools.


Limitations

The study was limited by a small number of participants at a single institution in a simulated clinical scenario outside of real-world clinical practice. Our small sample size also limited our ability to perform statistically meaningful subgroup analyses of performance by PGY-year. Study participants were resident physicians with varying levels of PGY training and prior code experience, making our findings less applicable to attending physicians or fellows with greater clinical expertise. Actor technique may have varied subtly from session to session, and differences among video reviewers and their scoring may have added variability to the results despite best attempts to control for discrepancies. Blinding reviewers was not possible due to the position of the wall-mounted cameras in the simulation room, which could have introduced some reviewer bias. As mentioned above, many residents also stated that they did not fully treat the simulation scenario as real life. Due to funding constraints, we were only able to develop an iOS version of the guided app, but we hope to build an Android version eventually.

Future simulation studies would benefit from more thorough presimulation briefing and greater reviewer standardization. Future clinical studies are planned for more comprehensive clinical validation during real-world, in-hospital cardiac arrest resuscitations and will utilize a larger participant sample size to increase the generalizability of results. Lastly, future app iterations will benefit from a more multidisciplinary design approach, leveraging greater expertise and feedback from a diverse set of stakeholders in clinical practice, medical education, software development, and patient advocacy.


Conclusion

A guided ACLS app may help to address the need for greater standardization for in-hospital cardiac arrest management and may help improve trainee experience. The guided iOS app showed preliminary improvements in subjective and objective outcomes in simulated cardiac arrest care, including higher rates of ROSC, correct treatment, adherence to AHA ACLS guidelines, and user confidence. Further design iterations and study are needed to optimize high-quality CPR adherence. Real-world validation studies are needed to confirm the app's efficacy in clinical practice.


Clinical Relevance Statement

This guided mobile application represents a first-of-its-kind, simulation-validated ACLS decision aid that combines the 2020 AHA ACLS algorithms with organizational timers, reminders, haptic feedback, and an interactive differential diagnosis panel. It is designed to decrease code leader cognitive load, highlight appropriate diagnostic and therapeutic interventions, and improve AHA ACLS guideline adherence, with the potential to increase the chance of achieving ROSC in real-world clinical use. In a randomized controlled trial during simulated cardiac arrest, the app was directly compared with the existing AHA ACLS pocket cards and demonstrated improved subjective and objective code leader performance. Resident physicians preferred the app over the AHA pocket cards, and all participants said that they would use it clinically.


Multiple-Choice Questions

  1. Clinical decision support tools have been shown to be associated with which of the following?

    a. Improved guideline adherence

    b. Increased burnout rate

    c. Increased errors

    d. Increased cognitive load

    Correct Answer: The correct answer is option a. As detailed in the Background and Significance section, clinical decision support tools have been shown to improve guideline adherence.

  2. Which of the following outcomes were improved with use of the guided ACLS mobile app?

    a. User confidence

    b. User confidence and backboard use/EtCO2 monitoring

    c. User confidence, backboard use/EtCO2 monitoring, and correct therapeutic intervention

    d. User confidence, backboard use/EtCO2 monitoring, correct therapeutic intervention, and ROSC rate

    Correct Answer: The correct answer is option d. As detailed in the Results section, user confidence, backboard use/EtCO2 monitoring, correct therapeutic intervention, and ROSC rate all showed improvement with use of the guided app.



Conflict of Interest

J.C. and A.C. are co-creators of the AHA ACLS mobile app (iOS and Android) in collaboration with the American Heart Association.

Acknowledgments

Our study team would like to thank the STRATUS Center for Medical Simulation staff, our nursing simulation actors (Beth Waters, Megan Howland, Lia Carroll, Samantha Allen, Jill Santini, Beth White, and Gary Bednarz), the Brigham iHub (Chen Cao), Fidencio Saldaña, and Paula McCree at the Massachusetts General Hospital Healthcare Transformation Lab for their kind support. The funders played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript. We thank them for their generous support.

Data Availability

The experimental data and the simulation results that support the findings of this study are available upon request.


Code Availability

Our Swift UIKit code is not publicly available at the time of publication due to institutional intellectual property policy.


Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by Brigham and Women's Hospital's Institutional Review Board.


Authors' Contribution

Co-principal investigators: J.C., N.C.


* Co-first authors: Michael J. Senter-Zapata, Dylan V. Neel.



Address for correspondence

Michael Senter-Zapata, MD
Brigham and Women's Hospital
75 Francis Street, Boston, MA 02115
United States   

Publication History

Received: 25 April 2024

Accepted: 18 July 2024

Article published online:
02 October 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

