DOI: 10.1055/s-0044-1788979
An Advanced Cardiac Life Support Application Improves Performance during Simulated Cardiac Arrest
Funding This study was funded by the Massachusetts General Hospital Healthcare Transformation Lab, Brigham Education Institute (BEI), the Brigham and Women's Internal Medicine Residency Program Office, and the Mass General Brigham Office of Graduate Medical Education Center of Expertise (COE) in MedEd.
Abstract
Objectives Variability in cardiopulmonary arrest training and management leads to inconsistent outcomes during in-hospital cardiac arrest. Existing clinical decision aids, such as American Heart Association (AHA) advanced cardiovascular life support (ACLS) pocket cards and third-party mobile apps, often lack comprehensive management guidance. We developed a novel, guided ACLS mobile app and evaluated user performance during simulated cardiac arrest according to the 2020 AHA ACLS guidelines via randomized controlled trial.
Methods Forty-six resident physicians were randomized to lead a simulated code team using the AHA pocket cards (N = 22) or the guided app (N = 24). The primary outcome was successful return of spontaneous circulation (ROSC). Secondary outcomes included code leader stress and confidence, AHA ACLS guideline adherence, and errors. A focus group of 22 residents provided feedback. Statistical analysis included two-sided t-tests and Fisher's exact tests.
Results Compared with controls, app users showed significantly higher rates of ROSC (50 vs. 18%; p = 0.024), correct thrombolytic administration (54 vs. 23%; p = 0.029), backboard use (96 vs. 27%; p < 0.001), and end-tidal CO2 monitoring (58 vs. 27%; p = 0.033), as well as a greater increase in confidence from baseline (1.0 vs. 0.3 points; p = 0.005). A focus group of 22 residents indicated unanimous willingness to use the app, with 82% preferring it over the AHA pocket cards.
Conclusion Our guided ACLS app shows potential to improve user confidence and adherence to the AHA ACLS guidelines and may help to standardize in-hospital cardiac arrest management. Further validation studies are essential to confirm its efficacy in clinical practice.
Background and Significance
The position of “code” (cardiopulmonary arrest) team leader during cardiac arrest events in the hospital is one of the most stressful responsibilities that resident physicians face during training[1] [2] and has been exacerbated by the coronavirus disease 2019 (COVID-19) pandemic.[3] Residents receive advanced cardiovascular life support (ACLS) training before starting residency and are provided with the American Heart Association (AHA) pocket cards containing algorithms for common code scenarios, which represent the current gold standard for ACLS clinical decision support (CDS). While the AHA ACLS pocket cards represent an important advancement in ACLS care over the last several decades, advances in digital CDS technology have made static cards comparatively limited. Furthermore, months to years can elapse between ACLS training and a resident's first code event.[4] Although many medical schools and residency programs in the United States employ simulation-based ACLS curricula,[5] their implementation is variable, and not all domestic and international medical schools have access to state-of-the-art simulation training facilities, which can cost millions of dollars to build and hundreds of thousands of dollars per year to maintain.[6]
Inconsistent training opportunities lead to significant anxiety among residents, contribute to resident burnout, and lead to worse patient outcomes.[4] [7] Digital CDS tools have emerged as a potential solution that have shown promising results when studied in both simulated and real-world high-acuity clinical situations. CDS use has been associated with higher simulation performance scores,[8] improved adherence to guidelines,[9] [10] [11] [12] higher number of correct ACLS interventions and fewer mistakes,[13] [14] reduced time to drug delivery,[12] [15] [16] subjective decreased cognitive load,[17] improved user experience,[18] and increased likelihood of recommendation to colleagues.[19] However, some studies have demonstrated worse outcomes in simulated ACLS care when using bedside CDS tools[20] due to a delay in initiating chest compressions, highlighting the need for careful design.
Although several ACLS iOS mobile CDS applications currently exist, most serve as single-screen organizational tools that either display event timers or a digitized version of the AHA ACLS pocket card. They often lack reminder functionality (e.g., on high-quality cardiopulmonary resuscitation [CPR] technique), internal timers, direct management guidance according to the AHA guidelines, or the ability to log key code events. We believed that the ideal ACLS CDS bedside aid would combine organizational and reminder tools with a step-by-step, checklist-based approach to guideline-based ACLS instruction, similar to the preflight and preoperative checklists used in aviation and surgery.[21]
Therefore, our team designed an innovative, standalone iOS mobile app using the current AHA ACLS algorithms to help code leaders run effective, guideline-driven hospital codes. The app breaks down existing AHA ACLS algorithms into multiple, simple screens with yes/no decision buttons and checklists that guide users through appropriate code management at each branch point in the algorithm. This combination of organizational functionality and point-of-care clinical guidance has the potential to increase adherence to guideline-based practice and to combat code leader anxiety, cognitive load, and decision fatigue.
Objective
This study aimed to assess the efficacy of this ACLS mobile app in improving (1) subjective code leader experience and (2) objective performance according to AHA ACLS guidelines during simulated cardiac arrest.
Methods
App Development and Features
Using the Swift programming language, we developed a mobile iOS application to guide users through appropriate steps in cardiac arrest management according to a licensed copy of the 2020 AHA cardiac arrest algorithm. The app was designed for trainees in real-world clinical practice to help solidify the algorithmic steps of appropriate ACLS management by easing code leader cognitive load and allowing greater focus on diagnostics and therapeutics rather than on logistical organization and resource management. However, the app can help clinicians of all experience levels adhere to best-practice ACLS guidelines.
Our development team consisted of medical students, internal medicine resident physicians, attending physicians board certified in internal medicine and emergency medicine, a software engineer, and a UI/UX designer.
The app is designed to be opened by the code team leader upon arrival to the resuscitation. Once the user presses the start button, the app guides the user through appropriate cardiac arrest management steps using simple checklists and reminders to perform critical steps over multiple screens. The app walks users through the AHA cardiac arrest algorithm according to user inputs, such as the presence of a shockable versus nonshockable rhythm.
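The app's production source is not public (see Code Availability). Purely as an illustration of how such screen-by-screen branching can be modeled, the sketch below encodes yes/no decision points as an enum-driven state machine in Swift; all type and case names are hypothetical, and the states are heavily simplified relative to the full 2020 AHA algorithm.

```swift
import Foundation

// Hypothetical, simplified model of screen-by-screen ACLS branching.
// Each state corresponds to one simple screen; user taps drive transitions.
enum CodeState {
    case confirmArrest             // "Start" pressed: confirm pulselessness, call for help
    case rhythmCheck               // "Is the rhythm shockable?" yes/no screen
    case shock                     // VF/pVT branch: defibrillate, then resume CPR
    case cprCycle(shockable: Bool) // 2-minute CPR block with drug reminders
    case rosc                      // return of spontaneous circulation: post-arrest care
}

func nextState(after state: CodeState, userSaidYes: Bool) -> CodeState {
    switch state {
    case .confirmArrest:
        return .rhythmCheck
    case .rhythmCheck:
        // Shockable (VF/pVT) -> defibrillation; nonshockable (PEA/asystole) -> CPR branch.
        return userSaidYes ? .shock : .cprCycle(shockable: false)
    case .shock:
        return .cprCycle(shockable: true)
    case .cprCycle:
        // After each 2-minute cycle, the user reports whether a pulse has returned.
        return userSaidYes ? .rosc : .rhythmCheck
    case .rosc:
        return .rosc
    }
}
```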
In line with an ideal cardiac arrest app as outlined by Muller et al,[22] our app offers timers that track total code duration, CPR cycles, and epinephrine and amiodarone dosing. On-screen notifications coupled with vibrational (haptic) alerts after each pulse/rhythm check remind users to deliver high-quality CPR (i.e., compression rate of 100–120 per minute, compression depth 2.5″, and full chest recoil).[23] The app causes the phone to vibrate in its user's hand at a rate of 100 vibrations per minute so that code leaders can track compression rate and relay immediate CPR feedback to the code team. Other haptic notifications alert users when pulse/rhythm checks and medication doses are due according to AHA guidelines and remind them to follow end-tidal CO2 (EtCO2) to assess compression quality and monitor for return of spontaneous circulation (ROSC). The app also features an “H&Ts” section (a list of common causes of cardiac arrest beginning with the letters “H” and “T”) with diagnostic and treatment information, where users can rule in and rule out common diagnoses and maintain an evolving differential diagnosis over multiple screens ([Fig. 1]). Within the H&Ts section, specific diagnostic and therapeutic recommendations are highlighted for each condition listed. The app also has a “Roles” section where trainees can ensure that all code roles are filled by appropriate staff. Lastly, all code data, including the number of medication doses/shocks administered, duration of the code, total CPR cycles, and timing of interventions, are stored locally on the user's iOS device and can be exported for documentation or research purposes. The app currently has no communication with any electronic medical record and does not utilize or store any identifiable patient data.
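As an illustrative sketch only (class and method names are hypothetical; the intervals reflect the AHA targets cited above, with epinephrine assumed every 3 to 5 minutes per the AHA algorithm), the 100-per-minute haptic cue and a dose timer can be approximated with standard UIKit and Foundation APIs:

```swift
import UIKit

// Hypothetical sketch of the app's CPR metronome and dose timer, not the published source.
final class CPRTimers {
    private let haptics = UIImpactFeedbackGenerator(style: .medium)
    private var metronome: Timer?
    private var lastEpinephrine: Date?

    // 100 beats/min = one haptic tick every 0.6 s, the low end of the 100-120/min AHA target.
    func startCompressionMetronome() {
        haptics.prepare()
        metronome = Timer.scheduledTimer(withTimeInterval: 60.0 / 100.0, repeats: true) { [weak self] _ in
            self?.haptics.impactOccurred()
        }
    }

    func stopCompressionMetronome() {
        metronome?.invalidate()
        metronome = nil
    }

    // Record a dose; the app can then surface an alert once the next dose window opens.
    func recordEpinephrine() { lastEpinephrine = Date() }

    // Assumed dosing window: epinephrine every 3-5 minutes per the AHA algorithm.
    var epinephrineDue: Bool {
        guard let last = lastEpinephrine else { return true }
        return Date().timeIntervalSince(last) >= 3 * 60
    }
}
```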


Study Sample and Setting
Residents at the Brigham and Women's Hospital were invited to participate in a simulated cardiac arrest resuscitation at the STRATUS Center for Medical Simulation, located on campus. The sessions were designed in conjunction with the Emergency Medicine Residency program leadership and the STRATUS Simulation Team to serve primarily as an educational opportunity for residents to practice code leadership skills outside of their regular residency curriculum and to receive immediate feedback from a board-certified faculty member. Secondarily, the sessions served as testing environments to assess the utility of the guided ACLS mobile app in this randomized controlled trial. Because the sessions were primarily educational, emergency medicine program leadership selected massive pulmonary embolism as the underlying etiology of cardiac arrest: its diagnostic complexity often makes it a diagnosis of exclusion in clinical practice, encouraging participants to rely on complex problem solving and data synthesis to reach an appropriate management plan. The case is outlined in [Supplementary File 1] (available in the online version) for ease of reproducibility.
Recruitment and Randomization of Study Participants
A total of 300 residents from various specialties, including internal medicine, emergency medicine, general surgery, and anesthesia (postgraduate year [PGY] 1–3), were invited via email between September 2022 and April 2023 to partake in the educational sessions and were offered $50 USD Amazon gift cards for study participation. Eligible participants were internal medicine, emergency medicine, general surgery, and anesthesia resident physicians at the Brigham and Women's Hospital at any stage of residency training who had received prior ACLS certification according to AHA standards and medical licensing regulations. Given the minimal risk of harm to subjects, the Institutional Review Board (IRB) waived the requirement for written documentation of informed consent under IRB protocol number 2021P002066. Instead, participants gave verbal consent to study participation at the time of their scheduled simulation session.
Participants were then randomized by the primary authors, using the rand function in Excel v16.70 (Microsoft Corporation, Redmond, WA), to use the app (experimental group) or the AHA ACLS pocket card (control group) during the simulated code in a 1:1 allocation ratio ([Fig. 2]). Participants' characteristics (e.g., residency program and PGY year) were balanced between groups.
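For illustration only (the study used Excel's rand function, not this code), the same simple, non-blocked 1:1 allocation amounts to an independent random draw per participant, which is why the final arms can be nearly but not exactly equal (here, 22 vs. 24):

```swift
import Foundation

// Illustrative only: simple (non-blocked) 1:1 allocation analogous to a RAND() column in Excel.
enum Arm: String {
    case pocketCard = "AHA pocket card"
    case guidedApp = "guided app"
}

func allocate(participantIDs: [String]) -> [String: Arm] {
    var assignment: [String: Arm] = [:]
    for id in participantIDs {
        // Double.random(in: 0..<1) plays the role of the per-row RAND() value.
        assignment[id] = Double.random(in: 0..<1) < 0.5 ? .pocketCard : .guidedApp
    }
    return assignment
}
```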


A power calculation was performed based on current literature showing baseline in-hospital cardiac arrest ROSC rates between 45 and 70%,[24] [25] [26] which improve to 80 to 85% after simulation-based or educational interventions.[27] [28] Using these data, a target sample size of 36 participants was calculated to detect an effect size of 0.88 at an α-level of 0.05 and 80% power.
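The stated effect size is consistent with Cohen's h computed from the outer literature values above (45% baseline ROSC improving to 85%); this reconstruction is ours, not a calculation reported in the text:

```latex
h = \left| 2\arcsin\sqrt{p_2} - 2\arcsin\sqrt{p_1} \right|
  = 2\arcsin\sqrt{0.85} - 2\arcsin\sqrt{0.45}
  \approx 2.346 - 1.471 \approx 0.88
```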
Simulation Scenario
Following randomization, participants were given 3 minutes to familiarize themselves with either the iOS app or the pocket card. iOS app users were provided an iPhone 12 with the app preinstalled. Participants were all similarly briefed about the clinical scenario, various actor roles in the room, and clinical tools at their disposal. Both groups received clarification that the simulation team would be performing its duties true to life (as if resuscitating a real patient).
Once the scenario began, participants were given 15 minutes to function as code leader. The simulation room contained standard equipment, including a defibrillator and code cart with intravenous (IV) fluids, calcium, sodium bicarbonate, magnesium, epinephrine, amiodarone, and atropine. Emergency airway supplies were not provided given that a full intubation sequence was beyond the scope of the trial and not an endpoint.
Three simulation actors were present during sessions: one performing chest compressions, one bag-masking the mannequin, and one managing the defibrillator and administering IV medications. The actor performing chest compressions purposefully delivered poor-quality CPR at a low compression rate (e.g., 75 compressions/min) and an inadequate depth of 1 inch, while the actor performing bag-mask oxygenation purposefully administered breaths too frequently (one breath every second). Actors rehearsed these improper techniques as a team before simulation cases to maintain consistency. Backboard placement, bag-mask initiation, defibrillator pad placement, and attachment of EtCO2 monitoring, all interventions indicated by the 2020 AHA ACLS guidelines, were not performed unless explicitly requested by the study participant leading the resuscitation. Patient ROSC was achieved at the next pulse/rhythm check if the participant had administered tissue plasminogen activator (tPA) as treatment for massive pulmonary embolism.
Data Collection
Presimulation surveys were used to assess demographics and baseline perceived code-related stress and confidence (low 1 to high 4) ([Supplementary File 2], available in the online version) followed by postsimulation surveys to assess stress and confidence experienced during the simulation ([Supplementary Files 3–4], available in the online version).
Video recordings were reviewed to assess whether each item on a checklist of guideline-based critical code steps was performed (e.g., placing a backboard under the patient, ensuring that high-quality chest compressions were administered at the correct rate/depth, ensuring EtCO2 was monitored, and ensuring breaths via bag-mask were administered at the correct rate)[29] ([Supplementary File 5], available in the online version). Videos were reviewed by one Harvard Medical School student and two STRATUS Center for Medical Simulation fellows. Two training sessions with all three reviewers present were used to standardize data collection on sample videos and ensure consistency and accuracy between reviewers. Given the number and length of videos, each was reviewed by a single reviewer. Reviewers could not be blinded, since the position of the wall-mounted camera in the simulation suite meant the video clearly showed whether the study participant was using a pocket card or the mobile app. Video reviewers understood the study's purpose but had no role in app development or trial design.
Internal medicine residents from the same study cohort in both control and experimental arms were also invited to participate in a focus group to share first impressions on user interface and experience. After a brief overview of the app's functionality, they were given independent access to the app for up to 5 minutes and then asked to complete a private, anonymous feedback survey ([Supplementary File 6], available in the online version).
Outcome Measures
The study's primary outcome was ROSC, achieved after correct administration of tPA. ROSC was chosen as the primary outcome because it represents the ultimate marker of a successful resuscitation effort (saving a patient's life). Secondary outcomes included code leader stress and confidence, AHA ACLS guideline adherence, diagnosis, treatment, time to diagnosis and interventions, and errors made. We hypothesized that use of our guided app would lead to self-reported improvements in user code-related stress and anxiety in addition to improvements in code leader clinical performance, with a null hypothesis that subjective and objective performance metrics would be equivalent between groups.
Statistical Analysis
Binary study outcome measurements were assessed using a chi-square or Fisher's exact test in R software v.4.0.5 (significance level: 0.05). For continuous outcome measures (e.g., time to ROSC), a Shapiro–Wilk test and visual inspection by histogram were performed to assess normality. For continuous, normally distributed outcomes (time to diagnosis, time to tPA, time to ROSC), mean values for the experimental and control groups were compared using a two-sided t-test at a significance level of 0.05. For continuous, non-normal data (time to CPR, number of errors made during simulation), a nonparametric Mann–Whitney–Wilcoxon test was chosen. Values are reported as mean ± standard deviation. The co-first authors of this manuscript had full access to all the data in the study and take responsibility for its integrity.
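For reference, Fisher's exact test evaluates a 2×2 table (study arm × binary outcome) with cell counts a, b, c, d and total n = a + b + c + d by its hypergeometric probability, summing over all tables at least as extreme as the one observed for the two-sided p-value:

```latex
p(a) = \frac{\binom{a+b}{a}\,\binom{c+d}{c}}{\binom{n}{a+c}},
\qquad
p_{\text{two-sided}} = \sum_{T:\; p(T) \le p(a_{\text{obs}})} p(T)
```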
Results
Demographics
Forty-six of the 300 invited residents (15.3%) agreed to participate in the study. Among participants, 21 (45.7%) were PGY-1s, 20 (43.5%) were PGY-2s, and 5 (10.9%) were PGY-3s. The following specialties were represented: anesthesia, 3 (6.5%); emergency medicine, 4 (8.7%); internal medicine, 36 (78.3%); and general surgery, 3 (6.5%). Twenty-two participants (47.8%) were randomized to the ACLS pocket card group compared with 24 (52.2%) to the guided app group ([Table 1]). Three (13.6%) pocket card participants reported previous code leadership experience compared with 6 (25%) guided app users (p = 0.48). Among these nine participants with previous code leadership experience, 8 (88.9%) preferred existing mobile app ACLS decision aids and 1 (11.1%) preferred the AHA ACLS pocket cards. All participants were able to successfully initiate and navigate the app during presimulation orientation.
Abbreviations: CPR, cardiopulmonary resuscitation; EtCO2, end-tidal CO2; ROSC, return of spontaneous circulation; SD, standard deviation; SE, standard error.
Objective Simulated Outcomes
ROSC, the study's primary outcome, was achieved by 4 (18.2%) pocket card users compared with 12 (50%) app users (p = 0.024), with an effect size of 0.67. Correct thrombolytic treatment was administered by 5 (22.7%) pocket card users compared with 13 (54.2%) app users (p = 0.029). A mismatch between correct treatment and ROSC was observed for two participants whose simulations ended before the next pulse check, at which ROSC would have been achieved ([Table 1]).
Statistically significant improvements in guideline adherence (secondary outcome) were demonstrated in the app group by greater backboard use (96 vs. 27%; p < 0.001) and EtCO2 monitoring (58 vs. 27%; p = 0.033). Other outcomes assessing guideline adherence were not statistically significant, including correction of inappropriate CPR technique, time to CPR correction, correction of inappropriate bag-mask technique, time to bag-mask technique correction, and correct defibrillator pad placement. Similarly, time to diagnosis, interventions, and ROSC was similar between both groups as detailed in [Table 1].
There was also a trend toward fewer errors per person among app users (0.38 ± 0.13) compared with pocket card users (0.95 ± 0.30; p = 0.175). In total, 9 errors were made in the app group compared with 17 in the pocket card group. The most common errors were administration of a heparin drip rather than tPA (n = 8), administration of atropine (n = 5), activation of the cardiac catheterization laboratory (n = 3), and administration of amiodarone (n = 3).
Physician Stress and Confidence
Baseline levels of stress were similar between pocket card (mean = 3.6 ± 0.12) and app users (3.6 ± 0.12), indicating a relatively high baseline perceived stress surrounding in-hospital cardiac arrest management (4 = highest stress). Baseline confidence between pocket card (1.5 ± 0.15) and app users (1.5 ± 0.16) was similarly low ([Table 1]).
Postsurvey data revealed that use of both the AHA ACLS pocket card and the iOS app reduced code stress and increased confidence. Pocket card users reported an average stress decrease of 0.56 points (±0.19), while app users reported a larger mean reduction of 0.83 points (±0.12), although this difference was not significant (p = 0.22). In contrast, app users displayed a greater increase in confidence (1.0 ± 0.14) than pocket card users (0.3 ± 0.19) (p = 0.005) ([Table 1]).
User Feedback
A separate focus group with 22 internal medicine residents (PGY1–3) was conducted to elicit feedback on user interface and experience. Out of these 22 residents, 100% endorsed that they would use this app in real-world clinical practice. Eighteen of the residents (82%) preferred the guided version of the app compared with the AHA pocket cards. All 8 PGY-1s (100%) preferred the guided app compared with 83% of PGY-2s and 63% of PGY-3s.
Discussion
This randomized controlled trial comparing a novel, guided iOS ACLS app to AHA pocket cards demonstrated improvements in both user experience and simulated clinical outcomes across several important metrics. From a clinical performance perspective, critical cardiac arrest care steps, including backboard placement, use of EtCO2 monitoring, correct tPA administration, and ROSC, were all statistically significantly improved in the guided app group compared with the pocket card group.
From a user perspective, improvements in confidence were also displayed among app users compared with pocket card users. At baseline, residents at the PGY-1–3 levels exhibited high anxiety and low confidence around their ability to lead resuscitation after in-hospital cardiac arrest. Comparisons of pre- and postsimulation survey scores showed a positive trend in user code-related stress reduction after using the guided app and a statistically significant improvement in confidence, which has the potential to enhance clear communication with the team and elevate performance.
Focus group data and feedback from postsimulation surveys also indicated that participants overwhelmingly preferred the guided algorithm to the AHA pocket cards. Positive feedback from postsimulation surveys centered on the app's “H&Ts” section and its easily accessible timers and alerts for CPR cycles and medication administration. Many users commented that these functions decreased their cognitive load during the simulation and enabled them to focus primarily on diagnosis and management. User preference for the guided app may also correlate with PGY level or prior code exposure, since focus group participants earlier in training exhibited a higher preference for the guided app than more experienced participants later in training. Overall, focus group data showed that 82% of residents across all PGY years preferred the guided app to the existing AHA pocket cards, and 100% reported that they would use the app in clinical practice.
Interestingly, CPR quality, including chest compression rate and depth, did not differ significantly between the two groups. We suspect the overall similarity in outcomes was primarily driven by limitations of the simulation testing environment, the learning curve associated with using a new app in a cognitively demanding scenario, and possible in-app notification fatigue.
In debrief sessions following the simulation, many participants stated that they assumed the simulation team's poor CPR technique was purposeful, intended to mitigate the actors' physical exhaustion over multiple simulations in one day, despite clear instructions before the scenario to treat everything occurring in the simulated environment as real life ([Supplementary File 1], available in the online version). As a result, many participants explained that they did not verbally correct chest compression rate or depth despite recognizing incorrect form during the resuscitation. Sterz et al discussed similar limitations of mannequin-based simulation training after discovering that participants using mannequins scored lower on objective structured clinical examination evaluations than participants using simulated patients.[30] Given the challenge of maintaining a high level of realism with mannequin-based simulation, suspending disbelief through high-quality presimulation briefing[31] has been suggested as a way to improve training conditions and follow-up studies.
iOS app users may also have displayed greater (though nonsignificant) difficulty in correcting CPR technique compared with pocket card users due to the learning curve associated with navigating a novel ACLS management tool in real time at the bedside in a stressful simulated clinical environment. Implementing a new workflow change often requires a brief learning and adjustment period, which may affect performance and outcomes. We suspect that with more hands-on time with the app outside the cognitive stress of the cardiac arrest scenario, users would have felt more comfortable using the app and therefore spent more time engaging the team and assessing proper chest compression technique. In fact, we recommend that all future users familiarize themselves with the app's functionality and limitations prior to clinical use.
We also suspect that “notification fatigue” may have presented a challenge to app users. Clinicians have been shown to pay less attention to alerts within a CDS platform, particularly when those alerts are repeated.[32] We therefore theorize that repeated pop-up notifications and haptic vibrations reminding users of correct CPR technique could have contributed to notification fatigue and, in turn, to the lack of improvement in CPR-related outcomes. Careful design that limits notifications should therefore be emphasized in future versions of this guided app and in similar CDS tools.
Limitations
The study was limited by its small number of participants at a single institution in a simulated clinical scenario outside real-world clinical practice. The small sample size also limited our ability to perform statistically meaningful subgroup analyses of performance by PGY year. Study participants were resident physicians with varying levels of PGY training and prior code experience, making our findings less applicable to attending physicians or fellows with greater clinical expertise. Actor technique may have varied subtly from session to session, and differences between video reviewers and their scoring may have contributed to variability in results despite best attempts to control for discrepancies. Blinding reviewers was not possible due to the position of the wall-mounted cameras in the simulation room, which could have introduced some reviewer bias. As mentioned above, many residents also stated that they did not fully treat the simulation scenario as real life. Due to funding constraints, we were only able to develop an iOS version of the guided app, but we hope to build an Android version eventually.
Future simulation studies would benefit from more thorough presimulation briefing and greater reviewer standardization. Future clinical studies are planned for more comprehensive clinical validation during real-world, in-hospital cardiac arrest resuscitations and will utilize a larger participant sample size to increase the generalizability of results. Lastly, future app iterations will benefit from a more multidisciplinary design approach, leveraging greater expertise and feedback from a diverse set of stakeholders in clinical practice, medical education, software development, and patient advocacy.
Conclusion
A guided ACLS app may help to address the need for greater standardization for in-hospital cardiac arrest management and may help improve trainee experience. The guided iOS app showed preliminary improvements in subjective and objective outcomes in simulated cardiac arrest care, including higher rates of ROSC, correct treatment, adherence to AHA ACLS guidelines, and user confidence. Further design iterations and study are needed to optimize high-quality CPR adherence. Real-world validation studies are needed to confirm the app's efficacy in clinical practice.
Clinical Relevance Statement
This guided mobile application represents a first-of-its-kind, simulation-tested ACLS decision aid that combines the 2020 AHA ACLS algorithms with organizational timers, reminders, haptic feedback, and an interactive differential diagnosis panel. It is designed to decrease code leader cognitive load, highlight appropriate diagnostic and therapeutic interventions, and improve AHA ACLS guideline adherence, with the potential to increase the chance of achieving ROSC in real-world clinical use. The app was directly compared with the existing AHA ACLS pocket cards in a randomized controlled trial during simulated cardiac arrest and demonstrated improved subjective user experience and objective code leader performance. Resident physicians preferred the app over the AHA pocket cards, and all participants said that they would use it clinically.
Multiple-Choice Questions
1. Clinical decision support tools have been shown to be associated with which of the following?

a. Improved guideline adherence
b. Increased burnout rate
c. Increased errors
d. Increased cognitive load

Correct Answer: The correct answer is option a. As detailed in the Background and Significance section, clinical decision support tools have been shown to improve guideline adherence.

2. Which of the following outcomes were improved with use of the guided ACLS mobile app?

a. User confidence
b. User confidence and backboard use/EtCO2 monitoring
c. User confidence, backboard use/EtCO2 monitoring, and correct therapeutic intervention
d. User confidence, backboard use/EtCO2 monitoring, correct therapeutic intervention, and ROSC rate

Correct Answer: The correct answer is option d. As detailed in the Results section, user confidence, backboard use/EtCO2 monitoring, correct therapeutic intervention, and ROSC rate all showed improvement with use of the guided app.
Conflict of Interest
J.C. and A.C. are co-creators of the AHA ACLS mobile app (iOS and Android) in collaboration with the American Heart Association.
Acknowledgments
Our study team would like to thank the STRATUS Center for Medical Simulation staff, our nursing simulation actors (Beth Waters, Megan Howland, Lia Carroll, Samantha Allen, Jill Santini, Beth White, and Gary Bednarz), the Brigham iHub (Chen Cao), Fidencio Saldaña, and Paula McCree at the Massachusetts General Hospital Healthcare Transformation Lab for their kind support. The funders played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript. We thank them for their generous support.
Data Availability
The experimental data and the simulation results that support the findings of this study are available upon request.
Code Availability
Our Swift UIKit code is not publicly available at the time of publication due to institutional intellectual property policy.
Protection of Human and Animal Subjects
The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by Brigham and Women's Hospital's Institutional Review Board.
Authors' Contribution
Co-principal investigators: J.C., N.C.
Co-first authors: Michael J. Senter-Zapata, Dylan V. Neel.
References
- 1 Dias RD, Scalabrini Neto A. Acute stress in residents during emergency care: a study of personal and situational factors. Stress 2017; 20 (03) 241-248
- 2 Merriel A, Ficquet J, Barnard K. et al. The effects of interactive training of healthcare providers on the management of life-threatening emergencies in hospital. Cochrane Database Syst Rev 2019; 9 (09) CD012177
- 3 Shanafelt T, Ripp J, Trockel M. Understanding and addressing sources of anxiety among health care professionals during the COVID-19 pandemic. JAMA 2020; 323 (21) 2133-2134
- 4 Stefan MS, Belforti RK, Langlois G, Rothberg MB. A simulation-based program to train medical residents to lead and perform advanced cardiovascular life support. Hosp Pract (1995) 2011; 39 (04) 63-69
- 5 Wayne DB, Butter J, Siddall VJ. et al. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med 2006; 21 (03) 251-256
- 6 Hippe DS, Umoren RA, McGee A, Bucher SL, Bresnahan BW. A targeted systematic review of cost analyses for implementation of simulation-based education in healthcare. SAGE Open Med 2020; 8: 2050312120913451
- 7 Rolston DM, Li T, Owens C. et al. Mechanical, team-focused, video-reviewed cardiopulmonary resuscitation improves return of spontaneous circulation after emergency department implementation. J Am Heart Assoc 2020; 9 (06) e014420
- 8 Low D, Clark N, Soar J. et al. A randomised control trial to determine if use of the iResus© application on a smart phone improves the performance of an advanced life support provider in a simulated medical emergency. Anaesthesia 2011; 66 (04) 255-262
- 9 Fitzgerald M, Cameron P, Mackenzie C. et al. Trauma resuscitation errors and computer-assisted decision support. Arch Surg 2011; 146 (02) 218-225
- 10 Hsu JM. Digital health technology and trauma: development of an app to standardize care. ANZ J Surg 2015; 85 (04) 235-239
- 11 Roitsch CM, Patricia KE, Hagan JL, Arnold JL, Sundgren NC. Tablet-based decision support tool improves performance of neonatal resuscitation: a randomized trial in simulation. Simul Healthc 2020; 15 (04) 243-250
- 12 Corazza F, Fiorese E, Arpone M. et al. The impact of cognitive aids on resuscitation performance in in-hospital cardiac arrest scenarios: a systematic review and meta-analysis. Intern Emerg Med 2022; 17 (07) 2143-2158
- 13 Hejjaji V, Malik AO, Peri-Okonny PA. et al. Mobile app to improve house officers' adherence to advanced cardiac life support guidelines: quality improvement study. JMIR Mhealth Uhealth 2020; 8 (05) e15762
- 14 Field LC, McEvoy MD, Smalley JC. et al. Use of an electronic decision support tool improves management of simulated in-hospital cardiac arrest. Resuscitation 2014; 85 (01) 138-142
- 15 Siebert JN, Ehrler F, Combescure C. et al. A mobile device app to reduce time to drug delivery and medication errors during simulated pediatric cardiopulmonary resuscitation: a randomized controlled trial. J Med Internet Res 2017; 19 (02) e31
- 16 Siebert JN, Ehrler F, Combescure C. et al; PedAMINES Trial Group. A mobile device application to reduce medication errors and time to drug delivery during simulated paediatric cardiopulmonary resuscitation: a multicentre, randomised, controlled, crossover trial. Lancet Child Adolesc Health 2019; 3 (05) 303-311
- 17 Grundgeiger T, Hahn F, Wurmb T, Meybohm P, Happel O. The use of a cognitive aid app supports guideline-conforming cardiopulmonary resuscitations: a randomized study in a high-fidelity simulation. Resusc Plus 2021; 7: 100152
- 18 Crabb DB, Hurwitz JE, Reed AC. et al. Innovation in resuscitation: a novel clinical decision display system for advanced cardiac life support. Am J Emerg Med 2021; 43: 217-223
- 19 Chu AL, Ziperstein JC, Niccum BA, Joice MG, Isselbacher EM, Conley J. STAT: mobile app helps clinicians manage inpatient emergencies. Healthcare (Amst) 2021; 9 (04) 100590
- 20 Nelson McMillan K, Rosen MA, Shilkofski NA, Bradshaw JH, Saliski M, Hunt EA. Cognitive aids do not prompt initiation of cardiopulmonary resuscitation in simulated pediatric cardiopulmonary arrests. Simul Healthc 2018; 13 (01) 41-46
- 21 Clay-Williams R, Colligan L. Back to basics: checklists in aviation and healthcare. BMJ Qual Saf 2015; 24 (07) 428-431
- 22 Müller SD, Lauridsen KG, Palic AH, Frederiksen LN, Mathiasen M, Løfgren B. Mobile app support for cardiopulmonary resuscitation: development and usability study. JMIR Mhealth Uhealth 2021; 9 (01) e16114
- 23 Meaney PA, Bobrow BJ, Mancini ME. et al; CPR Quality Summit Investigators, the American Heart Association Emergency Cardiovascular Care Committee, and the Council on Cardiopulmonary, Critical Care, Perioperative and Resuscitation. Cardiopulmonary resuscitation quality: [corrected] improving cardiac resuscitation outcomes both inside and outside the hospital: a consensus statement from the American Heart Association. Circulation 2013; 128 (04) 417-435
- 24 Chan PS, Tang Y; American Heart Association's Get With the Guidelines®-Resuscitation Investigators. Risk-standardizing rates of return of spontaneous circulation for in-hospital cardiac arrest to facilitate hospital comparisons. J Am Heart Assoc 2020; 9 (07) e014837
- 25 Yakar MN, Yakar ND, Akkılıç M, Karaoğlu RO, Mingir T, Turgut N. Clinical outcomes of in-hospital cardiac arrest in a tertiary hospital and factors related to 28-day survival: a retrospective cohort study. Turk J Emerg Med 2022; 22 (01) 29-35
- 26 Rohlin O, Taeri T, Netzereab S, Ullemark E, Djärv T. Duration of CPR and impact on 30-day survival after ROSC for in-hospital cardiac arrest-a Swedish cohort study. Resuscitation 2018; 132: 1-5
- 27 Young AK, Maniaci MJ, Simon LV. et al. Use of a simulation-based advanced resuscitation training curriculum: impact on cardiopulmonary resuscitation quality and patient outcomes. J Intensive Care Soc 2020; 21 (01) 57-63
- 28 Douthit NT, McBride CM, Townsley EC. Increasing internal medicine resident confidence in leading inpatient cardiopulmonary resuscitations and improving patient outcomes. J Med Educ Curric Dev 2020; 7: 2382120520923716
- 29 Merchant RM, Topjian AA, Panchal AR. et al; Adult Basic and Advanced Life Support, Pediatric Basic and Advanced Life Support, Neonatal Life Support, Resuscitation Education Science, and Systems of Care Writing Groups. Part 1: Executive Summary: 2020 American Heart Association Guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation 2020; 142 (16_suppl_2): S337-S357
- 30 Sterz J, Gutenberger N, Stefanescu MC. et al. Manikins versus simulated patients in emergency medicine training: a comparative analysis. Eur J Trauma Emerg Surg 2022; 48 (05) 3793-3801
- 31 Tyerman J, Luctkar-Flude M, Graham L, Coffey S, Olsen-Lynch E. Pre-simulation preparation and briefing practices for healthcare professionals and students: a systematic review protocol. JBI Database Syst Rev Implement Reports 2016; 14 (08) 80-89
- 32 Ancker JS, Edwards A, Nosal S, Hauser D, Mauer E, Kaushal R; with the HITEC Investigators. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak 2017; 17 (01) 36
Publication History
Received: 25 April 2024
Accepted: 18 July 2024
Article published online: 02 October 2024
© 2024. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany