DOI: 10.3414/ME10-01-0072
Systematic Prioritization of the STARE-HI Reporting Items
An Application to Short Conference Papers on Health Informatics Evaluation
Publication History
received: 10 October 2010
accepted: 08 March 2011
Publication Date: 19 January 2018 (online)
Summary
Background: We previously devised and published STARE-HI, a guideline for reporting health informatics evaluation studies that is formally endorsed by IMIA and EFMI.
Objective: To develop a prioritization framework of ranked reporting items that assists authors when reporting health informatics evaluation studies in space-restricted conference papers, and to apply this framework to measure the quality of recent health informatics conference papers on evaluation studies.
Method: We deconstructed the STARE-HI guideline to identify reporting items. In a web-based survey, we invited a total of 111 authors of health informatics evaluation studies and reviewers and editors of health informatics conference proceedings to score these reporting items on a scale ranging from “0 – not necessary in a conference paper” to “10 – essential in a conference paper”. From the responses we derived a mean priority score for each item. All evaluation papers published in the proceedings of MIE2006, Medinfo2007, MIE2008 and AMIA2008 were rated on these items by two reviewers. From these ratings, a priority-adjusted completeness score was computed for each paper.
Results: We identified 104 reporting items from the STARE-HI guideline. The response rate for the survey was 59% (66 out of 111). The most important reporting items (mean score ≥ 9) were “Interpret the data and give an answer to the study question (in Discussion)”, “Whether it is a laboratory, simulation or field study (in Methods – study design)” and “Description of the outcome measure/evaluation criteria (in Methods – study design)”. Within each reporting area, the statistically significantly more important reporting items were distinguished from the less important ones. Four reporting items had a mean score ≤ 6. The mean priority-adjusted completeness of evaluation papers from recent health informatics conferences was 48% (range 14–78%).
Conclusion: We produced a list of STARE-HI reporting items ranked by their relevance for inclusion in space-limited conference papers. The priority-adjusted completeness scores demonstrated room for improvement in the analyzed conference papers. We believe this prioritization framework is an aid to improving the quality and utility of conference papers on health informatics evaluation studies.
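The Method above condenses each paper's reporting quality into a single priority-adjusted completeness score. A minimal sketch of one plausible way to compute such a score, assuming it is the priority-weighted fraction of applicable STARE-HI items that a paper reports (the exact formula is defined in the full paper; all item names and values below are hypothetical):

```python
# Sketch only: NOT the authors' published formula. Assumes the score is the
# priority-weighted share of applicable STARE-HI items that a paper reports.

def priority_adjusted_completeness(priority, reported, applicable=None):
    """priority: dict mapping item -> mean priority score (0-10) from the survey.
    reported: set of items the paper actually reports.
    applicable: set of items applicable to the paper (defaults to all items).
    Returns a percentage between 0 and 100."""
    items = applicable if applicable is not None else set(priority)
    total = sum(priority[i] for i in items)
    achieved = sum(priority[i] for i in items if i in reported)
    return 100.0 * achieved / total if total else 0.0

# Hypothetical illustration with three items:
priority = {"study design": 9.2, "outcome measure": 9.0, "funding": 5.8}
score = priority_adjusted_completeness(priority, {"study design", "outcome measure"})
print(round(score, 1))  # 75.8 (percent): omitting a low-priority item costs little
```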