DOI: 10.1055/s-0043-1768025
Accuracy and Utility of Internet Image Search as a Learning Tool for Retinal Pathology
Abstract
Purpose Ophthalmology residency training relies heavily on visual and pattern recognition-based learning. In parallel with traditional reference texts, internet search via Google Image Search (GIS) is commonly used and offers an accessible fund of reference images for ophthalmology trainees seeking rapid exposure to images of retinal pathology. However, the accuracy and quality of this tool in this context are unknown. We aim to evaluate the accuracy and quality of GIS images of selected retinal pathologies.
Methods A cross-sectional study was performed of GIS results for 15 common and 15 rare retinal diseases drawn from the American Academy of Ophthalmology residency textbook series. A total of 300 evaluable image results were assessed for image accuracy and source accountability in consultation with a vitreoretinal surgeon.
Results A total of 377 images were reviewed, with 77 excluded prior to final analysis. A total of 288 (96%) search results accurately portrayed the retinal disease being searched, whereas 12 (4%) depicted an erroneous diagnosis. More images of common retinal diseases were from patient education Web sites than were images of rare diseases (p < 0.01). Significantly more images of rare retinal diseases were found in peer-reviewed sources (p = 0.01).
Conclusions GIS search results yielded a high level of accuracy for the purposes of ophthalmic education. Despite the ease and rapidity of accessing multimodal retinal imaging examples, this tool may be best suited as a supplementary resource for resident learning, given the residual rate of inaccurate images, the lack of sufficient supporting information, and the frequent orientation of source Web sites toward patient education.
Keywords
vitreoretinal education - online medical education tools - medical education technology - visual learning

Ophthalmology training for the diagnosis of retinal pathology is rooted in visual learning and image-based pattern recognition. The Google Image Search (GIS) platform is the most used internet search platform in the United States[1]; it offers free access to pictures of eye pathology and may be used by ophthalmology trainees to rapidly acquire reference images. Correctly diagnosing eye diseases depends on trainees' abilities to identify and synthesize features seen on the ophthalmic exam. In particular, the field of retina requires the interpretation of multiple modalities including the fundus examination, fluorescein angiography, optical coherence tomography (OCT), and others. The critical importance of visual learning in the field has been quantified: a study of medical students with art observation training demonstrated significant improvement in their descriptions of retinal disease fundus photos compared with students without this training.[2]
Prior studies have demonstrated that free internet resources including Google, YouTube, and Wikipedia, in addition to traditional reference sources such as textbooks and peer-reviewed articles, are widely used as medical education tools by trainees of various subspecialties, including ophthalmology.[3] [4] GIS offers an accessible fund of reference images for ophthalmology trainees seeking rapid exposure to pictures of retinal pathology. In contrast to traditional reference sources, however, the quality of these free internet resources is inconsistent; Web sites may provide incorrect or incomplete information for medical professionals and patients.[5] [6] [7] [8] [9] [10] [11] [12] [13] [14] GIS uses the proprietary PageRank algorithm to determine the order in which images appear on the results page. This ordering is not based on fidelity to the search terms but is generated algorithmically according to proprietary rulesets that prioritize images from Web sites linked by other prominent Web sites.[15] It is therefore unknown whether GIS is an accurate source for medical education.
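For readers unfamiliar with the ranking principle cited above,[15] the following is a minimal, hypothetical sketch of PageRank-style power iteration in Python; the three-site link graph is invented purely for illustration, and Google's production ranking is proprietary and far more elaborate.

```python
# Minimal PageRank-style power iteration (illustrative sketch only;
# the link graph below is hypothetical, not Google's actual system).
import numpy as np

def pagerank(adjacency: np.ndarray, damping: float = 0.85,
             iterations: int = 100) -> np.ndarray:
    """adjacency[i, j] = 1 if page j links to page i."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=0)
    out_degree[out_degree == 0] = 1.0   # guard against pages with no outlinks
    m = adjacency / out_degree          # column-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        rank = (1 - damping) / n + damping * (m @ rank)
    return rank

# Hypothetical 3-site graph: sites 0 and 1 both link to site 2,
# so site 2 earns the top rank regardless of its content accuracy.
links = np.array([[0, 0, 1],
                  [0, 0, 0],
                  [1, 1, 0]], dtype=float)
print(pagerank(links))   # largest score at index 2
```

The point is structural: link prominence, not clinical correctness, drives the ordering of image results.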
To our knowledge, this is the first study to evaluate the accuracy and quality of GIS results for selected retinal pathologies. This study aims to assess whether GIS image results and their associated Web sites provide reliable educational images for common and rare retinal diseases. Our findings may help delineate the role of GIS in medical education in ophthalmology and other specialties dependent on visual learning.
Methods
GIS was performed for 30 retinal diseases, comprising 15 common and 15 rare diseases drawn from the Basic and Clinical Science Course (BCSC Residency Sets, American Academy of Ophthalmology, San Francisco, CA) Retina section, a comprehensive textbook series provided to all ophthalmology residents in the United States ([Table 1]).[16] Diseases were selected by a vitreoretinal surgeon and a resident ophthalmologist to represent a diverse range of rare and common inflammatory, infectious, inherited, and vascular pathologies. Conditions with no change in the funduscopic appearance of the retina (e.g., occult macular dystrophy) were excluded for the purpose of the study. Diseases were subdivided into common and rare conditions based on incidence data and in consultation with a vitreoretinal physician. No human subjects were involved in this study.
For each GIS search, Google preferences and location tracking were disabled. Searches were performed and reviewed by one ophthalmology resident using a single, private desktop computer. Image results were reviewed with a vitreoretinal surgeon if there was any question as to whether an image accurately depicted a searched disease or whether the image should be included in the analysis. The first 10 image results that met inclusion criteria were evaluated. Images were excluded for reasons that would make them less likely to be used by a trainee as a reference image: nonclinical images (i.e., illustrations), nonretina images, low-resolution images precluding determination of fundus/imaging details (<250 pixels along the image's longest axis, or by subjective judgment for multi-image collections), and diagrammatic pages or slideshow images consisting predominantly of text (Fig. 1A and B). An example of a “nonretina” image is shown in Fig. 1A: an external photograph of the absent red reflex in a child with Coats disease. The purpose of these exclusion criteria was to select images most likely to be used by trainees for learning retinal pathology. We tabulated the number of top image results needed to reach 10 images meeting criteria for each search.
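As a concrete illustration of the low-resolution screen described above, here is a minimal sketch, assuming Python with the Pillow imaging library; the file names and helper function are hypothetical, and only the 250-pixel longest-axis cutoff comes from the study's stated criterion.

```python
# Hypothetical screening helper; only the 250-pixel cutoff is from the study.
from PIL import Image

MIN_LONGEST_AXIS_PX = 250  # study's low-resolution exclusion threshold

def passes_resolution_screen(path: str) -> bool:
    """True if the image's longest axis is at least 250 pixels."""
    with Image.open(path) as img:
        width, height = img.size
    return max(width, height) >= MIN_LONGEST_AXIS_PX

# Hypothetical downloaded search results:
candidates = ["result_01.jpg", "result_02.jpg", "result_03.jpg"]
included = [p for p in candidates if passes_resolution_screen(p)]
```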
For included images, accuracy of the picture, type of imaging modality (fundus photograph, fluorescein angiography, OCT, or multimodal), and the linked Web site were evaluated. Multimodal images were defined as a collection of different types of images (e.g., OCT and fundus photograph). In judging the accuracy of multimodal images, all component pictures had to correctly depict the searched disease pathology. Each image's origin Web site was assessed for the presence of supporting text describing the clinical entity, whether it was a peer-reviewed source or originated from a patient education Web site (i.e., an ophthalmic practice Web site), and its target audience (patient vs. medical professional vs. unspecified). For some images, full supporting text was viewable only with a login or online payment, and this was noted in the results. The number of references on each Web site was tabulated; Web sites whose references were unavailable behind a login were counted as having zero references. The target audience was determined from a Web site's language. The image and Web site results were reviewed by one ophthalmology resident (L.V.C.) in consultation with a vitreoretinal-trained ophthalmologist (D.X.).
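The per-image fields described in this paragraph could be captured in a simple record structure; the sketch below is hypothetical, with field names invented for illustration rather than taken from the authors' data collection instrument.

```python
# Hypothetical record of the per-image fields described above.
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    FUNDUS_PHOTOGRAPH = "fundus photograph"
    FLUORESCEIN_ANGIOGRAPHY = "fluorescein angiography"
    OCT = "optical coherence tomography"
    MULTIMODAL = "multimodal"

@dataclass
class ImageRecord:
    search_term: str           # disease searched, e.g., "Coats disease"
    accurate: bool             # image correctly depicts the searched disease
    modality: Modality
    source_url: str
    has_supporting_text: bool  # text describing the clinical entity
    peer_reviewed: bool
    ophthalmic_practice: bool  # patient education / practice Web site
    target_audience: str       # "patient", "medical professional", "unspecified"
    reference_count: int       # 0 when references are hidden behind a login
```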
Statistical analysis was performed using Excel 2021 (Microsoft Corporation, Redmond, WA). Chi-squared test of independence was used to evaluate differences between the source Web sites for common versus rare retinal diseases and correct versus incorrect images. Student's unpaired two-tailed t-test was used to compare the number of references on Web sites for common versus rare retinal disease images. Fisher's exact test was used to compare the number of incorrect images originating from peer-reviewed sources and from ophthalmic practice Web sites. p-Values less than 0.05 were considered statistically significant.
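The analysis was run in Excel; as a hedged illustration, the same tests can be expressed in Python with SciPy. The 2×2 tables below reproduce counts reported in the Results (4/66 vs. 3/51 incorrect images by source; 24/150 vs. 42/150 peer-reviewed images by disease rarity); the t-test line is structural only, since the per-page reference counts are not tabulated in the paper.

```python
# Sketch of the study's statistical tests in SciPy (Excel was used in
# the actual analysis); the 2x2 counts are taken from the Results.
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind

# Fisher's exact test: incorrect vs. correct images by source type.
incorrect_by_source = [[4, 62],   # peer-reviewed: incorrect, correct
                       [3, 48]]   # ophthalmic practice: incorrect, correct
_, p_fisher = fisher_exact(incorrect_by_source)
print(f"Fisher's exact p = {p_fisher:.2f}")   # ~1.0, as reported

# Chi-squared test of independence: peer-reviewed sourcing by rarity.
peer_reviewed_by_rarity = [[24, 126],   # common: peer-reviewed, other
                           [42, 108]]   # rare: peer-reviewed, other
chi2, p_chi2, _, _ = chi2_contingency(peer_reviewed_by_rarity)
print(f"chi-squared p = {p_chi2:.3f}")  # close to the reported p = 0.01

# Unpaired two-tailed t-test on per-page reference counts (raw counts
# are not tabulated in the paper, so this line is structural only):
# t_stat, p_t = ttest_ind(common_ref_counts, rare_ref_counts)
```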
Results
GIS searches were performed between November 2020 and December 2020, and analysis was completed in 2021. Three hundred image results from 30 disease search terms were included in the analysis (Fig. 1C and D). To obtain 10 images meeting criteria for each of the 30 retinal diseases searched, a total of 377 image results were initially reviewed, with 77 results excluded from analysis (Fig. 2). Among the excluded results, 46 were of common diseases and 31 of rare diseases (p = 0.13). The most frequently excluded images were nonclinical results (35 for common diseases and 7 for rare diseases, p < 0.001).
The accuracy of the resultant images was compared with the intended search term. Ninety-six percent (288/300) of included images accurately portrayed the diseases searched. Additionally, of the 300 included images, 52 (17%) had annotations within the image to identify pathologic features. A total of 12 (4%) images were of an incorrect diagnosis, including 6 images each for common and rare diseases. Images were inaccurate because they depicted a normal fundus (5 results), a related but distinct condition (2 results: branch retinal vein occlusion rather than central retinal vein occlusion, and branch retinal artery occlusion rather than central retinal artery occlusion), or a different disease entirely (5 results). Four of the 12 (33%) incorrect images originated from peer-reviewed sources, and 3 of 12 (25%) originated from ophthalmic practice Web sites. There was no significant difference in the number of incorrect images that originated from peer-reviewed sources (4/66) versus ophthalmic practice sites (3/51) (p = 1.0, Fisher's exact test).
Each image's source Web site was also evaluated against the image search term. The 300 included images originated from 125 unique Web sites ([Table 2]). Each Web site was evaluated for the presence of supporting text, accuracy of supporting text relative to the intended search term, intended audience, and whether it was peer-reviewed or produced by an ophthalmic practice ([Table 3]). Between the common and rare diseases, there were significant differences in the number of images that came from peer-reviewed sources and ophthalmic practice Web sites. More images of common retinal diseases were sourced from ophthalmic practice Web sites than images of rare diseases (p < 0.01). Eighty-six percent (44/51) of the ophthalmic practice Web sites were aimed at patient education. Common disease images (47/150) were more likely than rare disease images (21/150) to originate from Web sites written for a patient audience (p < 0.01). Additionally, significantly more images of rare retinal diseases (42/150) than of common diseases (24/150) originated from peer-reviewed sources (p = 0.01). Of the 66 images from peer-reviewed sources, 18 (27%) were from case reports, 22 (33%) from review articles, and the remainder from original reports (26 images, 39%).
A total of 45 images originated from Web sites that had no accompanying text or that required a subscription or membership to view the full text, including 17 peer-reviewed Web sites (38%). All Web sites with available text provided accurate information on the intended search item, except for one page (Wikipedia.org) that described branch retinal vein occlusion instead of central retinal vein occlusion. Web pages for common retinal diseases had a mean of 9.8 references, compared with a mean of 11.9 references for rare retinal diseases (p = 0.6).
Discussion
This study evaluated the accuracy and viability of GIS, the most popular internet search platform in the United States,[1] as an accessible tool for ophthalmology trainees seeking rapid exposure to retinal diseases. The search tool displayed reasonable accuracy among retinal disorders with 96% of included images correctly representing the search terms in our panel of common and rare retinal diagnoses. We found no difference in the number of incorrect images for rare (six images, 2%) and common (six images, 2%) retinal disease searches, and no association was found between incorrect images and origin Web sites (whether peer-reviewed or not).
Despite this reasonable level of accuracy, GIS may have limitations as a primary learning tool in ophthalmology. Seventy-seven top image results were excluded from analysis because they were either of low quality unsuitable for visual learning or not clinical images of the retina. GIS ranks images according to factors such as links from prominent Web sites, relevance metrics, and user engagement.[15] Therefore, in contrast to textbooks or traditional teaching materials, GIS results are not curated for medical accuracy toward the intended search term. The strength of GIS as a learning tool is the rapid exposure it offers to pictures of retinal pathology; however, users should recognize the presence of nonclinical and low-quality images among top search results.
We also characterized the source Web sites and any associated supporting text. Most images came from Web sites with accurate supporting text, but there were limitations for use as an educational resource. Significantly more images of common diseases originated from ophthalmic practice Web sites compared with rare disease images, and most of the ophthalmic practice Web sites were directed toward patient education. Despite accurate content, Web sites aimed at patient education are less likely to fulfill ophthalmology trainees' educational needs. Significantly more images of rare diseases came from peer-reviewed sources than common disease images. Twenty-six percent of images from peer-reviewed sources came from Web sites that required a login to view the full text. Lack of access to supporting text hampers confirmation of image correctness and reduces utility as an educational resource.
To our knowledge, only one prior study has examined internet image search as a medical learning tool. Freeman et al assessed the accuracy of internet images of glenoid labral (shoulder) injuries in orthopedic surgery, finding image inaccuracy rates of 4 to 30%, a substantially lower accuracy than in our results.[14] Orthopedics and other medical specialties share image- and pattern-based recognition as a core component of medical competency, and image accuracy in educational materials is essential.
Our study had several limitations in its analysis of GIS results. Repeat searches were not performed to determine the variability of image results over time. A user's geographical location may also influence the order or presence of search results (e.g., results from a local ophthalmic practice); we disabled location services in our study to avoid this confounding variable. Web sites were evaluated for the presence and accuracy of supporting text; however, future studies should consider grading Web sites with an objective rubric for text quality. While prior studies have established that the internet is a widely used source for medical education, future studies should survey ophthalmology trainees to quantify their use of GIS and their learning preferences.
In this study, we evaluated internet image search through the lens of ophthalmic education, as a means for trainees to rapidly acquire images of retinal fundus pathology. Despite the accuracy of GIS results and their associated Web sites, GIS may be better suited as a supplementary, rather than primary, source for learning retinal pathology. It provides easy, free access to many reference images. However, most images are not annotated to identify abnormal features, nonclinical images frequently appear in top results, and linked Web sites are often aimed at patient education, reflecting the varied reasons why internet users (patients, ophthalmologists, and others) perform these searches. Nonetheless, we found that GIS provides acceptably accurate reference images of most retinal diseases. Trainees should nevertheless sort results judiciously to identify high-quality educational resources and favor trusted peer-reviewed sites.
Conflict of Interest
None declared.
References
- 1 StatCounter Global Stats. Browser, OS, search engine including mobile usage share. Accessed January 29, 2022 at: https://gs.statcounter.com/
- 2 Gurwin J, Revere KE, Niepold S, et al. A randomized controlled study of art observation training to improve medical student ophthalmology skills. Ophthalmology 2018; 125 (01) 8-14
- 3 Ekenze SO, Okafor CI, Ekenze OS, Nwosu JN, Ezepue UF. The value of internet tools in undergraduate surgical education: perspective of medical students in a developing country. World J Surg 2017; 41 (03) 672-680
- 4 Egle JP, Smeenge DM, Kassem KM, Mittal VK. The internet school of medicine: use of electronic resources by medical trainees and the reliability of those resources. J Surg Educ 2015; 72 (02) 316-320
- 5 Derakhshan A, Lee L, Bhama P, Barbarite E, Shaye D. Assessing the educational quality of ‘YouTube’ videos for facelifts. Am J Otolaryngol 2019; 40 (02) 156-159
- 6 Connelly TM, Khan MS, Alzamzami M, Cooke F. An evaluation of the quality and content of web-based stoma information. Colorectal Dis 2019; 21 (03) 349-356
- 7 Azer SA, AlSwaidan NM, Alshwairikh LA, AlShammari JM. Accuracy and readability of cardiovascular entries on Wikipedia: are they reliable learning resources for medical students?. BMJ Open 2015; 5 (10) e008187
- 8 Azer SA. Evaluation of gastroenterology and hepatology articles on Wikipedia: are they suitable as learning resources for medical students?. Eur J Gastroenterol Hepatol 2014; 26 (02) 155-163
- 9 Yacob M, Lotfi S, Tang S, Jetty P. Wikipedia in vascular surgery medical education: comparative study. JMIR Med Educ 2020; 6 (01) e18076
- 10 Scaffidi MA, Khan R, Wang C, et al. Comparison of the impact of Wikipedia, UpToDate, and a digital textbook on short-term knowledge acquisition among medical students: randomized controlled trial of three web-based resources. JMIR Med Educ 2017; 3 (02) e20
- 11 Smith DA. Situating Wikipedia as a health information resource in various contexts: a scoping review. PLoS One 2020; 15 (02) e0228786
- 12 Aykut A, Kukner AS, Karasu B, Palancıglu Y, Atmaca F, Aydogan T. Everything is ok on YouTube! Quality assessment of YouTube videos on the topic of phacoemulsification in eyes with small pupil. Int Ophthalmol 2019; 39 (02) 385-391
- 13 McKee HD, Jhanji V. Learning DMEK from YouTube. Cornea 2017; 36 (12) 1477-1479
- 14 Freeman R, Ashouri F, Papanikitas J, Ricketts D. Accuracy of internet images of glenoid labral injuries. Ann R Coll Surg Engl 2013; 95 (06) 418-420
- 15 Brin S, Page L. The anatomy of a large-scale hypertextual Web search engine. Comput Netw ISDN Syst 1998; 30 (01) 107-117
- 16 McCannel CA, Berrocal AM, et al. 2019-2020 BCSC: Basic and Clinical Science Course. San Francisco, CA: American Academy of Ophthalmology; 2019
Publication History
Received: August 14, 2022
Accepted: March 1, 2023
Article published online: April 12, 2023
© 2023. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA