CC BY-NC-ND 4.0 · Indian J Radiol Imaging
DOI: 10.1055/s-0044-1788605
Original Article

Exploring Radiology Postgraduate Students' Engagement with Large Language Models for Educational Purposes: A Study of Knowledge, Attitudes, and Practices

Braja Behari Panda (affiliation 2)

Author affiliations:
1. Department of Radiodiagnosis, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
2. Department of Radiodiagnosis, Veer Surendra Sai Institute of Medical Sciences and Research, Burla, Odisha, India
3. Department of Radiodiagnosis, Mysore Medical College and Research Institute, Mysore, India
4. Department of Otorhinolaryngology and Head and Neck Surgery, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
5. Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
Funding: None.
 

Abstract

Background: The integration of large language models (LLMs) into medical education has received increasing attention as a potential tool to enhance learning experiences. However, there remains a need to explore radiology postgraduate students' engagement with LLMs and their perceptions of their utility in medical education. Hence, we conducted this study to investigate radiology postgraduate students' knowledge, attitudes, and practices regarding LLMs in medical education.

Methods: A cross-sectional quantitative survey was conducted online via Google Forms. Participants from all over India were recruited via social media platforms and snowball sampling. A previously validated questionnaire was used to assess knowledge, attitudes, and practices regarding LLMs. Descriptive statistical analysis was employed to summarize participants' responses.

Results: A total of 252 radiology postgraduate students (139 [55.16%] males and 113 [44.84%] females) with a mean age of 28.33 ± 3.32 years participated in the study. Nearly half of the participants (47.62%) were familiar with LLMs, and most (71.82%) believed LLMs could be incorporated alongside traditional teaching–learning tools. Participants were open to including LLMs as a learning tool (71.03%) and thought these models would provide comprehensive medical information (62.7%). Residents took the help of LLMs when they did not get the desired information from books (46.43%) or Internet search engines (59.13%). The overall scores for knowledge (3.52 ± 0.58), attitude (3.75 ± 0.51), and practice (3.15 ± 0.57) differed significantly (analysis of variance [ANOVA], p < 0.0001), with the highest score in attitude and the lowest in practice. However, no significant differences were found in the scores for knowledge (p = 0.64), attitude (p = 0.99), or practice (p = 0.25) by year of training.

Conclusion: Radiology postgraduate students are familiar with LLMs and recognize their potential benefits in postgraduate radiology education. Although they have a positive attitude toward the use of LLMs, they are concerned about the models' limitations and use them only in limited situations for educational purposes.



Introduction

The integration of newer technology into medical education has witnessed significant advancements in recent years.[1] The most recent addition is the potential integration of large language models (LLMs).[2] LLMs like ChatGPT, Gemini (formerly Bard), and Copilot (formerly Bing) can potentially be used in medical education, particularly in the context of postgraduate training.[3] [4] [5] These models can serve as valuable resources for accessing vast amounts of medical literature, clinical guidelines, and case studies. LLMs can aid in the comprehension of complex medical concepts by simplifying them, in the interpretation of diagnostic imaging, and in the formulation of differential diagnoses.[3] [6] [7] However, despite their benefits, LLMs also pose several limitations that must be considered, including concerns regarding the accuracy and reliability of generated information and potential biases in data sources.[8] Therefore, while LLMs hold promise for enhancing medical education, their integration must be approached thoughtfully, with a focus on maximizing benefits while mitigating potential limitations.[9] [10]

LLMs are revolutionizing radiology education by offering versatile and interactive learning experiences. These models assist in structuring and organizing radiology reports, making them more comprehensible for learners.[11] Additionally, they simulate text-based radiology board–style examinations, providing learners with practical exposure and assessment opportunities.[4] [12] LLMs also offer differential diagnoses based on imaging patterns, enhancing learners' understanding of various pathologies and their visual manifestations.[13] [14] Moreover, they suggest follow-up imaging in accordance with established guidelines, reinforcing evidence-based practices.[15] [16] By integrating these functionalities, LLMs provide learners with personalized learning experiences, support critical thinking, and promote a deeper understanding of radiological concepts, ultimately enhancing radiology education.[3]

Despite the growing interest and enthusiasm surrounding the integration of LLMs into medical education, the perceptions of radiology postgraduate students remain unexplored: their current knowledge, their attitude toward integrating LLMs into radiology, and how they use these tools in learning and teaching. Therefore, this study aims to address this gap by surveying radiology postgraduates about their knowledge, attitudes, and practices regarding LLMs.



Materials and Methods

Type and Setting

This study employed a cross-sectional quantitative survey conducted online to investigate radiology postgraduate students' engagement with LLMs in postgraduate medical education. The survey was hosted on Google Forms, and participants from all states of India were eligible to participate.



Sampling Methods

We used a snowball sampling method to recruit participants. Radiology postgraduate students in any year of study at private or government-run medical colleges or institutions were eligible for inclusion. We shared the survey link via a social media messaging application and asked participants to forward it to other potential radiology residents.



Questionnaire

We created a form in Google Forms with a section for demographic details of the participants, including age, sex, year of study, course (MD or DNB), state of study, and type of institution (private, government, or institute of national importance; [Supplementary Material], available in the online version only). This section also contained the informed consent form. The second part was the questionnaire, which had three domains, one each for knowledge, attitude, and practice, with six questions per domain. There were 26 questions in total, including the demographic questions. All questions were compulsory except the last one (an open comment section) where participants could provide their opinions. The questionnaire had been used in a previous study in India, where it showed satisfactory internal consistency and reliability.[17] It is free for noncommercial use, as stated by its authors; we additionally obtained permission from the questionnaire's developer.



Data Collection

Data were collected entirely online via Google Forms. Because all questions were compulsory, all submitted responses were complete. The message used to disseminate the survey link stated the inclusion criteria and requested voluntary participation. When an interested participant clicked the link, it opened a Google Forms page containing the informed consent text, the demographics section, and the survey proper. After submission, the researchers immediately received a digital copy via Google Forms. Data collection spanned 1 month, from January 25 to February 25, 2024. After the final day, the data were downloaded from the server for further analysis.



Data Analysis

Descriptive statistical analysis was employed to summarize participants' responses to the questionnaire items. Responses were coded for quantitative analysis on a 5-point Likert scale (strongly agree = 5; agree = 4; neutral = 3; disagree = 2; strongly disagree = 1). For each student, the average score for a domain was determined by summing the scores of the six responses and dividing the total by 6. The average scores across all students were then calculated for each domain and presented as mean ± standard deviation. Additionally, scores were analyzed year-wise. Frequencies and percentages were calculated to describe the distribution of responses across categories within each domain (knowledge, attitude, and practice). Categorical data were compared with the chi-squared goodness-of-fit test, where a significant result indicates that the observed distribution deviates from an equal (chance) distribution. Analysis of variance (ANOVA) with post hoc analysis was used to compare the mean scores of knowledge, attitude, and practice. The textual data were analyzed by two authors, who reached a consensus to extract the themes; the themes are presented with relevant quotations. We used Microsoft Excel 2010 to calculate percentages and GraphPad Prism 9.5.0 for the chi-squared tests. A p-Value of less than 0.05 was considered statistically significant.
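For illustration, the scoring steps described above can be reproduced in a few lines of Python. This is a minimal sketch, not the authors' actual workflow (they used Excel and GraphPad Prism); the file name "responses.csv" and the column names K1–K6, A1–A6, and P1–P6 are hypothetical placeholders for the six items of each domain.

```python
# Minimal sketch of the scoring pipeline, assuming a hypothetical CSV export
# with columns K1..K6, A1..A6, P1..P6 holding Likert labels per student.
import pandas as pd
from scipy import stats

LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

df = pd.read_csv("responses.csv")

domains = {"knowledge": [f"K{i}" for i in range(1, 7)],
           "attitude":  [f"A{i}" for i in range(1, 7)],
           "practice":  [f"P{i}" for i in range(1, 7)]}

# Code each Likert label numerically, then average the six items of a
# domain to get one score per student (sum of six responses divided by 6).
for name, items in domains.items():
    df[name] = df[items].apply(lambda col: col.map(LIKERT)).mean(axis=1)

# Summarize each domain as mean +/- SD across all students (cf. Table 5).
for name in domains:
    print(f"{name}: {df[name].mean():.2f} +/- {df[name].std():.2f}")

# One-way ANOVA comparing the three domain scores; a post hoc test
# (e.g., Tukey's HSD) would follow a significant result.
f, p = stats.f_oneway(df["knowledge"], df["attitude"], df["practice"])
print(f"ANOVA across domains: F = {f:.2f}, p = {p:.4g}")
```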



Results

A total of 252 radiology postgraduate students (139 [55.16%] males and 113 [44.84%] females) with a mean age of 28.33 ± 3.32 years participated in the study. The demographics are presented in [Table 1]. Ages were similar between sexes, and both sexes were similarly represented. However, more participants were pursuing MD radiology, with the majority studying in private institutions and in the third year of study.

Table 1. Characteristics of the participants

| Parameter | Category | Value | p-Value |
|---|---|---|---|
| Age (mean ± SD), y | Overall | 28.33 ± 3.32 | 0.37[a] |
| | Male | 28.5 ± 3.2 | |
| | Female | 28.12 ± 3.46 | |
| Sex, n (%) | Male | 139 (55.16) | 0.057[b] |
| | Female | 113 (44.84) | |
| Course, n (%) | MD | 206 (81.74) | <0.0001[b] |
| | DNB | 46 (18.25) | |
| Institution category, n (%) | Private | 146 (57.94) | <0.0001[b] |
| | Government | 88 (34.92) | |
| | Government INI | 18 (7.14) | |
| Year of study, n (%) | First | 63 (25.00) | 0.002[b] |
| | Second | 81 (32.14) | |
| | Third | 108 (42.86) | |

Abbreviations: DNB, Diplomate of National Board; INI, institution of national importance; MD, Doctor of Medicine; SD, standard deviation.

[a] p-Value of unpaired t-test between males and females.

[b] p-Value of chi-squared test.


State-wise distribution of the students is shown in [Fig. 1]. The top participating states were Odisha (24.6%), Karnataka (19.05%), Telangana (10.71%), Maharashtra (4.36%), Uttar Pradesh (4.36%), and Andhra Pradesh (4.36%).

Fig. 1 Number of participants from Indian states.

Knowledge of residents about LLMs is shown in [Table 2]. Nearly half of the participants (47.62%) were familiar with LLMs, with fair knowledge about how these models generate text, and most (71.82%) believed LLMs could complement the traditional teaching–learning process. However, almost a quarter (25.39%) of the participants were not familiar with LLMs, while another quarter (26.98%) responded neutrally regarding their familiarity.

Table 2. Responses of participants on the knowledge questions, n (%)

| Question | Strongly agree | Agree | Neutral | Disagree | Strongly disagree | p-Value |
|---|---|---|---|---|---|---|
| I am familiar with LLMs like ChatGPT, Google Bard, Microsoft Bing, or Perplexity | 20 (7.94) | 100 (39.68) | 68 (26.98) | 46 (18.25) | 18 (7.14) | <0.0001 |
| I understand how LLMs generate information and responses | 11 (4.37) | 79 (31.35) | 73 (28.97) | 67 (26.59) | 22 (8.73) | <0.0001 |
| LLMs can generate wrong information | 36 (14.29) | 128 (50.79) | 77 (30.56) | 9 (3.57) | 2 (0.79) | <0.0001 |
| LLMs can be used by both teachers and students | 52 (20.63) | 148 (58.73) | 42 (16.67) | 7 (2.78) | 3 (1.19) | <0.0001 |
| Using LLMs helps me simplify complicated medical concepts in radiology | 21 (8.33) | 106 (42.06) | 98 (38.89) | 23 (9.13) | 4 (1.59) | <0.0001 |
| LLMs can help along with traditional learning materials like textbooks, notes, e-books, etc. | 37 (14.68) | 144 (57.14) | 54 (21.43) | 13 (5.16) | 4 (1.59) | <0.0001 |

Note: p-Values are from the chi-squared test comparing the observed distribution with an expected equal distribution across all categories.
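As an illustration of the note above, the goodness-of-fit test for a single item can be reproduced with SciPy. This is a sketch only; the counts are taken from the first row of Table 2 (familiarity with LLMs), and the equal expected distribution is SciPy's default.

```python
# Chi-squared goodness-of-fit for one Likert item against an equal split
# across the five categories (SciPy's default expected frequencies).
from scipy.stats import chisquare

observed = [20, 100, 68, 46, 18]  # SA, A, N, D, SD; n = 252
chi2, p = chisquare(observed)     # expected = 252 / 5 per category
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p << 0.0001, as in Table 2
```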


The attitudes of the residents are shown in [Table 3]. They are open to including LLMs as a learning tool (71.03%) and think that these models would provide comprehensive medical information (62.7%). However, they also think that relying too heavily on the models might not develop their clinical reasoning skills (70.64%), and they do not blindly believe the accuracy of the generated content (80.16%).

Table 3. Responses of participants on the attitude questions, n (%)

| Question | Strongly agree | Agree | Neutral | Disagree | Strongly disagree | p-Value |
|---|---|---|---|---|---|---|
| I am open to including LLMs as extra learning tools for medical studies | 38 (15.08) | 141 (55.95) | 57 (22.62) | 10 (3.97) | 6 (2.38) | <0.0001 |
| LLMs would help by providing comprehensive medical information | 22 (8.73) | 136 (53.97) | 66 (26.19) | 22 (8.73) | 6 (2.38) | <0.0001 |
| Medical colleges should promote the use of LLMs in the teaching–learning process | 30 (11.90) | 107 (42.46) | 82 (32.54) | 22 (8.73) | 11 (4.37) | <0.0001 |
| LLMs could change how we learn and access medical knowledge | 33 (13.10) | 151 (59.92) | 50 (19.84) | 15 (5.95) | 3 (1.19) | <0.0001 |
| Relying too much on LLMs might not develop my clinical reasoning skills | 64 (25.40) | 114 (45.24) | 49 (19.44) | 22 (8.73) | 3 (1.19) | <0.0001 |
| There is a risk of learning the wrong concept; hence, I would not blindly believe it | 67 (26.59) | 135 (53.57) | 45 (17.86) | 4 (1.59) | 1 (0.40) | <0.0001 |

Note: p-Values are from the chi-squared test comparing the observed distribution with an expected equal distribution across all categories.


The responses in the practice domain are shown in [Table 4]. Residents take the help of LLMs when they do not get the desired information in books (46.43%) or Internet search engines (59.13%). However, a third of them (33.33%) do not gain confidence from LLM-generated content, and a substantial portion (43.65%) responded neutrally.

Table 4. Responses of participants on the practice questions, n (%)

| Question | Strongly agree | Agree | Neutral | Disagree | Strongly disagree | p-Value |
|---|---|---|---|---|---|---|
| I use LLMs to get clearer explanations on medical topics I'm learning | 10 (3.97) | 89 (35.32) | 83 (32.94) | 59 (23.41) | 11 (4.37) | <0.0001 |
| LLMs have shown me new resources and references for my medical studies | 13 (5.16) | 101 (40.08) | 90 (35.71) | 39 (15.48) | 9 (3.57) | <0.0001 |
| I use those only when I cannot get the information in books | 7 (2.78) | 110 (43.65) | 81 (32.14) | 48 (19.05) | 6 (2.38) | <0.0001 |
| I use those only when I cannot get the information in Google or other search engines | 23 (9.13) | 126 (50.00) | 69 (27.38) | 31 (12.30) | 3 (1.19) | <0.0001 |
| I adapt my self-study based on insights I get from LLMs | 5 (1.98) | 57 (22.62) | 97 (38.49) | 78 (30.95) | 15 (5.95) | <0.0001 |
| Using LLMs has made me more confident in talking about medical subjects | 6 (2.38) | 52 (20.63) | 110 (43.65) | 72 (28.57) | 12 (4.76) | <0.0001 |

Note: p-Values are from the chi-squared test comparing the observed distribution with an expected equal distribution across all categories.


The overall and year-wise scores in knowledge, attitude, and practice are detailed in [Table 5]. The analysis revealed that residents had the highest scores in attitude, lower scores in knowledge, and the lowest scores in practice, with these differences being statistically significant (p < 0.0001). However, no significant year-wise differences were found in the scores for knowledge (p = 0.64), attitude (p = 0.99), and practice (p = 0.25).

Table 5. Overall and year-wise scores in knowledge, attitude, and practice (mean ± SD)

| Domain | Overall | First | Second | Third | p-Value[a] |
|---|---|---|---|---|---|
| Knowledge | 3.52 ± 0.58 | 3.51 ± 0.59 | 3.57 ± 0.51 | 3.49 ± 0.62 | 0.64 |
| Attitude | 3.75 ± 0.51 | 3.75 ± 0.52 | 3.75 ± 0.46 | 3.76 ± 0.54 | 0.99 |
| Practice | 3.15 ± 0.57 | 3.23 ± 0.55 | 3.07 ± 0.52 | 3.16 ± 0.62 | 0.25 |
| p-Value[b] | <0.0001 | <0.0001 | <0.0001 | <0.0001 | |

[a] p-Value of analysis of variance (ANOVA) across first-, second-, and third-year students.

[b] p-Value of ANOVA among scores of knowledge, attitude, and practice.
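The two layers of p-values in Table 5 can likewise be sketched in Python. This assumes a hypothetical per-student table ("scores.csv") with a "year" column (1, 2, or 3) and one column per domain score; it is not the authors' actual file or workflow.

```python
# Sketch of the two ANOVA layers in Table 5.
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("scores.csv")  # hypothetical: year, knowledge, attitude, practice

# [a] Year-wise comparison within each domain (last column of Table 5).
for domain in ("knowledge", "attitude", "practice"):
    groups = [df.loc[df["year"] == y, domain] for y in (1, 2, 3)]
    f, p = f_oneway(*groups)
    print(f"{domain} by year: F = {f:.2f}, p = {p:.2f}")

# [b] Domain-wise comparison within each year (bottom row of Table 5).
for y in (1, 2, 3):
    sub = df[df["year"] == y]
    f, p = f_oneway(sub["knowledge"], sub["attitude"], sub["practice"])
    print(f"year {y} across domains: F = {f:.2f}, p = {p:.4g}")
```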


A total of 32 participants commented in the open-text field. We identified 12 themes from these texts; the themes and related quotations are presented in [Table 6].

Table 6. Themes and representative quotations from the open-ended responses

| Theme | Quotation(s) |
|---|---|
| Importance of LLMs in education | "Llama tutorials for residents and radiologists is very important" |
| Dependence on input quality | "The output generated by LLMs is very dependent on the prompt or type of input that is provided" |
| Need for specialized LLMs in radiology | "A LLM model specific for radiology is a tool needed which will eventually help the resistance all over the world" |
| Trustworthiness of LLMs | "LLMs can be aided tools but not trustworthy and dependable." "I believe at this stage LLM can be an adjunct to our learning process but to be used as a sole method of learning it needs more development" |
| Integration with traditional teachings | "LLMs should be integrated with medical knowledge in combination with traditional teachings not for sole reliance" |
| Limited awareness and experience | "I haven't used it so my responses are neutral." "Though I'm aware of LLMs I have almost never used it till now." "We have not much knowledge about the LLMs." "I haven't used or I'm not much dependent on LLMs so couldn't relate to most of the questions asked here" |
| Potential for future integration | "LLMs will be the future, so including LLM in MBBS and MD curriculum will be a futuristic idea" |
| Usage for report writing | "I think they help in writing reports in a better language" |
| Mixed experiences with accuracy | "I have personal experience with LLMs generating wrong concepts or explanation. However, in terms of analyzing and interpreting statistics-based data, it does a decent job" |
| Request for education and training | "Henceforth I'd request you to keep a short live discussion about the same to make it more understandable." "Kindly provide a platform for medical students and residents where we can learn to use LLM for making our subject knowledge better" |
| Cost consideration | "ChatGPT 4 is the best and everyone should use it, but the only drawback is its subscription is expensive" |
| Lack of awareness about benefits | "Till now, I am unaware of the benefits of LLM in radiology" |



Discussion

This study explored radiology postgraduate students' prevailing knowledge, their attitude toward the use of LLMs in medical education, and how they actually use the models for educational purposes. We found that a substantial proportion of radiology postgraduate students are familiar with LLMs, with many indicating awareness of prominent models such as ChatGPT, Google Bard (Gemini), Microsoft Bing (Copilot), and Perplexity. This suggests that LLMs have gained recognition within the radiology community as potential resources for enhancing learning experiences and accessing medical information. While many students acknowledge the potential of LLMs to simplify complex medical concepts, concerns regarding the accuracy and reliability of the information these models generate are evident. Respondents recognize the utility of LLMs as supplementary educational resources.

The majority of the students are ready to accept LLMs as supplementary educational resources, as they think these models can generate comprehensive medical information. However, they also report that they do not rely on the chatbots blindly, because they are concerned about the models generating false information. These findings indicate a need for critical evaluation of LLM responses and for additional education and practical training on LLM applications to bridge the gap between awareness and utilization.[8]

Although knowledge was adequate and the overall attitude was positive, practice has not yet caught up. Students mainly use LLMs to simplify topics, turning to the models when they cannot get clear information from books or Internet search engines like Google or Bing. They do not rely on these models for self-study, nor do they draw much confidence from the generated content for their studies.

Students are concerned about LLMs potentially hindering critical clinical reasoning skills and introducing incorrect concepts. This highlights the necessity for a balanced approach to LLM integration in medical education, considering both benefits and potential pitfalls.[7]

Numerous published studies have explored radiologists' opinions and perspectives on artificial intelligence (AI), and many have also explored medical students' perceptions of AI.[17] [18] [19] [20] [21] [22] Despite a comprehensive search, we did not come across any existing literature examining the knowledge, attitude, and practice (KAP) of radiology postgraduate students or radiologists regarding LLMs in medical education.

Biri et al assessed the knowledge, attitude, and practice regarding LLMs in medical education among undergraduate medical students at an Indian medical college and found positive attitudes toward the incorporation of LLMs but limited usage due to potential inaccuracies.[17] A study by Tung and Dong found that Malaysian medical students show awareness of AI and interest in learning more.[18] Buabbas et al found that students have a positive attitude toward AI in medical education, with the majority believing that AI can enhance their teaching and learning experiences.[19] Alkhaaldi et al, in their web-based study of 265 recently graduated medical students from the United Arab Emirates, found that students had minimal formal AI experience but positive attitudes toward its potential; students expressed optimism about AI's future role but stressed that a structured curriculum is required to prepare adequately for its integration into medicine.[20] In a study by Al Mohammad et al, radiologists and radiographers were surveyed to gauge their opinions on AI and its integration into the radiology department.[21] The study revealed a positive attitude among participants toward learning about AI and applying it in radiology practice; however, it also highlighted barriers to AI learning, the foremost being the absence of mentorship and guidance from experts. Li and Qin[22] surveyed 1,243 undergraduate and postgraduate students from 13 universities and 33 hospitals in China and found that 54.3% had prior experience with medical AI, with postgraduates demonstrating higher awareness. Factors positively influencing AI acceptance and intention to use included performance expectancy, habit, hedonic motivation, and trust. They concluded that future medical education should prioritize engaging and easily accessible courses to prepare students adequately for their careers.

It should be noted that LLMs like ChatGPT 3.5 are freely accessible, allowing for self-learning, and numerous free tutorials exist for learning ChatGPT. It is important to mention that the response generated by an LLM depends on the prompt's structure, and several websites offer training in prompt engineering.[23]
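To make the point about prompt structure concrete, here is a purely illustrative example (our wording, not taken from the study or any cited tutorial) contrasting an unstructured query with one that specifies role, task, format, and constraints:

```python
# Hypothetical prompts for a radiology learning query; only the structure
# differs, which in practice tends to change the focus and depth of the reply.
unstructured = "Tell me about ground-glass opacity."

structured = (
    "Role: radiology tutor for a first-year resident.\n"
    "Task: explain ground-glass opacity on chest CT.\n"
    "Format: (1) one-line definition, (2) five common causes, "
    "(3) two features distinguishing it from consolidation.\n"
    "Constraints: under 150 words; say 'unsure' rather than guessing."
)
```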

Furthermore, LLMs can help alleviate faculty shortages, aid in research and innovation, and promote the development of critical thinking skills.[24]



Limitation of the Study

It is important to acknowledge the limitations of this study for informed interpretation. The findings are relevant to this point in the development of LLMs and to students currently studying in different states of India. There was clustering of samples in some states, and some states had no representation at all; despite our best efforts, we were unable to get responses from students in those states. The results are based on self-reported responses, and we could not detect whether social desirability bias was present. A more randomized sampling method could help reduce bias.

This study could be extended with a balanced sample from all states and a larger number of participants for more generalizable results. Similar studies can be conducted in other countries. Future research is recommended to include a detailed analysis of qualitative data to gain deeper insights into the reasons behind students' attitudes and practices regarding LLMs, which could yield practical recommendations for educators on how to integrate LLMs into the curriculum effectively.[22] Future research may also explore longitudinal trends in radiology postgraduate students' engagement with LLMs, as well as the effectiveness of educational interventions aimed at proper utilization of LLMs for educational purposes. Additionally, studies exploring the impact of LLMs on learning outcomes, clinical decision-making, and patient care in radiology practice may be conducted.



Conclusion

This study provides insights into radiology postgraduate students' perceptions of and engagement with LLMs in medical education, highlighting both the potential benefits and the challenges associated with their utilization. Indian radiology postgraduate students are familiar with LLMs and recognize their potential benefits in postgraduate radiology education. Although they have a positive attitude toward the use of LLMs, they are concerned about the models' limitations and use them only in limited situations for educational purposes. Hence, improving LLMs for radiology education, together with proper training in their use, can help leverage their power for an augmented learning experience.



Conflict of Interest

None declared.

Acknowledgments

We extend our sincere gratitude to all the anonymous participants who took the time to complete the survey, contributing valuable insights to our study. We would also like to express our appreciation to Prof. M.V.K. Rao from the Department of Radiodiagnosis at MKCG Medical College and Hospital, Berhampur, Odisha, India, and Prof. Basanta Manjari Swain from the Department of Radiodiagnosis at KIMS, Bhubaneswar, Odisha, for their invaluable support and encouragement in motivating students to participate in the survey. Their assistance was instrumental in the successful completion of this research endeavor.


References

  • 1 Tokuç B, Varol G. Medical education in the era of advancing technology. Balkan Med J 2023; 40 (06) 395-399
  • 2 Abd-Alrazaq A, AlSaad R, Alhuwail D. et al. Large language models in medical education: opportunities, challenges, and future directions. JMIR Med Educ 2023; 9: e48291
  • 3 Akinci D'Antonoli T, Stanzione A, Bluethgen C. et al. Large language models in radiology: fundamentals, applications, ethical considerations, risks, and future directions. Diagn Interv Radiol 2024; 30 (02) 80-90
  • 4 Sarangi PK, Narayan RK, Mohakud S, Vats A, Sahani D, Mondal H. Assessing the capability of ChatGPT, Google Bard, and Microsoft Bing in solving radiology case vignettes. Indian J Radiol Imaging 2023; 34 (02) 276-282
  • 5 Das D, Kumar N, Longjam LA. et al. Assessing the capability of ChatGPT in answering first- and second-order knowledge questions on microbiology as per competency-based medical education curriculum. Cureus 2023; 15 (03) e36034
  • 6 Mondal H, Mondal S, Podder I. Using ChatGPT for writing articles for patients' education for dermatological diseases: a pilot study. Indian Dermatol Online J 2023; 14 (04) 482-486
  • 7 Ahn S. The impending impacts of large language models on medical education. Korean J Med Educ 2023; 35 (01) 103-107
  • 8 Safranek CW, Sidamon-Eristoff AE, Gilson A, Chartash D. The role of large language models in medical education: applications and implications. JMIR Med Educ 2023; 9: e50945
  • 9 Daungsupawong H, Wiwanitkit V. ChatGPT and radiology in the future: comment. Indian J Radiol Imaging 2023; 34 (02) 371-372
  • 10 Kapilamoorthy TR. Clinical radiology: past, present, and future-whither are we going? Indian J Radiol Imaging 2024; 34 (02) 361-364
  • 11 Sarangi PK, Lumbani A, Swarup MS. et al. Assessing ChatGPT's proficiency in simplifying radiological reports for healthcare professionals and patients. Cureus 2023; 15 (12) e50881
  • 12 Bhayana R, Krishna S, Bleakney RR. Performance of ChatGPT on a radiology board-style examination: insights into current strengths and limitations. Radiology 2023; 307 (05) e230582
  • 13 Kottlors J, Bratke G, Rauen P. et al. Feasibility of differential diagnosis based on imaging patterns using a large language model. Radiology 2023; 308 (01) e231167
  • 14 Sarangi PK, Irodi A, Panda S, Nayak DSK, Mondal H. Radiological differential diagnoses based on cardiovascular and thoracic imaging patterns: perspectives of four large language models. Indian J Radiol Imaging 2023; 34 (02) 269-275
  • 15 Rao A, Kim J, Kamineni M. et al. Evaluating GPT as an adjunct for radiologic decision making: GPT-4 versus GPT-3.5 in a breast imaging pilot. J Am Coll Radiol 2023; 20 (10) 990-997
  • 16 Rau A, Rau S, Zoeller D. et al. A context-based chatbot surpasses trained radiologists and generic ChatGPT in following the ACR Appropriateness Guidelines. Radiology 2023; 308 (01) e230970
  • 17 Biri SK, Kumar S, Panigrahi M, Mondal S, Behera JK, Mondal H. Assessing the utilization of large language models in medical education: insights from undergraduate medical students. Cureus 2023; 15 (10) e47468
  • 18 Tung AYZ, Dong LW. Malaysian medical students' attitudes and readiness toward AI (artificial intelligence): a cross-sectional study. J Med Educ Curric Dev 2023; 10: 23821205231201164
  • 19 Buabbas AJ, Miskin B, Alnaqi AA. et al. Investigating students' perceptions towards artificial intelligence in medical education. Healthcare (Basel) 2023; 11 (09) 1298
  • 20 Alkhaaldi SMI, Kassab CH, Dimassi Z. et al. Medical student experiences and perceptions of ChatGPT and artificial intelligence: cross-sectional study. JMIR Med Educ 2023; 9: e51302
  • 21 Al Mohammad B, Aldaradkeh A, Gharaibeh M, Reed W. Assessing radiologists' and radiographers' perceptions on artificial intelligence integration: opportunities and challenges. Br J Radiol 2024; 97 (1156): 763-769
  • 22 Li Q, Qin Y. AI in medical education: medical student perception, curriculum recommendations and design suggestions. BMC Med Educ 2023; 23 (01) 852
  • 23 Sarangi PK, Mondal H. Response generated by large language models depends on the structure of the prompt. Indian J Radiol Imaging 2024; 34 (03) 574-575
  • 24 Vagha S, Mishra V, Joshi Y. Reviving medical education through teachers training programs: a literature review. J Educ Health Promot 2023; 12: 277

Address for correspondence

Pradosh Kumar Sarangi, MD, PDF, EDiR
Department of Radiodiagnosis, All India Institute of Medical Sciences
Deoghar 814152, Jharkhand
India   

Publication History

Article published online:
19 July 2024

© 2024. Indian Radiological Association. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India

