Large language models versus traditional textbooks: optimizing learning for plastic surgery case preparation.
Hinson, C., Stingl, C. S., Nazerali, R. BMC Medical Education. 2025; 25(1): 984.

Abstract
BACKGROUND: Large language models (LLMs), such as ChatGPT-4 and Gemini, represent a new frontier in surgical education by offering dynamic, interactive learning experiences. Despite their potential, concerns about the accuracy, depth of knowledge, and bias in LLM responses persist. This study evaluates the effectiveness of LLMs in aiding surgical trainees in plastic and reconstructive surgery through comparison with traditional case-preparation textbooks.

METHODS: Six representative cases from key areas of plastic and reconstructive surgery (craniofacial, hand, microsurgery, burn, gender-affirming, and aesthetics) were selected. Four types of questions were developed for each case to cover clinical anatomy, indications, contraindications, and complications. Responses from LLMs (ChatGPT-4 and Gemini) and textbooks were compared using surveys distributed to medical students, research fellows, residents, and attending surgeons. Reviewers rated each response on accuracy, thoroughness, usefulness for case preparation, brevity, and overall quality using a 5-point Likert scale. Statistical analyses, including ANOVA and unpaired t-tests, were conducted to assess the differences between LLM and textbook responses.

RESULTS: A total of 90 surveys were completed. LLM responses were rated as more thorough (p<0.001) but less concise (p<0.001) than textbook responses. Textbooks were rated superior for answering questions on contraindications (p=0.027) and complications (p=0.014). ChatGPT was perceived as more accurate (p=0.018), thorough (p=0.002), and useful (p=0.026) than Gemini. Gemini was rated lower in quality (p=0.30) compared to ChatGPT, and was inferior to textbook answers for burn-related questions (p=0.017) and anatomical questions (p=0.013).

CONCLUSION: While LLMs show promise in generating thorough educational content, they require improvement in conciseness, accuracy, and utility for practical case preparation. ChatGPT generally outperforms Gemini, indicating variability in LLM capabilities. Further development should focus on enhancing accuracy and consistency to establish LLMs as reliable tools in medical education and practice.
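The Methods compare mean Likert ratings between response sources with an unpaired (two-sample) t-test. As a minimal sketch of that comparison, the snippet below computes a pooled-variance Student's t statistic on invented example ratings; the rating values and group sizes are hypothetical for illustration only and are not the study's data.

```python
import math
from statistics import mean, variance

def unpaired_t(a, b):
    """Student's two-sample t statistic with pooled variance
    (assumes equal variances, as in a standard unpaired t-test)."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical 1-5 Likert "thoroughness" ratings (not the study's data)
llm_ratings = [5, 4, 5, 4, 5, 4, 5, 5, 4, 5]
textbook_ratings = [3, 4, 3, 4, 3, 3, 4, 3, 4, 3]

t_stat = unpaired_t(llm_ratings, textbook_ratings)
print(f"t = {t_stat:.2f}")
```

The resulting t statistic would be referred to a t distribution with n₁ + n₂ − 2 degrees of freedom to obtain the p-values reported in the Results; in practice a library routine such as a two-sample t-test in a statistics package would also handle the p-value computation.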
DOI: 10.1186/s12909-025-07550-8
PubMed ID: 40597031