BMC Oral Health, vol. 25, no. 1, 2025 (SCI-Expanded)
Background: Artificial intelligence (AI) technologies have transformed fields such as economics, law, and healthcare. Large language models, like ChatGPT, have shown significant potential in dentistry, supporting diagnostic accuracy, treatment planning, and education. However, earlier versions of ChatGPT were limited to text-based data. The latest multimodal model, ChatGPT-4o, introduced in 2024, processes text, images, audio, and video, enabling broader applications in clinical education. This study evaluates ChatGPT-4o's diagnostic accuracy in endodontic cases, comparing it with dental students' performance.

Materials and methods: This study included two groups of dental students, 3rd-year and 5th-year, alongside ChatGPT-4o. Participants answered 15 multiple-choice questions designed using radiographs, clinical photographs, and patient histories. These questions, based on the American Association of Endodontists' Clinical Guidelines, were administered via Google Forms to the students and to ChatGPT-4o. Responses were categorized as correct, incorrect, or unanswered. Data were analyzed statistically.

Results: ChatGPT-4o demonstrated a higher accuracy rate and lower error rate than the students, with 91.4% correct and 8.2% incorrect responses. Third-year students had a correct response rate of 60.8%, while fifth-year students achieved 79.5%. A statistically significant difference was found between the study groups in correct response rates (p < 0.05), with ChatGPT-4o outperforming both student groups (p < 0.001). Additionally, fifth-year students showed a higher correct response rate than third-year students.

Conclusion: ChatGPT-4o demonstrates significant potential as a diagnostic support tool in dental education, particularly in endodontics. Its high diagnostic accuracy and consistency highlight its value as an innovative application in clinical training and decision-making.