Abstract
Background
Patients increasingly use chatbots powered by artificial intelligence to seek health information. However, reliable studies on the accuracy and reproducibility of the information provided by these models are lacking. We therefore conducted a study investigating ChatGPT's responses to questions about otosclerosis.
Methods
Ninety-six otosclerosis-related questions were collected from internet searches and from the websites of professional institutions and societies. The questions were divided into four subcategories and posed to the latest version of ChatGPT Plus, and the responses were assessed by two otorhinolaryngology surgeons. Accuracy was graded as correct, incomplete, mixed, or irrelevant. Reproducibility was evaluated by comparing the consistency of the two answers generated for each question.
Results
The overall accuracy and reproducibility rates of GPT-4o for correct answers were 64.60% and 89.60%, respectively. By subcategory, accuracy and reproducibility rates were 64.70% and 91.20% for basic knowledge; 64.0% and 92.0% for diagnosis & management; 52.95% and 82.35% for medical & surgical treatment; and 75.0% and 90.0% for operative risks & postoperative period, respectively. No significant differences were found among the question subcategories in terms of accuracy or reproducibility (p = 0.073 and p = 0.752, respectively).
Conclusion
GPT-4o achieved satisfactory accuracy, except in the diagnosis & management and medical & surgical treatment categories, while reproducibility was generally high across all categories. Given GPT-4o's audio and visual communication capabilities, the model can be used, under the supervision of a medical professional, to provide medical information and support for patients with otosclerosis.