School of Medicine Publications
Document Type
Article
Publication Date
10-2025
Abstract
Background: Low health literacy among patients hinders comprehension of care instructions and worsens outcomes, yet most otolaryngology patient materials and chatbot responses to medical inquiries exceed the recommended sixth- to eighth-grade reading level. Whether chatbots can be pre-programmed to provide accurate, plain-language responses has yet to be studied. This study aims to compare the response readability of a GPT model customized for plain language with that of GPT-4 when answering common otolaryngology patient questions.
Methods: A custom GPT was created and provided with thirty-three questions from Polat et al. (Int J Pediatr Otorhinolaryngol., 2024), and their GPT-4 answers were reused with permission. Questions were grouped by theme. Readability was calculated with the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) via an online calculator. A board-certified, practicing otolaryngologist assessed content similarity and accuracy. The primary outcome was readability, measured by FKGL (0-18; equivalent to United States grade level) and FRE (0-100; higher scores indicate greater readability).
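The study scored responses with an online calculator; as an illustration only, the two metrics can be computed directly from the published Flesch formulas. The sketch below uses a naive vowel-group syllable heuristic (an assumption on my part, not the study's method), so its values may differ slightly from dedicated readability tools.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, dropping a trailing silent 'e'.
    Real calculators use syllable dictionaries and are more accurate."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (FKGL, FRE) using the standard Flesch formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    return fkgl, fre
```

For example, a short plain-language passage yields a far lower FKGL and higher FRE than a single dense polysyllabic sentence, which is the contrast the study's primary outcome captures.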
Results: The custom GPT reduced FKGL by an average of 4.2 grade levels (95% confidence interval [CI]: 3.2, 5.1; p < 0.001) and increased FRE by an average of 17.3 points (95% CI: 12.5, 21.7; p < 0.001). Improvements remained significant in three of four theme subgroups (p < 0.05). Readability was consistent across question types, and variances were equal between models. Expert review confirmed overall accuracy and content similarity.
Conclusion: Preprogramming a custom GPT to generate plain-language instructions yields outputs that meet Centers for Medicare & Medicaid Services readability targets without significantly compromising content quality. Tailored chatbots could enhance patient communication in otolaryngology clinics and other medical settings.
Recommended Citation
Alsabawi, Y., Quesada, P. R., & Rouse, D. T. (2025). Readability of custom chatbot vs. GPT-4 responses to otolaryngology-related patient questions. American journal of otolaryngology, 46(5), 104717. https://doi.org/10.1016/j.amjoto.2025.104717
Publication Title
American journal of otolaryngology
DOI
10.1016/j.amjoto.2025.104717
Academic Level
medical student

Comments
Copyright © 2025 The Authors. Published by Elsevier Inc. All rights reserved.
http://creativecommons.org/licenses/by-nc-nd/4.0/