An analysis of the yearly distribution of AI publications reveals a rapid increase starting in 2018, peaking in 2024. This trend highlights the growing significance of artificial intelligence in the field of otorhinolaryngology and suggests that this technology will become even more impactful in the coming years. The rapid growth in the number of studies, particularly in recent years, strongly indicates that AI will hold a prominent role in areas such as diagnosis, treatment, surgical interventions, and patient monitoring.
The geographical distribution of AI research in ORL reveals that the United States leads significantly, followed by China, Germany, Japan, and the United Kingdom. This distribution is likely influenced by several factors, including the economic strength and developmental status of these countries, which enable greater investment in research and development (R&D). Nations like the USA and China allocate substantial funding for scientific research, fostering innovation in cutting-edge fields such as AI. Additionally, these countries have a high density of advanced universities and research laboratories, which serve as hubs for technological development. The availability and accessibility of AI technologies in these regions also play a pivotal role, as do their well-established collaborations between academic and industrial sectors. Previous studies have highlighted that the infrastructure and resources in these nations significantly contribute to their higher productivity in scientific research [8, 9].
The most frequently used keyword in the ORL field is cochlear implant (CI). AI offers various solutions to enhance clinical applications and optimize user outcomes in the context of CI [11]. For instance, patients with cochlear implants often struggle to communicate in noisy environments: noise masks the speech signal, making speech difficult to perceive and understand, and complicates selective listening when multiple speakers are present. AI has been shown to improve speech intelligibility for cochlear implant users in noisy settings through noise reduction algorithms and speech enhancement methods [12, 13]. In cochlear implant surgery, numerous factors influence the postoperative effectiveness of the device, including the patient’s demographic data, cochlear nerve function, hearing test results, and cognitive perception. AI-based models have been developed that analyze individual patient data to predict postoperative success [14]. These algorithms predict potential outcomes by drawing on data from previous patients, and AI has been reported to assist in creating personalized treatment plans by predicting surgical success [15]. Additionally, AI-based efficiency tools enable remote device adjustments and accelerate programming processes [16, 17]. Robotic-assisted techniques and electrode placement strategies optimized with AI have also been shown to make surgical procedures safer and more effective [18].
The second most frequently studied topic in AI-related ORL research is head and neck cancer. Head and neck cancers are of significant importance in ORL practice due to their high mortality and morbidity rates. In this group of diseases, early diagnosis of tumors, accurate staging, and the development of effective treatment plans play a critical role in patient survival. Machine learning algorithms enhance treatment processes by predicting conditions such as lymph node metastasis and delays in postoperative adjuvant radiotherapy [19, 20]. Prognostic models derived from electronic health records and radiomic data have been shown to predict patient survival more precisely than traditional methods [21]. Moreover, AI-driven pathomic models provide critical insights into tumor biology and gene expression, enabling prognosis prediction [22]. AI-supported decision support systems and chatbots improve patient counseling during diagnosis and treatment [2], and are also effective in surgical planning and the management of postoperative complications [23]. In rehabilitation processes, such as evaluating voice and swallowing functions, AI offers personalized approaches to improve patients’ quality of life [24]. In this context, the integration of AI into clinical applications for head and neck cancers accelerates diagnostic and treatment processes while enabling the development of more personalized and effective treatment plans.
The third most common application is the processing of MRI images using AI. Because of the complex anatomical structures in the head and neck region, MRI is frequently used to provide detailed visualization and detect tumoral structures. However, challenges such as time-consuming analyses, the need for expert evaluation, and the complex nature of the images persist. AI enhances the effectiveness of MRI in ORL by addressing these challenges. Through radiomics and deep learning algorithms, AI can automatically analyze MRI data, predicting tumor size, malignancy risk, lymph node metastasis, bone invasion, and postoperative complications [25, 26]. For instance, it aids in assessing the malignancy risk of lesions in inverted papillomas, enabling accurate diagnosis and treatment planning [27]. AI-based models have achieved significant success in tasks such as tumor segmentation in vestibular schwannomas [28], differential diagnosis of parotid tumors [29], and predicting radiation-induced complications in nasopharyngeal cancer [30]. Additionally, by comparing CT and MRI in cholesteatoma diagnosis, AI has demonstrated its potential to improve imaging accuracy and play a vital role in clinical decision support systems [31]. These methods not only enhance diagnostic accuracy but also personalize treatment processes and accelerate clinical decision-making.
Hearing loss has also been a frequently researched topic using AI. Early diagnosis and intervention are crucial for patients with hearing loss, including preventable childhood cases that may delay language development and adult cases linked to depression and cognitive decline [32]. Predicting hearing loss in advance offers opportunities for early treatment, improving the effectiveness of hearing aid use, and preventing exposure-related losses in individuals working in noisy environments [33]. AI-based early warning systems have been developed across a wide spectrum, from workers exposed to industrial noise [34] to children with preventable hearing loss [35]. Additionally, in challenging conditions such as sudden sensorineural hearing loss, AI has emerged as a powerful tool for predicting disease prognosis through big data analysis and machine learning algorithms [36, 37].
AI has also been frequently studied in the context of patient education. Patients often turn to internet searches, social media platforms, or general health websites to access information. However, these sources pose significant risks regarding accuracy and reliability. Incorrect, incomplete, or unverified information may increase patients’ anxiety, lead them to inappropriate treatments, or result in unnecessary medical interventions. The complexity of medical terminology often makes it difficult for patients to comprehend accurate information and incorporate it into their healthcare decisions. Here, AI offers an effective solution for improving patient education. AI-powered chatbots like ChatGPT can answer patient questions, simplify medical information, and provide personalized insights [38]. For example, in otolaryngology, AI has facilitated patient education by making information about conditions such as anosmia, tracheotomy, and laryngopharyngeal reflux more understandable and accessible [39, 40, 41]. Additionally, AI has been noted to simplify complex information about surgical procedures, enhancing patient understanding [42].
Apart from these keywords, numerous other terms in the field of otorhinolaryngology (ORL) stand out in AI-related studies (Figs. 2 and 3). This suggests that AI will play an increasingly significant role in ORL in the future. The rapid growth of AI technologies in the ORL field in recent years has made substantial contributions to this discipline. Our bibliometric analysis revealed that trending keywords have shifted over the years in AI-related ORL research. While specific topics like hearing aids in 2018 and quality of life in 2019 were prominent, from 2020 onward, there has been a focus on more technical areas, such as audiometry, psychoacoustics, and audiograms. Recent years have highlighted advanced technologies like MRI and radiomic analysis, along with applications in chronic rhinosinusitis, laryngology, and surgical procedures. These trends demonstrate that AI is transforming not only diagnostic and treatment processes but also broader clinical applications, such as patient education and prognosis prediction. The observed shift in research focus from hearing-related technologies to diagnostic imaging, patient education, and holistic patient management may reflect advancements in imaging technologies and the growing recognition of AI’s potential beyond isolated therapeutic areas. While hearing-related technologies remain critical, the expansion of AI applications to encompass broader clinical contexts, such as radiomics and educational tools, aligns with an evolving emphasis on personalized and comprehensive care models in medicine. These shifts are likely driven by the increasing availability of AI tools capable of handling complex data types, such as imaging and clinical records, alongside a heightened focus on patient-centered care.
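The year-over-year shift in trending keywords described above can be traced computationally by counting keyword frequencies per publication year. A minimal sketch in Python follows; the records below are hypothetical placeholders for illustration, not the study’s actual bibliometric data:

```python
from collections import Counter, defaultdict

# Hypothetical (year, keyword list) records exported from a database search;
# illustrative only, not the data analyzed in this study
records = [
    (2018, ["hearing aids", "audiometry"]),
    (2019, ["quality of life", "cochlear implant"]),
    (2020, ["audiometry", "psychoacoustics", "audiogram"]),
    (2023, ["mri", "radiomics", "head and neck cancer"]),
    (2024, ["patient education", "chronic rhinosinusitis"]),
]

# Count keyword occurrences for each publication year
by_year = defaultdict(Counter)
for year, keywords in records:
    by_year[year].update(keywords)

# The most frequent keyword in each year indicates that year's trending topic
for year in sorted(by_year):
    keyword, count = by_year[year].most_common(1)[0]
    print(year, keyword, count)
```

In practice, dedicated bibliometric tools such as VOSviewer or the bibliometrix R package perform this kind of trend analysis on the full exported record set.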
In 2023, some of the priority trends in ORL included the potential applications of AI in MRI and head and neck cancer treatment. Another significant trend was radiomics. A study focusing on AI and radiomic analyses examined CT radiomic models developed to non-invasively predict granzyme A in head and neck squamous cell carcinoma and machine learning models for external auditory canal CT scans to evaluate the feasibility of endoscopic ear surgery. These technologies demonstrate the potential to enhance diagnostic accuracy and improve surgical processes [43]. Research on AI and clinical decision support systems in pediatric otitis media treatment highlights the efficacy of deep learning approaches in diagnosing atelectasis and attic retraction pockets using otoscopic images. Quality improvement efforts in clinical decision-making processes for pediatric otitis media treatment were also emphasized [44, 45]. Studies linking AI with prognosis illustrate the capabilities of AI-based deep learning models to predict treatment outcomes from nasal polyp histology [46] and forecast hearing prognosis after intact canal wall mastoidectomy and tympanoplasty. These advances improve surgical treatment processes and patient outcome predictions [47]. Furthermore, research on AI and CT underscores the potential of deep learning algorithms in differentiating eosinophilic from non-eosinophilic chronic rhinosinusitis in preoperative CT images [48] and detecting anatomical landmarks in temporal bone CT scans, enhancing diagnostic accuracy and surgical planning [49].
In 2024, AI applications focusing on patient education emerged as a significant trend in ORL. Chronic rhinosinusitis became another focal point, with AI playing a role in diagnosis [50], predicting postoperative outcomes [51], and patient education [52]. AI-based tools have been explored for their potential in assisting diagnosis and treatment across various otolaryngology fields, including head and neck cancers [2, 19, 22, 24], ENT diseases [5, 6, 7], pediatric otitis media [53], laryngopharyngeal cancer [3], cervical lymphadenopathy [54], and vocal cord leukoplakia [55]. These technologies offer improved accuracy, reliability, and innovative approaches compared with traditional methods. Studies linking AI and surgery comprehensively examined the roles of ChatGPT and other AI-based tools in otolaryngology and head and neck surgery, focusing on diagnosis, treatment, patient education, preoperative counseling, postoperative monitoring, clinical decision support systems, and surgical processes. These tools were found to enhance accuracy, reliability, readability, and clinical applications [5, 6, 56, 57]. Research on AI and computer vision demonstrated their capacity to bridge gaps between clinical needs and AI capabilities, improving diagnostic accuracy and surgical workflows [7]. Additionally, AI-driven studies in laryngology highlighted ChatGPT’s potential in answering patient questions, improving educational materials, supporting clinical decisions, addressing ethical challenges, ensuring guideline compliance, and creating evaluative educational materials [5, 6, 7].
The identified research trends underscore AI’s transformative potential in clinical decision-making, patient care, and healthcare efficiency within ORL. For example, advancements in diagnostic imaging, such as MRI and CT-based radiomics, have significantly enhanced diagnostic accuracy and surgical planning. Patient education tools, including AI-driven chatbots, improve patient understanding and engagement, leading to better compliance with treatment plans. Furthermore, AI’s role in prognostic modeling enables more precise predictions of treatment outcomes, facilitating personalized care pathways. Collectively, these advancements contribute to streamlined workflows, reduced clinical errors, and improved healthcare delivery, underscoring the far-reaching benefits of AI integration in ORL.
The trends identified in ORL research largely parallel broader developments in AI applications within the medical domain. Across various specialties, there has been a notable transition from task-specific applications, such as predictive models for isolated conditions, to integrative approaches that leverage AI for diagnostic imaging, patient education, and multidisciplinary management. For example, radiomics and deep learning, which are now prevalent in ORL, have also seen widespread adoption in oncology and neurology [58]. Similarly, patient education tools, such as AI-driven chatbots, are gaining traction in fields like primary care and pediatrics [59, 60]. These parallels suggest that ORL is keeping pace with advancements in the broader medical community, demonstrating how interdisciplinary innovations can translate into specialty-specific benefits.
The factor analysis results provided valuable insights into the conceptual framework of AI research in ORL by categorizing the literature into four main clusters (Fig. 5). The first and most comprehensive cluster represents the primary applications of AI in ORL, focusing on topics such as cochlear implants, head and neck cancer, hearing loss, and patient education. This cluster emphasizes AI’s role in diagnosing, treating, and managing various ORL pathologies, reflecting its wide-ranging clinical applications. The second cluster highlights AI applications in visual data analysis, particularly MRI-based imaging methods. This includes advanced topics like radiomics, chronic rhinosinusitis, and the segmentation of acoustic neuroma and vestibular schwannoma, showcasing how AI enhances diagnostic accuracy and surgical planning. The third cluster is centered around CT imaging and its applications in conditions such as oropharyngeal cancer, skull base pathologies, and paranasal sinus diseases, illustrating AI’s potential in improving imaging-based diagnosis and treatment strategies. The fourth cluster focuses on hearing-related AI applications, including audiograms, audiometry, and psychoacoustics. These findings underscore the diverse scope of AI in ORL, spanning from imaging innovations to patient-centered applications, and highlight its transformative potential across multiple domains in the field.
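The cluster structure described above can be approximated by treating keywords as nodes in a co-occurrence network and extracting its connected components, which conveys the intuition behind co-word clustering. The sketch below is a simplified, standard-library-only illustration; the article keyword lists are hypothetical, chosen to mirror the four clusters reported here, and actual factor analysis (e.g., correspondence analysis in bibliometric software) is more involved than simple connectivity:

```python
from itertools import combinations

# Hypothetical per-article keyword lists (illustrative only, not the study's data)
articles = [
    ["cochlear implant", "hearing loss"],
    ["head and neck cancer", "patient education", "cochlear implant"],
    ["mri", "radiomics"],
    ["mri", "vestibular schwannoma", "chronic rhinosinusitis"],
    ["ct", "oropharyngeal cancer", "skull base"],
    ["ct", "paranasal sinus"],
    ["audiogram", "audiometry", "psychoacoustics"],
]

# Union-find over keywords: co-occurring keywords join the same group,
# so clusters emerge as connected components of the co-occurrence network
parent = {}

def find(k):
    parent.setdefault(k, k)
    while parent[k] != k:
        parent[k] = parent[parent[k]]  # path halving
        k = parent[k]
    return k

def union(a, b):
    parent[find(a)] = find(b)

for art in articles:
    for a, b in combinations(art, 2):
        union(a, b)

# Group keywords by their component root
clusters = {}
for kw in parent:
    clusters.setdefault(find(kw), set()).add(kw)

for members in clusters.values():
    print(sorted(members))
```

With these toy inputs the components reproduce four groups analogous to the clusters in Fig. 5: clinical applications, MRI/radiomics, CT-based imaging, and hearing-related topics.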
The clustering of research topics in ORL-AI highlights opportunities for collaboration and interdisciplinary innovation. For instance, the integration of imaging-based clusters (MRI and CT applications) with patient-centered clusters (education and hearing loss management) could lead to the development of comprehensive decision-support tools. These synergies emphasize the potential for collaboration between radiologists, surgeons, and data scientists. Additionally, clusters such as radiomics and psychoacoustics suggest untapped research areas, including the role of AI in bridging auditory diagnostics with imaging technologies. Identifying these intersections can inform targeted research initiatives, fostering advancements that benefit multiple disciplines.
This study has certain limitations. First, as the data were obtained solely from a specific database (WoS), studies available in other databases were not included in the analysis, which might limit the scope of the literature. Nonetheless, WoS was chosen as a reliable data source for scientific bibliometric studies due to its inclusion of high-quality, peer-reviewed publications, its broad coverage, and its provision of detailed citation analyses [8, 9]. Second, as this is a cross-sectional observational study, the analysis is based on data available at a specific point in time. Future indexing and newly published studies may alter these findings. Additionally, bibliometric analyses, while valuable for identifying trends and patterns, may benefit from complementary qualitative approaches to provide deeper insights into the content and implications of the research. Despite these limitations, the study offers a comprehensive overview of AI research in ORL, highlighting significant trends and areas for future exploration.