Introduction The increasing use of large language models (LLMs) such as ChatGPT in scientific writing raises concerns about authenticity and accuracy in the biomedical literature. This study aims to quantify the prevalence of artificial intelligence (AI)-generated text in urology abstracts from 2010 to 2024 and to assess its variation across journal categories and impact factor quartiles.
Methods A retrospective analysis was conducted on 64,444 unique abstracts from 38 urology journals published between 2010 and 2024, retrieved from PubMed via the Entrez API. Abstracts were categorized by subspecialty and stratified by impact factor (IF). A synthetic reference corpus of 10,000 abstracts was generated using GPT-3.5-turbo. A mixture model estimated the annual proportion of AI-generated text using maximum likelihood estimation with Laplace smoothing. Calibration and specificity of the estimator were assessed using pre-LLM abstracts and controlled mixtures of real and synthetic text. Statistical analyses were performed using Python 3.13.
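To make the Methods concrete, the two-component mixture estimate can be sketched as follows. This is a minimal illustration, not the authors' implementation: the unigram bag-of-words likelihoods, the function names, and the grid-search optimizer are all illustrative assumptions; only the mixture form and the Laplace smoothing are taken from the abstract.

```python
from collections import Counter
import math

def smoothed_log_probs(docs, vocab, alpha=1.0):
    """Per-word log-probabilities with Laplace (add-alpha) smoothing."""
    counts = Counter(w for doc in docs for w in doc)
    total = sum(counts.values())
    V = len(vocab)
    return {w: math.log((counts[w] + alpha) / (total + alpha * V)) for w in vocab}

def estimate_ai_fraction(target_docs, log_p_human, log_p_ai, grid=201):
    """MLE of the mixture weight pi in
         P(doc) = (1 - pi) * P_human(doc) + pi * P_ai(doc),
    found by grid search over [0, 1]."""
    # Per-document log-likelihoods under each component.
    ll_h = [sum(log_p_human[w] for w in doc if w in log_p_human) for doc in target_docs]
    ll_a = [sum(log_p_ai[w] for w in doc if w in log_p_ai) for doc in target_docs]
    best_pi, best_ll = 0.0, float("-inf")
    for i in range(grid):
        pi = i / (grid - 1)
        ll = 0.0
        for h, a in zip(ll_h, ll_a):
            # Log-sum-exp of the two weighted components, for numerical stability.
            m = max(h, a)
            mix = (1 - pi) * math.exp(h - m) + pi * math.exp(a - m)
            ll += m + math.log(mix) if mix > 0 else float("-inf")
        if ll > best_ll:
            best_pi, best_ll = pi, ll
    return best_pi
```

Fitting one mixture per publication year, with the human component trained on pre-LLM abstracts and the AI component on the synthetic corpus, yields an annual estimate of the AI-generated proportion.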
Results The proportion of AI-like text was negligible from 2010 to 2019, rising to 1.8% in 2020 and 5.3% in 2024. In 2024, Men’s Health journals showed the highest proportion of AI-like text, while Oncology journals showed the lowest. Journals in the highest and lowest IF quartiles showed a higher proportion of AI-like text than those in the middle quartiles. Type-Token Ratio (TTR) remained stable across the study period. On the validation calibration set, the estimator achieved an area under the curve of 0.5326.
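The Type-Token Ratio reported in the Results is a standard lexical-diversity measure: the number of distinct word types divided by the total number of tokens. A minimal sketch is below; the function name and the assumption of pre-tokenized input are illustrative, and the abstract does not specify the tokenization used.

```python
def type_token_ratio(tokens):
    """Lexical diversity: distinct word types divided by total tokens.

    Returns 0.0 for an empty token list. Note that TTR is sensitive to
    text length, so comparisons are most meaningful across texts of
    similar length, such as abstracts.
    """
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)
```

For example, `type_token_ratio(["the", "model", "estimated", "the", "proportion"])` gives 0.8, since 4 of the 5 tokens are distinct.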
Conclusion Textual patterns similar to AI-generated language have risen sharply in urology abstracts since ChatGPT’s release in late 2022, and this trend varies by journal type and IF.
Patient summary We looked at whether large language models (LLMs) such as ChatGPT are influencing the way urology research is written. We found that signs of LLM-like text were rare in abstracts before 2020 but increased in recent years, especially after the release of ChatGPT. Some journal types use these tools more than others. These findings help raise awareness about the growing role of artificial intelligence in scientific communication.
Competing Interest Statement Wassim Kassouf has consulted or acted in advisory roles for Sesen Bio, Ferring, Roche, BMS, Merck, Janssen, Bayer, Astellas, Seagen, Pfizer/EMD Serono, and Photocure. Adel Arezki and James Tsui have no conflicts of interest.
Funding Statement This study did not receive any funding.
Author Declarations I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
Yes
I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.
Yes
I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
Yes
I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.
Yes
Data Availability All data produced in the present study are available upon reasonable request to the authors.