Evaluating the Readability and Reliability of Large Language Model Generated Information About Anxiety and Depression

Abstract

Artificial intelligence (AI) large language models (LLMs) hold great potential to transform psychiatry and mental health care by delivering relevant and tailored mental health information. This study aimed to evaluate the quality of mental health information generated by LLMs by determining its accessibility and reliability and assessing any bias present. Generative Pre-trained Transformer-4 (GPT-4) (San Francisco, California: OpenAI), Gemini 1.5 Flash (Mountain View, California: Google LLC), and Large Language Model Meta AI 3.2 (Llama 3.2) (Menlo Park, California: Meta Inc.) were prompted with 20 questions commonly asked about anxiety and depression. The responses were evaluated using Flesch-Kincaid readability tests to quantify their ease of understanding through grade-level and reading ease score measures. The text was subsequently analyzed using a modern DISCERN score to assess the reliability of the health-related information presented. Finally, the LLM responses were evaluated for stigmatizing language using communication and language guidelines for mental health. A significant difference in grade levels and reading ease scores was observed between GPT-4 and Llama (p < 0.01) and between Gemini and Llama (p < 0.01), with both GPT-4 and Gemini having higher readability scores. No significant difference was observed in the grade-level and reading ease scores between GPT-4 and Gemini (p > 0.05). All three models reported moderate reliability scores, but no significant differences were observed (p > 0.05). GPT-4, Gemini, and Llama included stigmatizing phrases in 10%, 15%, and 20% of their responses, respectively. These phrases were attributed to descriptions of mental health conditions and substance use; however, no significant differences were observed in the proportions of stigmatizing phrases across the three models (p > 0.05).
Furthermore, comparisons between anxiety and depression responses for each model revealed no significant differences in readability, reliability, or bias. All models demonstrated the ability to generate mental health information that generally satisfied criteria for accessibility, reliability, and minimal bias, with GPT-4 and Gemini reporting higher readability than Llama. However, the lack of additional resources cited in responses and the occasional presence of stigmatizing phrases indicate that additional model training and fine-tuning, specifically for mental health applications, may be necessary before these tools can be deployed on a large scale for mental health care.

Competing Interest Statement

The authors have declared no competing interest.

Funding Statement

This study did not receive any funding.

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

Yes

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).

Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.

Yes

Data Availability

All data produced in the present study are available upon request to the authors.
