Specialist consults in primary care and inpatient settings typically address complex clinical questions beyond standard guidelines. eConsults allow specialist physicians to review cases asynchronously and provide clinical answers without a formal patient encounter. Meanwhile, large language models (LLMs) have approached human-level performance on structured clinical tasks, but evaluating their real-world effectiveness is bottlenecked by time-intensive manual physician review. To address this, we evaluated two automated methods: LLM-as-judge and a decompose-then-verify framework that breaks AI answers into claims verifiable against human eConsult responses. Using 40 real-world physician-to-physician eConsults, we compared AI-generated responses to human answers using both physician raters and automated tools. LLM-as-judge outperformed decompose-then-verify, achieving human-level concordance assessment with an F1-score of 0.89 (95% CI: 0.75-0.96) and a Cohen's kappa of 0.75 (95% CI: 0.47-0.90), comparable to physician inter-rater agreement (κ = 0.69-0.90; 95% CI: 0.43-1.0).
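The reported agreement metrics can be illustrated with a small sketch of how binary concordance judgments (concordant = 1, discordant = 0) from an automated judge might be scored against physician reference labels. The label lists and function names below are illustrative assumptions, not the study's actual data or code:

```python
# Hedged sketch: scoring an automated judge's concordance labels against
# physician labels with F1 and Cohen's kappa. Labels here are made up.

def f1_score(truth, pred):
    # Counts over binary labels: 1 = concordant, 0 = discordant.
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def cohens_kappa(a, b):
    # Chance-corrected agreement between two binary raters.
    n = len(a)
    p_o = sum(1 for x, y in zip(a, b) if x == y) / n  # observed agreement
    # Expected agreement if the two raters labeled independently at their
    # own base rates.
    p_yes = (sum(a) / n) * (sum(b) / n)
    p_no = ((n - sum(a)) / n) * ((n - sum(b)) / n)
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)

physician = [1, 1, 0, 1, 0, 1, 1, 0]  # illustrative reference labels
judge     = [1, 1, 0, 1, 1, 1, 0, 0]  # illustrative LLM-as-judge labels
print(round(f1_score(physician, judge), 2))
print(round(cohens_kappa(physician, judge), 2))
```

In practice, library implementations such as scikit-learn's `f1_score` and `cohen_kappa_score` would typically be used, along with bootstrap resampling to obtain confidence intervals like those reported above.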
Competing Interest Statement: D.W. consults for Chronicle Medical Software. J.H.C. reports cofounding Reaction Explorer, which develops and licenses organic chemistry education software, as well as paid consulting fees from Sutton Pierce, Younker Hyde Macfarlane, and Sykes McAllister as a medical expert witness. He receives funding from the National Institutes of Health (NIH)/National Institute of Allergy and Infectious Diseases (1R01AI17812101), NIH/National Institute on Drug Abuse Clinical Trials Network (UG1DA015815, CTN-0136), the Stanford Artificial Intelligence in Medicine and Imaging Human-Centered Artificial Intelligence Partnership Grant, the NIH/NCATS Clinical & Translational Science Award (UM1TR004921), the Stanford Bio-X Interdisciplinary Initiatives Seed Grants Program (IIP) [R12], the NIH/Center for Undiagnosed Diseases at Stanford (U01 NS134358), and the American Heart Association Strategically Focused Research Network Diversity in Clinical Trials. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Funding Statement: Dr. Chen has received research funding support in part by NIH/National Institute of Allergy and Infectious Diseases (1R01AI17812101); NIH/NCATS Clinical & Translational Science Award (UM1TR004921); NIH/National Institute on Drug Abuse Clinical Trials Network (UG1DA015815, CTN-0136); Stanford Bio-X Interdisciplinary Initiatives Seed Grants Program (IIP) [R12] [JHC]; NIH/Center for Undiagnosed Diseases at Stanford (U01 NS134358); Stanford Institute for Human-Centered Artificial Intelligence (HAI); and the Gordon and Betty Moore Foundation (Grant #12409).
Author Declarations: I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
Yes
The details of the IRB/oversight body that provided approval or exemption for the research described are given below:
The IRB of Stanford University gave ethical approval for this work (IRB-47618).
I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.
Yes
I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
Yes
I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.
Yes
Data Availability: All code produced in the present study is available upon reasonable request to the authors.