Item metadata

dc.contributor.author: Eckrich, Johanna
dc.contributor.author: Ellinger, Jörg
dc.contributor.author: Cox, Alexander
dc.contributor.author: Stein, Johannes
dc.contributor.author: Ritter, Manuel
dc.contributor.author: Blaikie, Andrew
dc.contributor.author: Kuhn, Sebastian
dc.contributor.author: Buhr, Christoph Raphael
dc.date.accessioned: 2024-04-03T09:30:07Z
dc.date.available: 2024-04-03T09:30:07Z
dc.date.issued: 2024-04-03
dc.identifier: 300916197
dc.identifier: a52d52e9-4d1c-49c4-92ad-3488ac199a47
dc.identifier: 85189903280
dc.identifier.citation: Eckrich, J, Ellinger, J, Cox, A, Stein, J, Ritter, M, Blaikie, A, Kuhn, S & Buhr, C R 2024, 'Urology consultants versus large language models: potentials and hazards for medical advice in urology', BJUI Compass, vol. Early View. https://doi.org/10.1002/bco2.359
dc.identifier.issn: 2688-4526
dc.identifier.other: RIS: urn:7195D15AB83DA25B09E60EA5010BAE91
dc.identifier.other: ORCID: /0000-0001-7913-6872/work/157140228
dc.identifier.uri: https://hdl.handle.net/10023/29591
dc.description.abstract: Background: Current interest surrounding large language models (LLMs) will lead to an increase in their use for medical advice. Although LLMs offer huge potential, they also pose potential misinformation hazards. Objective: This study evaluates three LLMs answering urology-themed clinical case-based questions by comparing the quality of their answers to those provided by urology consultants. Methods: Forty-five case-based questions were answered by consultants and LLMs (ChatGPT 3.5, ChatGPT 4, Bard). Answers were blindly rated by four consultants on a six-step Likert scale in the categories 'medical adequacy', 'conciseness', 'coherence' and 'comprehensibility'. Possible misinformation hazards were identified, a modified Turing test was included, and the character count was matched. Results: The consultants received higher ratings in every category. The LLMs' overall performance in the language-focused categories (coherence and comprehensibility) was relatively high, but their medical adequacy was significantly poorer than that of the consultants. Possible misinformation hazards were identified in 2.8% to 18.9% of answers generated by LLMs, compared with <1% of the consultants' answers. The LLMs also received poorer conciseness ratings and produced a higher character count. Among the individual LLMs, ChatGPT 4 performed best in medical accuracy (p < 0.0001) and coherence (p = 0.001), whereas Bard received the lowest scores. Generated responses were correctly attributed to their source with 98% accuracy for LLMs and 99% for consultants. Conclusions: The quality of the consultants' answers was superior to that of the LLMs in all categories. Although LLM answers achieved high semantic scores, their lack of medical accuracy creates potential misinformation hazards from LLM 'consultations'. Further investigation of newer LLM generations is necessary.
dc.format.extent: 7
dc.format.extent: 589386
dc.language.iso: eng
dc.relation.ispartof: BJUI Compass
dc.subject: Artificial intelligence (AI)
dc.subject: Bard
dc.subject: Chatbots
dc.subject: ChatGPT
dc.subject: Digital health
dc.subject: Global health
dc.subject: Large language models (LLMs)
dc.subject: Low- and middle-income countries (LMICs)
dc.subject: Telehealth
dc.subject: Telemedicine
dc.subject: Urology
dc.subject: DAS
dc.title: Urology consultants versus large language models : potentials and hazards for medical advice in urology
dc.type: Journal article
dc.contributor.institution: University of St Andrews. School of Medicine
dc.contributor.institution: University of St Andrews. Sir James Mackenzie Institute for Early Diagnosis
dc.contributor.institution: University of St Andrews. Infection and Global Health Division
dc.identifier.doi: https://doi.org/10.1002/bco2.359
dc.description.status: Peer reviewed

