Abstract | Although vaccines are instrumental in global health, mitigating infectious diseases and pandemic outbreaks, they can occasionally lead to adverse events (AEs). Recently, Large Language Models (LLMs) have shown promise in effectively identifying and cataloging AEs within clinical reports. Using data from the Vaccine Adverse Event Reporting System (VAERS) from 1990 to 2016, this study evaluates the capability of LLMs for AE extraction. Several prevalent LLMs, including GPT-2, GPT-3 variants, GPT-4, and Llama 2, were evaluated using the influenza vaccine as a use case. The fine-tuned GPT-3.5 model (AE-GPT) stood out, achieving an averaged micro F1 score of 0.704 for strict match and 0.816 for relaxed match. The encouraging performance of AE-GPT underscores the potential of LLMs in processing medical data, marking a significant stride towards advanced AE detection and suggesting generalizability to other AE extraction tasks.
Journal | PLoS One
Added to PubMed | 2024/3/21
Authors | Li, Yiming; Li, Jianfu; He, Jianping; Tao, Cui
Affiliations | McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States of America; Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL, United States of America.
PubMed link | https://www.ncbi.nlm.nih.gov/pubmed/38512919/
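
The abstract reports micro F1 under "strict" and "relaxed" matching. The paper's exact evaluation code is not given here, so the following is only a minimal sketch assuming the common definitions: strict match requires exact entity boundaries and type, relaxed match accepts any span overlap with the same type. The (start, end, type) tuple representation and function name are hypothetical, chosen purely for illustration.

# Minimal sketch of strict vs. relaxed micro F1 for extracted entities
# (assumed definitions; not the authors' evaluation script).

def micro_f1(gold_docs, pred_docs, relaxed=False):
    """Micro-averaged F1; each document is a list of (start, end, type) tuples."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_docs, pred_docs):
        matched = set()  # indices of gold entities already matched
        for p_start, p_end, p_type in pred:
            hit = False
            for i, (g_start, g_end, g_type) in enumerate(gold):
                if i in matched or p_type != g_type:
                    continue
                if relaxed:
                    ok = p_start < g_end and g_start < p_end   # any span overlap
                else:
                    ok = (p_start, p_end) == (g_start, g_end)  # exact boundaries
                if ok:
                    matched.add(i)
                    hit = True
                    break
            if hit:
                tp += 1
            else:
                fp += 1
        fn += len(gold) - len(matched)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Example: one gold AE span, one prediction with slightly shifted boundaries.
gold = [[(10, 25, "AE")]]
pred = [[(12, 25, "AE")]]
print(micro_f1(gold, pred, relaxed=False))  # 0.0 under strict match
print(micro_f1(gold, pred, relaxed=True))   # 1.0 under relaxed match

This illustrates why relaxed-match scores (0.816 in the abstract) are higher than strict-match scores (0.704): boundary disagreements are forgiven as long as the predicted span overlaps the gold span.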