Can Large Language Models Handle the Complexity of Family Medicine?

    Kenny Lin, MD, MPH
    Posted on July 8, 2024

    Should family physicians be excited or apprehensive about the potential applications of artificial intelligence (AI) and large language models (LLMs) in primary care? An article by Dr. Richard Young and colleagues in the Journal of the American Board of Family Medicine recently made the case for both. Observing that primary care is a “complex adaptive system,” the authors suggested that AI “will likely work when its tasks are limited in scope, have clean data that are mostly linear and deterministic, and fit well into existing workflows.” On the other hand, AI may struggle to incorporate contextual and relational factors, process noisy and inaccurate data, or document vague symptoms that do not indicate a single disease condition.

    In an editorial on chatbots and LLMs in the June 2024 issue of American Family Physician, Dr. Aaron Saguil discussed how family medicine practices are turning to LLMs to “help decrease administrative burden and combat burnout.” These tools can already compose visit notes, monitor patients remotely through interactive chats, and draft replies to patient portal messages.
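    To make the last of those concrete, here is a minimal sketch of how a practice might wire an LLM into the portal-reply workflow. It assumes the OpenAI Python client (openai >= 1.0); the model name, prompts, and helper function are illustrative assumptions of mine, not anything described in the editorial, and the output is strictly a draft for physician review.

```python
# Minimal sketch: drafting a reply to a patient portal message for
# physician review. Assumes the OpenAI Python client (openai >= 1.0);
# any comparable LLM API would work the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_portal_reply(patient_message: str) -> str:
    """Return a DRAFT reply; a physician must edit and approve it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name, not an endorsement
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft replies to patient portal messages for a "
                    "family medicine practice. Be clear and empathetic, "
                    "avoid giving specific medical advice, and note that "
                    "a clinician will follow up. Begin the reply with "
                    "the word DRAFT."
                ),
            },
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_portal_reply(
        "I've had a mild cough for three days. Should I come in?"
    )
    print(draft)  # the physician reviews and edits before sending
```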

    In the future, LLMs may be integrated into electronic health records to provide real-time clinical decision support, suggesting “diagnostic possibilities, recommended ancillary evaluations, and possible treatment strategies.” To minimize the risks of LLMs propagating biased data, generating misinformation, or usurping the family physician’s role on health care teams, Dr. Saguil advised family physicians to be actively involved in their implementation:

    The best defense against AI risks becoming realities is conscientious physicians guiding the development and implementation of LLMs into clinical care settings, pointing out what LLMs can do and what they cannot. In family medicine, no LLM can yet address a complex patient in a unique sociocultural situation with overlapping comorbidities and health states from the vantage point of a longitudinal relationship.

    A related FPM article by Dr. Steven Waldren, chief medical informatics officer at the American Academy of Family Physicians, explored other uses of LLMs in primary care: rewriting medical or legal forms for patients with lower health literacy or native languages other than English; summarizing information from a medical record, guideline, or research article; drafting referral letters, prior authorization requests, and insurance appeals; and populating clinical registries.

    Dr. Waldren recommended three safeguards when using AI in medical practice: using LLMs only “when the physician or other user is able to easily verify the accuracy of the AI output”; not entering protected health information or private organizational information into open online LLMs such as ChatGPT; and, for now, using LLMs only in low-risk (nonclinical) situations. Echoing Dr. Saguil, Dr. Waldren called on family physicians to “weigh in on the design, development, and deployment of AI in medicine to ensure it is more helpful than harmful to patients, primary care physicians, and practices.”
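    To show what those safeguards might look like in practice, below is a hedged sketch of the health-literacy rewriting task. The regex “scrub” helper is a toy illustration of the second safeguard, not an adequate de-identification method (a blank form template should contain no protected health information in the first place), and the model name and prompts are my assumptions, not Dr. Waldren’s.

```python
# Sketch: rewriting a patient-facing form at a lower reading level,
# a low-risk, nonclinical task (Dr. Waldren's third safeguard). The
# crude scrub below only illustrates the second safeguard; it is
# NOT a real de-identification method.
import re

from openai import OpenAI

client = OpenAI()

def scrub_obvious_identifiers(text: str) -> str:
    """Toy placeholder scrub for phone numbers, dates, and MRN-like IDs."""
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)
    text = re.sub(r"\bMRN[:#]?\s*\d+\b", "[MRN]", text)
    return text

def simplify_for_patients(form_text: str) -> str:
    """Return a plain-language draft of a form for physician verification."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following form at roughly a 6th-grade "
                    "reading level. Keep every instruction intact and "
                    "do not add new content."
                ),
            },
            {"role": "user", "content": scrub_obvious_identifiers(form_text)},
        ],
    )
    return response.choices[0].message.content
```

    Because the original and rewritten forms can be read side by side, the output is easy to verify, which is what keeps a task like this within Dr. Waldren’s first safeguard.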

