Medical technology and the human touch
Chief Medical Information Officer at Stanford Medicine Children’s Health
As a pediatrician and Chief Medical Information Officer (CMIO), I have observed the tension between technology and humanism in medicine up close. Technological advancements such as electronic health records (EHRs) have improved data accessibility and enabled computerized clinical decision support and advanced analytics. However, the humanistic aspect of medicine has been inadvertently disrupted. Healthcare providers must spend more time at computers on data entry, documentation, and administrative tasks, and they feel pulled away from direct patient interactions.
What will be the impact of large language models (LLMs) like ChatGPT on medical practice, healthcare providers’ experiences, and individual and community health outcomes? Technology’s paradoxical role in medicine is its capacity to both enhance and diminish humanistic care. There’s hope LLMs will offer solutions to some of the problems of previous generations of health technology. By streamlining documentation processes, automating administrative tasks, and making relevant information more easily available, LLMs could free providers to focus more time on cultivating vital relationships with patients and their families.
Since ChatGPT’s unveiling, myriad promising applications in healthcare have emerged. Physicians and other healthcare stakeholders eagerly share potential use cases on blogs, listservs, and social media. Doctors could generate insurance response letters in seconds and answer patient inquiries in their overflowing inboxes. They could get assistance in generating a broad differential diagnosis with the associated workup and possible therapeutic options. Amidst the excitement, countless health tech companies are rushing to integrate LLMs into their services.
The question is how much of this is hype.
Does this tool have genuine potential to augment our approach to healthcare and improve human well-being? To maximize its transformative power, we must continue to explore appropriate use cases and identify and mitigate limitations like “hallucinations”, misinformation, and bias. We must be thoughtful about incorporating this tool in a way that protects and enables human connection.
There are reasons to be optimistic. We have early indications that this technology may have significant advantages over its predecessors in medicine. LLMs’ interactivity and ability to contextualize a question create a new paradigm for clinical decision support that augments healthcare providers’ cognitive processes. Unlike traditional systems, ChatGPT considers the context of a query, delivering tailored responses that aid healthcare professionals in assessing clinical situations and understanding the implications of treatment options. As the model is continually updated and trained on new data, it may help providers stay current and consistent with the latest medical research and guidelines. Because the model is interactive and stochastic, it can generate alternative perspectives, incorporate information across multiple disciplines, and suggest several possible approaches to clinical problems. Traditional clinical decision support systems have had the unintended consequence of automation complacency. The interactive, generative nature of LLMs may encourage critical and creative thinking about treatment strategies that will best meet the needs of individual patients.
Similarly, by engaging patients and families in interactive conversations, ChatGPT may encourage active learning and participation in developing personalized health plans. This approach cultivates critical thinking and problem-solving skills as patients seek answers, explore concepts, and ask for clarification. By presenting information in an accessible, understandable way, these models could empower patients and families, strengthening connections with their providers as they explore options together. They could promote a more patient-centered approach to medicine.
Because LLMs are trained on wide-ranging data sets, they may provide opportunities to span disciplines beyond healthcare and address health and its social determinants more holistically. As a pediatrician, I can imagine these tools fostering closer collaboration between education and health systems to enable early identification of developmental delays or other conditions that impact learning. They could potentially augment educators’ ability to assess students for issues such as ADHD, dyslexia, or mental health problems and facilitate prompt referral to appropriate healthcare providers.
We are just beginning to understand the potential of these tools to transform the way we approach healthcare. As we integrate ChatGPT and other LLMs into healthcare practices, it is crucial to fully understand the capabilities and limitations of these new tools. We must recognize and mitigate potential unintended consequences and integrate new technologies in a way that preserves the compassionate care at the core of our profession. With the proper application of these tools, we can hope to achieve a more harmonious coexistence of technology and human touch. We can create an environment where providers and patients feel acknowledged, understood, and supported. By doing so, we can build a more resilient, responsive, and patient-centered healthcare system that nurtures the well-being of individuals and communities alike.
Natalie Pageler
Dr. Natalie Pageler is Clinical Professor of Pediatric Critical Care at Stanford University School of Medicine and Chief Medical Information Officer at Stanford Medicine Children’s Health. Dr. Pageler is also the Program Director and co-founder of the Stanford Clinical Informatics Fellowship, one of the first ACGME-accredited fellowships in the nation.