Can ChatGPT Help Clinicians Manage GERD?

Megan Brooks

July 17, 2023

The artificial intelligence chatbot ChatGPT can potentially be used as a source of information for patients, as well as an aid to clinicians managing gastroesophageal reflux disease (GERD), a study has found.

The researchers say the tool's conversational format could improve clinical efficiency and reduce the volume of patient messages and calls, potentially diminishing clinician burnout.

However, the inconsistencies and content errors observed mean a certain level of clinical oversight is still required, caution the researchers, led by Jacqueline Henson, MD, with the Division of Gastroenterology, Duke University School of Medicine, Durham, North Carolina.

The study was published online July 7 in the American Journal of Gastroenterology.

Putting ChatGPT to the GERD Test

Affecting nearly 30% of US adults, GERD is a common and increasingly complex condition to manage. AI technologies like ChatGPT (OpenAI/Microsoft) have demonstrated an increasing role in medicine, although the ability of ChatGPT to provide guidance for GERD management is uncertain.

Henson and colleagues assessed ChatGPT's ability to provide accurate and specific responses to questions regarding GERD care.

They generated 23 GERD management prompts based on published clinical guidelines and expert consensus recommendations. Five questions were about diagnosis, 11 about treatment, and seven about both diagnosis and treatment.

Each prompt was submitted to ChatGPT 3.5 (version 3/14/2023) three times on separate occasions without feedback to assess the consistency of the answer. Responses were rated by three board-certified gastroenterologists for appropriateness and specificity.
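For readers curious how this kind of repeated-prompt consistency check could be scripted, the sketch below shows one possible approach using the OpenAI Python package (pre-1.0 interface) and the gpt-3.5-turbo model. The study authors used the ChatGPT interface itself, and the prompt text and loop structure here are illustrative assumptions, not the authors' actual protocol.

```python
# Minimal sketch (illustrative only): submitting one GERD prompt several times,
# each as an independent request with no conversational feedback, to gauge
# response consistency. Assumes the pre-1.0 openai Python package and an
# OPENAI_API_KEY set in the environment. The prompt below is a hypothetical
# example, not one of the study's 23 prompts.
import openai

prompt = "What lifestyle changes are recommended for a patient with GERD?"  # hypothetical

responses = []
for _ in range(3):  # three independent submissions, mirroring the study design
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    responses.append(completion["choices"][0]["message"]["content"])

# Each response would then be rated separately (in the study, by three
# board-certified gastroenterologists) for appropriateness and specificity.
for i, text in enumerate(responses, start=1):
    print(f"--- Response {i} ---\n{text}\n")
```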

ChatGPT returned appropriate responses to 63 of 69 (91.3%) queries, with 29% considered completely appropriate and 62.3% mostly appropriate.

However, responses to the same prompt were often inconsistent: 16 of 23 (70%) prompts yielded responses of varying appropriateness, including three (13%) that drew both appropriate and inappropriate responses.

Prompts regarding treatment received the highest proportion of completely appropriate responses (39.4%), while prompts addressing both diagnosis and treatment had the highest proportion of mostly inappropriate responses (14.3%).

For example, the chatbot failed to recommend consideration of Roux-en-Y gastric bypass for ongoing GERD symptoms with pathologic acid exposure in the setting of obesity, and some potential risks associated with proton pump inhibitor therapy were stated as fact.

However, the majority (78.3%) of responses contained at least some specific guidance, especially for prompts assessing diagnosis (93.3%). In all responses, ChatGPT suggested contacting a healthcare professional for further advice.

Eight patients from a range of educational backgrounds provided feedback on the responses and generally felt that they were both understandable and useful.

Overall, ChatGPT "provided largely appropriate and at least some specific guidance for GERD management, highlighting the potential for this technology to serve as a source of information for patients, as well as an aid for clinicians," Henson and colleagues write.

However, "the presence of inappropriate responses with inconsistencies to the same prompt largely preclude its application within healthcare in its present state, at least for GERD," they add.

The study had no commercial funding. Henson has served as a consultant for Medtronic.

Am J Gastroenterol. Published online July 7, 2023.
