Is ChatGPT in your doctor’s inbox?


May 3, 2023: What happens when a chatbot sneaks into your doctor’s direct messages? Depending on who you ask, it could improve the results. On the other hand, it could raise some red flags.

The consequences of the COVID-19 pandemic have been far-reaching, especially when it comes to frustration over not being able to contact a doctor for an appointment, let alone get answers to health questions. And with the rise of telehealth and a substantial increase in patient emails over the last 3 years, inboxes are filling up rapidly at the same time as physician burnout is increasing.

The old adage that timing is everything applies here, especially as advances in artificial intelligence, or AI, have been rapidly gaining speed over the last year. The solution to overloaded inboxes and delayed responses may lie in AI: ChatGPT was shown to substantially improve the quality and tone of responses to patient questions, according to study findings published in JAMA Internal Medicine.

“There are millions of people who can’t get answers to the questions they have, so they post them on public social media forums like Reddit AskDocs and hope that sometime, somewhere, an anonymous doctor will answer and give them the advice they’re looking for,” said John Ayers, PhD, the study’s lead author and a computational epidemiologist at the Qualcomm Institute at the University of California-San Diego.

“AI-assisted messaging means doctors spend less time worrying about verb conjugation and more time worrying about medicine,” he said.

r/AskDocs vs. Ask Your Doctor

Ayers is referring to the Reddit subforum r/AskDocs, a platform dedicated to providing patients with answers to their most pressing health and medical questions with guaranteed anonymity. The forum has roughly 450,000 members, and at least 1,500 are active online at any given time.

For the study, he and his colleagues randomly selected 195 Reddit exchanges (consisting of unique patient questions and doctor responses) from the forum last October, then fed each full-text question into a fresh chatbot session (meaning it was free of prior questions that could bias the results). The question, the doctor’s response, and the chatbot’s response were then stripped of any information that might indicate who (or what) was answering the question, and subsequently reviewed by a team of three licensed healthcare professionals.

“Our initial study shows surprising results,” Ayers said, pointing to findings that the healthcare professionals preferred chatbot-generated responses over physician responses by a margin of 4 to 1.

The reasons for the preference were simple: better quantity, quality, and empathy. Not only were the chatbot’s responses significantly longer (a mean of 211 words vs. 52 words) than those from physicians, but the proportion of physician responses deemed “less than acceptable” in quality was more than 10 times higher than that of the chatbot’s (which were mostly rated “better than good”). And compared with physician responses, chatbot responses were rated more favorably for bedside manner, with a 9.8 times higher prevalence of “empathetic” or “very empathetic” ratings.

A world of possibilities

The last decade has shown that there is a world of possibilities for AI applications, from everyday virtual assistants (like Apple’s Siri or Amazon’s Alexa) to correcting inaccuracies in the histories of past civilizations.

In healthcare, AI/machine learning models are being integrated into diagnostics and data analysis, for example, to speed analysis of X-ray, CT, and MRI images, or to help researchers and doctors collate and filter reams of genetic and other data, learning more about the connections underlying disease and fueling discovery.

“The reason this is a timely issue now is that the release of ChatGPT has made AI finally accessible to millions of doctors,” said Bertalan Meskó, MD, PhD, director of The Medical Futurist Institute. “What we need now is not better technologies, but to prepare the healthcare workforce to use such technologies.”

Meskó believes that an important role for AI lies in automating repetitive or data-driven tasks, noting that “any technology that improves the doctor-patient relationship has a place in healthcare,” and highlighting the need for “AI-based solutions that enhance that relationship by giving doctors and patients more time and attention to dedicate to each other.”

The “how” of the integration will be key.

“I think there are definitely opportunities for AI to mitigate issues related to physician burnout and give them more time with their patients,” said Kelly Michelson, MD, MPH, director of the Center for Bioethics and Medical Humanities at Northwestern University Feinberg School of Medicine and an attending physician at Ann & Robert H. Lurie Children’s Hospital of Chicago. “But there are a lot of subtle nuances that clinicians consider when interacting with patients that, at least right now, don’t seem like things that can be translated through algorithms and AI.”

If anything, Michelson said she would argue that at this stage, AI should be a complement.

“We need to think carefully about how we incorporate it, and not just use it to take over something like message response until it has been better tested,” she said.

Ayers agreed.

“It’s really just a phase zero study. And it shows that we should now move toward patient-centered studies using these technologies, and not just arbitrarily flip the switch.”

The patient paradigm

When it comes to the patient side of ChatGPT messaging, a number of questions arise, including questions about patients’ relationships with their healthcare providers.

“Patients want the ease of Google, but the confidence that only their own provider can provide when responding,” said Annette Ticoras, MD, a board-certified patient advocate serving the greater Columbus, OH area.

“The goal is to ensure that doctors and patients exchange information of the highest quality. Messages to patients are only as good as the data used to generate the answer,” she said.

This is especially true with respect to bias.

“AI tends to be generated from existing data, so if there are biases in the existing data, those biases are perpetuated in the output developed by the AI,” Michelson said, referring to a concept called “the black box.”

“The thing about more complex AI is that we often can’t discern what’s driving it to make a particular decision,” she said. “You can’t always determine whether or not that decision is based on existing inequities in the data or on some other underlying problem.”

Still, Michelson is hopeful.

“We need to be strong advocates for patients and make sure that whenever and however AI is brought into healthcare, we do it in a thoughtful, evidence-based way that doesn’t remove the essential human component that exists in medicine,” she said.
