
ChatGPT can help doctors and hurt patients


“Medical knowledge and practices change and evolve over time, and there is no telling where in the timeline of medicine ChatGPT draws its information from when recommending a typical treatment,” he says. “Is that information recent, or is it dated?”

Users should also be wary of how ChatGPT-style bots can present fabricated, or “hallucinated,” information in a superficially fluent way, which can lead to serious errors if a person does not fact-check an algorithm’s responses. And AI-generated text can influence humans in subtle ways. A study published in January, which has not been peer-reviewed, posed ethical conundrums to ChatGPT and concluded that the chatbot is an inconsistent moral advisor that can influence human decision-making even when people know the advice is coming from AI software.

Being a doctor is much more than regurgitating encyclopedic medical knowledge. While many clinicians are enthusiastic about using ChatGPT for low-risk tasks like text summarization, some bioethicists worry that clinicians will turn to the bot for advice when facing a difficult ethical decision, such as whether surgery is the right choice for a patient with a low likelihood of survival or recovery.

“You can’t outsource or automate that kind of process to a generative AI model,” says Jamie Webb, a bioethicist at the Centre for Technomoral Futures at the University of Edinburgh.

Last year, Webb and a team of moral psychologists explored what it would take to build an AI-powered “moral advisor” for use in medicine, inspired by previous research that suggested the idea. Webb and his coauthors concluded that it would be difficult for such systems to reliably balance different ethical principles, and that doctors and other staff could suffer a “loss of moral skills” if they became too reliant on a bot instead of thinking through difficult decisions themselves.

Webb notes that doctors have been told before that language-processing AI would revolutionize their work, only to be disappointed. After Jeopardy! wins in 2010 and 2011, IBM’s Watson division turned to oncology and made claims about the efficacy of AI in fighting cancer. But that solution, initially dubbed “Memorial Sloan Kettering in a box,” wasn’t as successful in clinical settings as the hype would suggest, and in 2020 IBM shut the project down.

When the hype rings hollow, there could be long-lasting consequences. During a panel discussion at Harvard on the potential of AI in medicine in February, primary care physician Trishan Panch recalled seeing a colleague’s post on Twitter sharing the results of asking ChatGPT to diagnose an illness, shortly after the chatbot’s launch.

Excited doctors quickly responded with promises to use the technology in their own practices, Panch recalled, but around the 20th response, another doctor chimed in, saying all the model-generated references were fake. “It only takes one or two things like that to erode trust in the whole thing,” said Panch, who is a co-founder of healthcare software startup Wellframe.

Despite the AI’s sometimes glaring mistakes, Robert Pearl, formerly of Kaiser Permanente, remains extremely optimistic about language models like ChatGPT. He believes that in the next few years, language models in health care will become more like the iPhone, packed with features and power that can assist doctors and help patients manage chronic disease. He even suspects that language models like ChatGPT can help reduce the more than 250,000 deaths that occur annually in the US as a result of medical error.

Pearl does consider some things off-limits for AI. Helping people cope with grief and loss, end-of-life conversations with families, and discussions of procedures that carry a high risk of complications should not involve a bot, he says, because every patient’s needs are so variable that you have to have those conversations to get there.

“Those are person-to-person conversations,” says Pearl, predicting that what is available today is only a small percentage of the potential. “If I’m wrong, it’s because I’m overestimating the pace of improvement in the technology. But every time I look, it’s moving faster than even I thought.”

For now, he compares ChatGPT to a medical student: capable of caring for patients and pitching in, but everything it does must be reviewed by an attending physician.

