Artificial Intelligence Might Show More Empathy Than Healthcare Workers

A new study published in JAMA Internal Medicine suggests that AI platforms like ChatGPT may show more empathy when talking to patients than human providers do. Many facilities and doctors' offices are incorporating the bot into their practices as a way of simplifying menial tasks, such as filing paperwork, which can reduce burnout. But it turns out the robot also has a pretty good bedside manner. It can't replace the physician, but it is surprisingly good at answering patient questions.

Researchers at the University of California San Diego, La Jolla sampled roughly 200 patient questions from the Reddit forum AskDocs, each of which had already been answered by a verified physician. The same questions were then posed to ChatGPT, and a clinical team reviewed both sets of responses, rating them for quality and for the perceived level of empathy.

ChatGPT won in a landslide. On average, the chatbot scored 21% higher than physicians on the quality of its responses, and its replies were rated 41% more empathetic than those of the human doctors. Over a quarter of the physicians' responses were deemed insufficient by the clinical team, compared to just 3% of the chatbot's.

In one example mentioned in the study, a woman wrote on Reddit that she was worried about going blind after splashing bleach in her eye. ChatGPT started by sympathizing with her over the scare, then added another seven sentences reassuring her that it was highly "unlikely" she would go blind. The doctor, by contrast, replied, "Sounds like you'll be fine," and directed her to the phone number for Poison Control.

The robot's responses were usually much longer than the doctors', which may reflect how little time physicians can spare for written replies. The AI generates responses instantly and is immune to burnout.

“Without controlling for the length of the response, we cannot know for sure whether the raters judged for style (e.g., verbose and flowery discourse) rather than content,” wrote Mirella Lapata, professor of natural language processing at the University of Edinburgh.

A longer response may be reassuring to some patients, but more information isn't always the best medicine. Experts warn the bot may generate a lengthy response even when one isn't necessary. Dr. David Asch, a professor of medicine and senior vice dean at the University of Pennsylvania, recently asked ChatGPT how it could be useful in healthcare and found the answers thorough but verbose.

“It turns out ChatGPT is sort of chatty,” he said. “It didn’t sound like someone talking to me. It sounded like someone trying to be very comprehensive.”

Asch said he was glad the bot could answer his question, but he doesn't think it's ready to care for patients on its own, given the risk of error.

“I think we worry about the garbage in, garbage out problem. And because I don’t really know what’s under the hood with ChatGPT, I worry about the amplification of misinformation. I worry about that with any kind of search engine,” he said. “A particular challenge with ChatGPT is it really communicates very effectively. It has this kind of measured tone, and it communicates in a way that instills confidence. And I’m not sure that that confidence is warranted.”

In another study, researchers compared postoperative care instructions for eight common pediatric procedures from three sources: ChatGPT, Google Search, and Stanford University's own patient materials. Stanford's materials received the highest score. ChatGPT and Google both scored around 80% for understandability, while Google outscored ChatGPT on actionability (83% vs. 73%).

“ChatGPT provides direct answers that are often well-written, detailed, and in if-then format, which give patients access to immediate information while waiting to reach a clinician,” the researchers concluded.

But Asch thinks the tool is better used to support a doctor's diagnosis than to replace it.

“I have a very optimistic sense of this, but it’s all predicated on operating within the guardrails of truth. And at the moment, I don’t know that guardrails of truth exist in the way in which ChatGPT constructs its answers,” he said.

D Del Rey
