
Parents Trust ChatGPT More Than Doctors, New Study Finds

A new study found that parents are more likely to trust ChatGPT than doctors when it comes to healthcare information for their children.

[Photo of a dad and daughter on a computer by NanoStockk]

Parents are more likely to trust ChatGPT than a doctor when it comes to their children’s health. A new study from the University of Kansas Life Span Institute, published in the Journal of Pediatric Psychology, revealed the surprising faith parents place in artificial intelligence.

The study sought to find out whether text generated by ChatGPT under the supervision of a healthcare expert was considered as trustworthy as text written by an expert. To do so, the researchers surveyed 116 parents aged 18 to 65.


“When we began this research, it was right after ChatGPT first launched—we had concerns about how parents would use this new, easy method to gather health information for their children,” lead author Calissa Leslie-Miller said. “Parents often turn to the internet for advice, so we wanted to understand what using ChatGPT would look like and what we should be worried about.”

Participants first completed a baseline assessment of their behavioral intentions regarding pediatric healthcare topics. Afterwards, they rated text generated either by an expert or by ChatGPT under the supervision of an expert.

The Study’s Shocking Findings

The study found that prompt-engineered ChatGPT output is capable of influencing parents’ behavioral intentions around medication, sleep, and diet decision-making.

It also found “little distinction” between ChatGPT and expert content in perceived morality, trustworthiness, expertise, accuracy, and reliance. Where differences did emerge, however, ChatGPT’s text was rated higher in trustworthiness and accuracy than the expert-written information.

Participants also indicated that they would be more likely to rely on information from ChatGPT than on information from an expert.

“This outcome was surprising to us, especially since the study took place early in ChatGPT’s availability,” Leslie-Miller said. “We’re starting to see that AI is being integrated in ways that may not be immediately obvious, and people may not even recognize when they’re reading AI-generated text versus expert content.”

Why the Study’s Findings Raise Concerns

On top of being surprising, the study’s findings are a cause for concern among researchers.

“During the study, some early iterations of the AI output contained incorrect information. This is concerning because, as we know, AI tools like ChatGPT are prone to ‘hallucinations’—errors that occur when the system lacks sufficient context,” Leslie-Miller said. “In child health, where the consequences can be significant, it’s crucial that we address this issue. We’re concerned that people may increasingly rely on AI for health advice without proper expert oversight.”

Leslie-Miller went on to note that, despite how participants rated the texts, “there are still differences in the trustworthiness of sources.” With that in mind, she said that parents who do turn to AI for information should look for a source “that’s integrated into a system with a layer of expertise that’s double-checked.”

“I believe AI has a lot of potential to be harnessed. Specifically, it is possible to generate information at a much higher volume than before,” she said. “But it’s important to recognize that AI is not an expert, and the information it provides doesn’t come from an expert source.”