Patients are split on getting health care from artificial intelligence

"While many patients appear resistant to the use of AI, accuracy of information, nudges and a listening patient experience may help increase acceptance," says Marvin J. Slepian. (Credit: Getty Images)

About 52% of participants in a new study would choose a human doctor rather than AI for diagnosis and treatment.

Artificial intelligence-powered medical treatment options are on the rise and have the potential to improve diagnostic accuracy.

The findings, published in PLOS Digital Health, show, however, that many patients aren’t convinced the diagnoses provided by AI are as trustworthy as those delivered by human medical professionals.

“While many patients appear resistant to the use of AI, accuracy of information, nudges, and a listening patient experience may help increase acceptance,” says Marvin J. Slepian, professor of medicine at the University of Arizona College of Medicine-Tucson. He is referring to the study’s other primary finding: that a human touch can help clinical practices use AI to their advantage and earn patients’ trust.

“To ensure that the benefits of AI are secured in clinical practice, future research on best methods of physician incorporation and patient decision making is required.”

For the study, the researchers placed participants in scenarios as mock patients and asked whether they would prefer an AI system or a human doctor for diagnosis and treatment, and under what circumstances.

In the first phase, the researchers conducted structured interviews with actual patients, testing their reactions to current and future AI technologies. In the second phase of the study, the researchers polled 2,472 participants across diverse ethnic, racial, and socioeconomic groups using a blinded, randomized survey that tested eight variables.

Overall, participants were almost evenly split, with just over 52% preferring human doctors and approximately 47% choosing an AI diagnostic method. When participants were told that their primary care physician considered AI superior or a helpful adjunct to diagnosis, or were otherwise nudged to view AI favorably, their acceptance of AI on re-questioning increased. This signals the significant role human physicians play in guiding patients’ decisions.

Disease severity (leukemia versus sleep apnea) did not affect participants’ trust in AI. Black participants selected AI less often than white participants, while Native American participants selected it more often. Older participants were less likely to choose AI, as were those who self-identified as politically conservative or viewed religion as important.

The racial, ethnic, and social disparities identified suggest that different groups will require tailored sensitivity and attention when informing them about the value and utility of AI in enhancing diagnosis.

“I really feel this study has the import for national reach. It will guide many future studies and clinical translational decisions even now,” Slepian says. “The onus will be on physicians and others in health care to ensure that information that resides in AI systems is accurate, and to continue to maintain and enhance the accuracy of AI systems as they will play an increasing role in the future of health care.”

Additional coauthors are from the University of Texas at Arlington, the James E. Rogers College of Law, the University of Utah, and the University of Arizona.

The National Institutes of Health funded the study.

Source: University of Arizona