Who would you rather talk to about your health: a doctor or an AI-driven chatbot?
According to research from the UK's University of Westminster, the answer depends on what you want to talk about.
For the study, researchers from the University of Westminster and University College London conducted an online survey of UK citizens that specifically sought to determine how "perceived stigma and severity of various health issues are associated with the acceptability for three sources of health information and consultation: an automated chatbot, a General Practitioner (GP), or a combination of both."
What they found is that while “healthcare professionals are perceived as the most desired sources of health information, chatbots may be useful for sensitive health issues in which disclosure of personal information is challenging.”
In other words, for serious conditions such as cancer, patients would prefer to speak to a GP for health advice. On the other hand, chatbots can be a useful tool in sensitive situations such as sexually transmitted diseases, and they can even be used to encourage patients to bring their concerns to a human provider.
According to Dr. Tom Nadarzynski, the study's lead author at the University of Westminster, "Many AI developers need to assess whether their AI-based healthcare tools, such as symptom checkers or risk calculators, are acceptable interventions. Our research finds that patients value the opinion of healthcare professionals, therefore implementation of AI in healthcare may not be suitable in all cases, especially for serious illnesses."
In the study, the researchers found that "[c]hatbot acceptability is influenced by a person's effort expectancy of using a chatbot, facilitating conditions, social influences, price value, habit, compatibility and perceived access to the healthcare system. The quality of the chatbot content, the perceived accuracy of health information, and the sources underpinning the chatbot are associated with acceptability. Their acceptability is low due to perceived chatbot responsibility, liability, perceived chatbot competence. Patient safety has also been identified as an important factor associated with acceptability."
While the generally low acceptability of chatbots, despite their proven effectiveness, is likely to lead to "suboptimal uptake of this intervention," the researchers noted that chatbots can still be useful because they offer greater anonymity than a face-to-face GP consultation.
Consequently, they wrote, while "this technology as a stand-alone intervention may not be utilised by healthcare customers and should not be used as a substitute for credible health information source from a health professional . . . chatbots could be considered as an aid for doctor-patient communication for conditions with lower perceived stigma and severity."