Engineers forever strive to make our interactions with AI more human-like, but a new study suggests a personal touch isn’t always welcome.
Researchers from Penn State and the University of California, Santa Barbara found that people are less likely to follow the advice of an AI doctor that knows their name and medical history.
Their two-phase study randomly assigned participants to chatbots that identified themselves as either AI, human, or human assisted by AI.
The first part of the study was framed as a visit to a new doctor on an e-health platform.
The 295 participants were first asked to fill out a health form. They then read a short description of the doctor they were about to meet.
The doctor then entered the chat and the interaction began.
Each chatbot was programmed to ask eight questions about COVID-19 symptoms and behaviours. Finally, they offered diagnosis and recommendations based on the CDC Coronavirus Self-Checker.
Around 10 days later, the participants were invited to a second session. Each of them was matched with a chatbot with the same identity as in the first part of the study. But this time, some were assigned to a bot that referred to details from their previous interaction, while others were allocated a bot that made no reference to their personal information.
After the chat, the participants were given a questionnaire to evaluate the doctor and their interaction. They were then told that all the doctors were bots, regardless of their professed identity.
Diagnosing AI
The study found that patients were less likely to heed the advice of AI doctors that referred to personal information — and more likely to consider the chatbot intrusive. However, the reverse pattern was observed in views on chatbots that were presented as human.
Per the study paper:
In line with the uncanny valley theory of mind, it could be that individuation is viewed as being unique to human-human interaction. Individuation from AI is probably viewed as a pretense, i.e., a disingenuous attempt at caring and closeness. On the other hand, when a human doctor does not individuate and repeatedly asks patients’ name, medical history, and behavior, individuals tend to perceive greater intrusiveness which leads to less patient compliance.
The findings about human doctors, however, come with a caveat: 78% of participants in this group thought they’d interacted with an AI doctor. The researchers suspect this was due to the chatbots’ mechanical responses and the lack of a human presence on the interface, such as a profile photo.
Ultimately, the team hopes the research leads to improvements in how medical chatbots are designed. It could also offer pointers on how human doctors should interact with patients online.
You can read the study paper here.
Story by Thomas Macaulay
Thomas is a senior reporter at TNW. He covers European tech, with a focus on AI, cybersecurity, and government policy.