The influence of AI on trust in human interaction — ScienceDaily


As AI becomes more and more realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems affect our trust in the people we interact with.

In one scenario, a would-be scammer, believing he is calling an elderly man, is connected to a computer system that communicates via pre-recorded loops. The con artist spends considerable time attempting the fraud, patiently listening to the “man’s” somewhat confusing and repetitive stories. Oskar Lindwall, professor of communication at the University of Gothenburg, notes that it often takes people a long time to realize they are interacting with a technical system.

He has written, in collaboration with computer science professor Jonas Ivarsson, an article entitled "Suspicious Minds: The Problem of Trust and Conversational Agents," which explores how individuals interpret and relate to situations in which one party might be an AI agent. The article highlights the negative consequences of being suspicious of others, such as the damage it can cause in relationships.

Ivarsson gives the example of a romantic relationship in which trust issues arise, leading to jealousy and a greater tendency to look for evidence of cheating. The authors argue that being unable to fully trust a conversation partner's intentions and identity can result in excessive suspicion even when there is no reason for it.

Their study found that during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot.

The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like characteristics. While this can be attractive in some contexts, it can also be problematic, especially when it’s unclear who you’re communicating with. Ivarsson questions whether AI should have such human voices, as they create a sense of intimacy and lead people to form impressions based on voice alone.

In the case of the would-be scammer calling the "elderly man," the scam is only exposed after a long time, which Lindwall and Ivarsson attribute to the credibility of the human voice and the assumption that the confused behavior is due to age. Once an AI has a voice, we infer attributes such as gender, age, and socioeconomic background, making it harder to recognize that we are interacting with a computer.

The researchers propose creating AI with voices that are eloquent and work well but remain clearly synthetic, thereby increasing transparency.

Communication with others involves not only deception, but also the building of relationships and the joint construction of meanings. The uncertainty of whether one is talking to a human or a computer affects this aspect of communication. While it may not matter in some situations, such as cognitive behavioral therapy, other forms of therapy that require more human connection may be negatively affected.

Jonas Ivarsson and Oskar Lindwall analyzed data available on YouTube. They studied three types of conversations, along with audience reactions and comments. In the first type, a robot calls a person to schedule a hair appointment without the person on the other end knowing. In the second type, one person calls another person for the same purpose. In the third type, telemarketers are transferred to a computer system with pre-recorded voices.
