
Your ChatGPT relationship status shouldn’t be complicated


The technology behind ChatGPT has been around for several years without attracting much attention. It was the addition of a chatbot interface that made it so popular. In other words, what captured the world’s attention was not a development in AI itself, but a change in the way AI interacted with people.

Very quickly, people started to think of ChatGPT as an autonomous social entity. This is not surprising. As early as 1996, Byron Reeves and Clifford Nass looked at the personal computers of their day and found that “equating mediated and real life is neither rare nor unreasonable. It’s very common, it’s easy to promote, it doesn’t depend on fancy media equipment, and thinking won’t make it go away.” In other words, people’s default expectation of technology is that it will behave and interact like a human being, even when they know it is “just a computer”. Sherry Turkle, an MIT professor who has studied artificial intelligence agents and robots since the 1990s, emphasizes the same point and argues that humanlike forms of communication, such as body language and verbal cues, “push our Darwinian buttons”: they have the ability to make us experience technology as social, even if we rationally understand that it is not.

If these scholars saw the social potential, and the risk, in decades-old computing interfaces, it is reasonable to assume that ChatGPT has a similar, and probably stronger, effect. It uses first-person language, retains the context of the conversation, and delivers answers in a compelling, confident, and conversational style. Bing’s implementation of ChatGPT even uses emojis. This is a big step up the social ladder from the more technical results one would get from searching, for example, on Google.

Critics of ChatGPT have focused on the harm its outputs can cause, such as misinformation and hateful content. But there are also risks in the mere choice of a social conversation style and in the AI’s attempt to emulate people as closely as possible.

The risks of social interfaces

New York Times reporter Kevin Roose got caught up in a two-hour conversation with the Bing chatbot that ended with the chatbot declaring its love, despite Roose repeatedly asking it to stop. This can be very disturbing for the user, and this type of emotional manipulation would be even more damaging for vulnerable groups, such as teenagers or people who have suffered bullying. Using human terminology and emotional cues, such as emojis, is also a form of emotional cheating. A language model like ChatGPT has no emotions. It does not laugh or cry. In fact, it does not even understand the meaning of such actions.

Emotional deception in AI agents is not only morally problematic; their humanlike design can also make such agents more persuasive. Technology that acts in a human way is likely to persuade people to act, even when the requests are irrational, come from a faulty AI agent, or arise in emergency situations. This persuasiveness is dangerous because companies can use it in ways that are unwanted by, or even unknown to, users, from convincing them to buy products to influencing their political views.

As a result, some have taken a step back. Robot design researchers, for example, have promoted a non-human approach as a way of lowering people’s expectations of social interaction. They suggest alternative designs that do not replicate the ways people interact, thus setting more appropriate expectations of a piece of technology.

Defining roles

Some of the risks of social interactions with chatbots can be addressed by designing clear social roles and boundaries for them. Humans choose and change roles all the time. The same person can alternate between their roles as parent, employee, or sibling. As people shift from one role to another, the context and the expected boundaries of interaction change with them. You wouldn’t use the same language when talking to your child as you would when talking to a coworker.

ChatGPT, by contrast, exists in a social vacuum. Although there are some red lines it tries not to cross, it has no clear social role or expertise. Nor does it have a specific goal or predefined intention. Perhaps this was a conscious choice by OpenAI, the creators of ChatGPT, to promote a multitude of uses, or a single entity that does it all. More likely, it reflects a lack of understanding of the social reach of conversational agents. Whatever the reason, this openness sets the stage for extreme and risky interactions. The conversation could take any route, and the AI could take on any social role, from efficient email assistant to obsessive lover.


