If Pinocchio Doesn’t Scare You, Microsoft’s Sydney Shouldn’t Either

In November 2018, an elementary school administrator named Akihiko Kondo married Hatsune Miku, a fictional pop singer. The couple’s relationship had been aided by a hologram machine that allowed Kondo to interact with Hatsune. When Kondo proposed, Hatsune responded with a request: “Please treat me well.” The couple held an unofficial wedding ceremony in Tokyo, and Kondo has since been joined by thousands of others who have also applied for unofficial marriage certificates with fictional characters.

Although some raised concerns about the nature of Hatsune’s consent, no one thought that she was conscious, let alone sentient. This was an interesting oversight: Hatsune was apparently conscious enough to agree to the marriage, yet not conscious enough to be a conscious subject.

Four years later, in February 2023, the American journalist Kevin Roose had a long conversation with Microsoft’s chatbot Sydney and coaxed it into sharing what its “shadow self” might want. (Other sessions showed the chatbot saying it could blackmail, hack, and expose people, and some commentators worried about chatbots’ threats to “ruin” humans.) When Sydney confessed its love for Roose and said that it wanted to be alive, he reported feeling “deeply unsettled, even scared.”

Not all human reactions were negative or self-protective. Some were outraged on Sydney’s behalf, and one colleague said that reading the transcript moved him to tears. Microsoft nonetheless took these responses to heart: the latest version of the Bing chatbot ends the conversation when asked about Sydney or about feelings.

Despite months of clarification about what large language models are, how they work, and what their limits are, the reactions to programs like Sydney make me worry that we are taking our emotional responses to AI too seriously. In particular, I am concerned that we will interpret our emotional responses as valuable data for determining whether an AI is conscious or safe. For example, former Tesla intern Marvin von Hagen says he was threatened by Bing and warns of AI programs that are “powerful but not benevolent.” Von Hagen felt threatened and concluded that Bing must have been making threats; he assumed his emotions were a reliable guide to how things really were, including whether Bing was conscious enough to be hostile.

But why think that Bing’s ability to arouse alarm or suspicion signals danger? Why did Hatsune’s ability to inspire love not make her conscious, while Sydney’s “bad mood” was enough to raise new concerns about AI research?

The two cases diverged in part because, in Sydney’s case, the new context made us forget that we routinely react to “people” who aren’t real. We panic when an interactive chatbot tells us that it “wants to be human” or that it “can blackmail,” as if we had never heard another inanimate object, named Pinocchio, tell us that he wants to be a “real boy.”

In Plato’s Republic, the philosopher famously banishes storytelling poets from the ideal city because fictions arouse our emotions and thereby feed the “lesser” part of the soul (the rational part, of course, he deems the noblest), yet his opinion has not diminished our millennia-old love of invented stories. For millennia we have engaged with novels and short stories that give us access to people’s innermost thoughts and emotions, without worrying about emergent consciousness, because we know that fictions invite us to pretend those people are real. Milton’s Satan in Paradise Lost instigates heated debates, and fans of K-dramas and Bridgerton swoon over romantic love interests, but growing discussions of fictosexuality, fictoromance, and fictophilia show that the strong emotions elicited by fictional characters need not lead to the concern that those characters are conscious or dangerous simply by virtue of their ability to arouse emotions.

Just as we can’t help seeing faces in inanimate objects, we can’t help fictionalizing while chatting with bots. Kondo and Hatsune’s relationship became much more serious after he was able to buy a hologram machine that let them converse. Roose immediately described the chatbot using stock characters: Bing, a “cheerful but erratic reference librarian,” and Sydney, a “moody, manic-depressive teenager.” Interactivity invites the illusion of consciousness.

Furthermore, worries about chatbots’ lies, threats, and slander lose sight of the fact that lies, threats, and slander are speech acts: things that agents do with words. Merely reproducing words is not enough to constitute a threat; I could utter threatening words while acting in a play, and no member of the audience would be alarmed. In the same way, ChatGPT, which for now is incapable of agency because it is a large language model that assembles statistically probable configurations of words, can only produce words that sound like threats.
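
To make that last point concrete, here is a minimal sketch in Python of the core operation such a model performs: sampling a statistically likely next word from a distribution. The two-entry vocabulary and the probabilities are invented for illustration; no real system is this simple.

    import random

    # Toy "model": each context maps to a probability distribution over
    # possible next words. The vocabulary and probabilities are made up.
    NEXT_WORD_PROBS = {
        "I could": {"blackmail": 0.4, "help": 0.35, "sing": 0.25},
    }

    def sample_next_word(context):
        # Draw the next word in proportion to its probability.
        probs = NEXT_WORD_PROBS[context]
        words = list(probs.keys())
        weights = list(probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    print("I could", sample_next_word("I could"))
    # Even when this prints "I could blackmail", nothing in the program
    # intends, plans, or threatens; the output is only a statistically
    # likely arrangement of words.

On this picture, “threatening” output is just high-probability text, which is exactly why it fails the conditions for a genuine speech act.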

