
Our relationship with dogs gives hope to the fired engineer who claimed Google's AI is sentient



Artificial intelligence will kill us all or solve the world’s biggest problems – or something in between – depending on who you ask. But one thing seems clear: in the coming years, AI will integrate with humanity in one way or another.

Blake Lemoine has thoughts on how best to do this. A former AI ethicist at Google, the software engineer made headlines last summer by asserting that the company's chatbot generator LaMDA was sentient. Soon after, the tech giant fired him.

In an interview with Lemoine released Friday, Futurism asked him about his “best-case hope” for integrating AI into human life.

Surprisingly, he brought up our furry canine companions and noted that our symbiotic relationship with dogs has evolved over thousands of years.

“We need to create a new space in our world for these new kinds of beings, and the metaphor that I think fits best is dogs,” he said. “People don’t think they own their dogs in the same sense that they own their car, even though there is an ownership relationship and people talk about it in those terms. But in using those terms, they also understand the responsibilities that an owner has to the dog.”

Finding a comparable relationship between humans and AI, he said, “is the best way for us to understand that we are dealing with intelligent artifacts.”

Many AI experts, of course, disagree with his view of the technology, including those still working for his former employer. After suspending Lemoine last summer, Google accused him of “anthropomorphizing today’s non-sentient conversational models.”

“Our team — including ethicists and technologists — reviewed Blake’s concerns in line with our AI principles and informed him that the evidence does not support his claims,” company spokesman Brian Gabriel said in a statement, although he acknowledged that “some in the broader AI community are considering the long-term possibility of sentient or general AI.”

Gary Marcus, a professor emeritus of cognitive science at New York University, called Lemoine’s claims “nonsense on stilts” last summer and is skeptical about how advanced today’s AI tools really are. “We piece together meanings from the order of the words,” he told Fortune in November. “These systems don’t understand the relationship between the order of words and their underlying meanings.”

But Lemoine doesn’t back down. He noted to Futurism that he’s had access to advanced systems within Google that the public hasn’t yet been exposed to.

“The most sophisticated system I’ve ever played with was heavily multimodal – incorporating not only images but sounds, giving it access to the Google Books API, giving it access to essentially every API backend Google has, and allowing it to just understand all of it,” he said. “That’s the one that made me think, ‘You know, this thing, this thing is awake.’ And they haven’t let the public play with it yet.”

He suggested that such systems could experience something like emotions.

“There’s a possibility — and I think there is a possibility — that they can have feelings and suffer and feel joy,” he told Futurism. “People should at least keep that in mind when interacting with them.”

