Why the “godfather” of AI, Geoffrey Hinton, left Google to talk about the risks


When Geoffrey Hinton raised an ethical objection to his employer Google working with the US military in 2018, he didn’t join the public outcry or put his name on the open letter of complaint signed by more than 4,000 of his colleagues.

Instead, he just spoke to Sergey Brin, co-founder of Google. “He said he was a little upset about it too. And so they’re not pursuing it,” Hinton said in an interview at the time.

The incident is symbolic of Hinton’s quiet influence in the artificial intelligence world. The 75-year-old professor is revered as one of the “godfathers” of AI due to his seminal work in deep learning, an AI field that has driven the huge advances taking place in the industry.

But the anecdote also reflects Hinton’s loyalty, according to those who know him well. On principle, he has never publicly aired corporate, ethical or other grievances.

It was this sense of loyalty that led him to step down from his role as vice president and engineering fellow at Google last week, so that he could speak more freely about his growing fears about AI’s risks to humanity.

Yoshua Bengio, his longtime collaborator and friend, who won the Turing Award along with Hinton and Yann LeCun in 2018, said he saw the resignation coming. “He could have stayed at Google and talked, but his sense of loyalty is such that he wouldn’t,” Bengio said.

Hinton’s resignation follows a series of groundbreaking AI launches over the past six months, starting with ChatGPT from the Microsoft-backed OpenAI in November and Google’s chatbot, Bard, in March.

Hinton expressed concern that the race between Microsoft and Google could push the development of artificial intelligence forward without proper guardrails and regulation in place.

“I think Google was very responsible in the beginning,” he said in a speech at an EmTech Digital event Wednesday, after his resignation was made public. “Once OpenAI had built similar things using . . . money from Microsoft and Microsoft decided to release it, Google didn’t have much of a choice. If you’re going to live in a capitalist system, you can’t stop Google from competing with Microsoft.”

Since the 1970s, Hinton has pioneered the development of “neural networks,” a technology that attempts to mimic how the brain works. It now powers most of the AI tools and products we use today, from Google Translate and Bard to ChatGPT and autonomous cars.

But this week, he acknowledged fears that its rapid development could lead to disinformation flooding the public sphere and to AI usurping more human jobs than expected.

“My concern is that it will [make] the rich richer and the poor poorer. As you do that . . . society becomes more violent,” Hinton said. “This technology, which is supposed to be wonderful . . . is developing in a society that isn’t designed to use it for the good of all.”

Hinton also sounded alarm bells about the long-term threats posed by AI systems to humans if the technology were given too much leeway. He had always believed that this existential risk was far away, but he has recently recalibrated his thinking on its urgency.

“It is quite conceivable that humanity is a passing stage in the evolution of intelligence,” he said. Hinton’s decision to leave Google after a decade was spurred on by an academic colleague who convinced him to talk about it openly, he added.

Born in London, Hinton comes from a distinguished lineage of scientists. He is the great-great-grandson of British mathematicians Mary and George Boole, the latter of whom invented Boolean logic, the theory underpinning modern computer science.

As a cognitive psychologist, Hinton’s work in AI has aimed to approximate human intelligence, not just to build AI technology, but to illuminate how our brains work.

Stuart Russell, a professor of artificial intelligence at the University of California, Berkeley, an academic colleague of Hinton’s, said his background meant he was “not the most mathematical person you’ll find in the machine learning community.”

He pointed to Hinton’s big breakthrough in 1986 when he published a paper on a technique called “backpropagation,” which showed how computer software could learn over time.

“It was clearly a seminal document,” Russell said. “But he didn’t derive the . . . rule as a mathematician would. He used his intuition to figure out a method that would work.”
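In rough terms, backpropagation runs an input forward through the network, measures how far the output is from the desired answer, and then passes that error backwards to nudge every connection weight in the direction that reduces it. The following is a minimal illustrative sketch in Python; it is not Hinton’s code, and the tiny network, the XOR task, the layer sizes and the learning rate are assumptions chosen purely for brevity.

import numpy as np

# Minimal illustrative sketch of backpropagation (not Hinton's original code).
# A two-layer network learns XOR; all sizes and the learning rate are
# arbitrary choices made for this example.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # inputs
y = np.array([[0.], [1.], [1.], [0.]])                   # targets (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)            # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)            # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: send the output error back through the layers
    # (derivatives of squared error through the sigmoid units).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Update every weight a small step in the error-reducing direction.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # outputs should approach [0, 1, 1, 0] as the network learns XOR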

Hinton has not always been publicly vocal about his ethical views, but privately he has made them clear.

In 1987, when he was an associate professor at Carnegie Mellon University in the United States, he decided to leave his position and emigrate to Canada.

One reason he gave, according to Bengio, was ethical: He was concerned about the use of technology, especially artificial intelligence, in warfare, and much of his funding came from the US military.

“He wanted to feel good about the funding he had gotten and the work he was doing,” Bengio said. “He and I share the same values: that human beings matter, that the dignity of all human beings is essential, and that everyone should benefit from the advances science is creating.”

In 2012, Hinton and two of his graduate students at the University of Toronto, including Ilya Sutskever, now a co-founder of OpenAI, made a major breakthrough in the field of computer vision. They built neural networks that could recognize objects in images orders of magnitude more accurately than ever before. Based on this work, they founded their first start-up, DNNresearch.

Their company, which didn’t produce any products, was sold to Google for $44 million in 2013, after a competitive auction in which China’s Baidu, Microsoft and DeepMind bid to acquire the trio’s expertise.

Since then, Hinton has spent half of his time at Google and the other half as a professor at the University of Toronto.

According to Russell, Hinton is constantly having new ideas and trying new things. “Every time he had a new idea, at the end of his talk he’d say, ‘And this is how the brain works!’”

When asked onstage if he regretted his life’s work, as it could contribute to the myriad harms he’d outlined, Hinton said he was mulling it over.

“This stage of [AI] was not foreseeable. And until recently I thought this existential crisis was a long way off,” he said. “So I really have no regrets about what I did.”

