Why containing artificial intelligence may be nearly impossible, according to AI pioneer Mustafa Suleyman

When Mustafa Suleyman co-founded DeepMind over a decade ago, his goal of building a machine capable of replicating human intelligence seemed ambitious. Now, he believes rapid progress in artificial intelligence means that goal could be achieved in the near future, as he argues in his new book, “The Coming Wave.”

In an interview with Marketplace’s Lily Jamali, Suleyman, now CEO and co-founder of Inflection AI, explores the idea of containment, which he calls the central challenge of the next few decades. He advocates for containment while acknowledging its difficulty, given the nature of technological progress: as AI becomes cheaper, more powerful, and more widely accessible, more people will be able to experiment with it in potentially dangerous ways. He contrasts this with nuclear technology, which is expensive, complicated, and tightly controlled, whereas AI software is available to millions.

Suleyman also questions the relevance of the Turing test, arguing that conversational ability alone does not indicate intelligence, and proposes a “modern Turing test” that measures an AI’s capabilities in pursuing high-level goals. Though a self-described default optimist, his perspective shifted as he researched the history of containment and found that saying no to a technology has been rare. He argues for introducing friction and human oversight so that AI development remains accountable to democratic governments and the public interest, and he notes the recent cooperation among major AI companies on safeguards following their meeting with President Joe Biden.

When Mustafa Suleyman co-founded the artificial intelligence research firm DeepMind more than a decade ago, his goal seemed ambitious, even a little far-fetched: to build a machine capable of replicating human intelligence.

Now, he says, rapid progress in the development of artificial intelligence means the goal could be achieved in the near future, perhaps within three years, and the implications of this milestone are enormous.

Suleyman explores these implications in his new book, “The Coming Wave,” released this week. Marketplace’s Lily Jamali spoke with Suleyman, now CEO and co-founder of Inflection AI, about a central theme of his book: the idea of containment.

The following is an edited transcript of their conversation.

Mustafa Suleyman: The idea of containment is that we should always have the ability to slow down or potentially even stop any technology completely at any point in its development or deployment. It seems like a simple and reasonable idea. Who wouldn’t want our species to always have control and oversight over the things we invent? But it is, in my view, the great challenge of the next few decades, precisely because of the pace of change with artificial intelligence and synthetic biology and how quickly things are getting better.

Lily Jamali: In your book, at times, it sounds like you’re advocating containment while also arguing that, to some extent, containment is impossible. Is that a fair assessment of the argument you are making?

Suleyman: Yes, exactly. I think when you look at the history of all technologies, things get cheaper and easier to use and spread far and wide. Everything from the ax to the discovery of fire to the invention of steam and electricity has gotten cheaper and easier to use over time, and everyone can access it. If that’s the nature of technology, sort of the law of technology, then that raises some pretty complicated questions about where we’re going to end up in the next few decades.

Mustafa Suleyman (Courtesy Hiltzik Strategies)

Jamali: Tell me about that timeline. In your opinion, what will happen in the next 10 years or so?

Suleyman: Let’s try to be specific about the capabilities that might be of concern. If you explicitly design an AI to recursively self-improve, that is, give it the power to modify and optimize its own code, then you are in a sense closing the loop on its own agency or behavior and taking the human out of the loop. As these models become more widely available in open source, more and more people will be able to train really powerful AI models. Today, only 20 organizations in the world can do it, but if 200 million people are able to train these models over the next decade, which is likely or even inevitable given the exponential reduction in computing costs, then someone runs the risk of tinkering and experimenting in a potentially dangerous way, which could cause detrimental effects as a result of recursively self-improving AI. This is the kind of thing I think we’re all concerned about.
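
To make the “closing the loop” idea concrete, here is a minimal, hypothetical sketch in Python of the difference between an optimization loop with and without a human checkpoint. The benchmark, the update rule, and the sign-off step are all invented stand-ins for illustration, not anyone’s real system:

```python
import random

def evaluate(params: list[float]) -> float:
    """Stand-in benchmark score; a real system would run full evals here."""
    return -sum((p - 0.5) ** 2 for p in params)

def self_improve(params: list[float], steps: int, human_in_loop: bool) -> list[float]:
    """Toy hill-climber that proposes changes to its own parameters."""
    for _ in range(steps):
        candidate = [p + random.gauss(0, 0.1) for p in params]
        if evaluate(candidate) > evaluate(params):
            # The containment question lives on this line: with no human
            # gate, every self-proposed improvement is adopted automatically.
            if human_in_loop and input("accept change? [y/n] ") != "y":
                continue
            params = candidate
    return params

# Fully autonomous run: the loop is "closed," and no human reviews any change.
print(self_improve([0.0, 1.0], steps=200, human_in_loop=False))
```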

Jamali: You write that containment of new technologies has always failed at some point, but nuclear weapons and nuclear technology seem to be something of an exception to this rule. Can you explain that?

Suleyman: Nuclear is an exception in the sense that today there are really only a few nuclear powers in the world. In fact, the number of nuclear powers has dropped from 11 to seven. We’ve basically spent the last 70 years reducing nuclear stockpiles, tracking the movements of all the uranium enrichment facilities and very carefully licensing and limiting access to knowledge about those kinds of materials and so on. In one sense that’s a great achievement, but unfortunately it’s very different from today’s artificial intelligence and synthetic biology. Nuclear technology is extremely expensive to produce, very complicated, and involves accessing and manipulating very dangerous radioactive materials. This is quite different from the nature of AI software, which is increasingly cheaper, more readily available, and accessible to millions of people.

Jamali: Some people may be familiar with the Turing test. This is a test created by computer scientist Alan Turing in the 1950s, intended to assess a computer’s intelligence by testing its written conversation skills. Basically, if a human can’t figure out whether they’re conversing with a computer or another human, we’d say the computer passed the test. In 2023, does this test still have meaning?

Suleyman: This is a question I explore in the book because now that we have AIs that are nearly as good as many humans at natural conversation, it’s unclear whether we’re any closer to knowing if they’re intelligent or not. The initial goal of the Turing test was to measure intelligence, but it turns out that what an AI can say doesn’t necessarily correlate with whether it’s smart. So another approach, which I think is actually more revealing and more useful, is to try to measure what an AI can do and focus on capabilities instead.

Jamali: In your book you propose a “modern Turing test”. What do you mean by that phrase?

Suleyman: The modern Turing test I’ve proposed is to give an AI a very general, high-level goal. For example, you’d say, “With a $100,000 investment, make $1 million over the course of a few months.” The AI might interpret this goal by saying: I will invent a new type of product and search online to see what people like, what they don’t like, what they might be interested in. Then I will contact a manufacturer, maybe in China, about my new product and negotiate the price, details and design. Then I’ll set up dropshipping and sell it on Amazon or somewhere else online, and I’ll build marketing materials around it. All of this is possible today with digital tools, but it would require a lot of human intervention. It is increasingly plausible, though, that the whole thing could be done autonomously from start to finish, perhaps with a little intervention where there are legal requirements. The goal here isn’t necessarily to make money; the dollar just serves as a measure of progress over a given period of time. If a system could do this kind of task, then we could begin to understand the implications for the future of work and how power will proliferate. Because if you have access to one of these tools, suddenly you’re able to do much, much more with less, and that changes the landscape of power.
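
As a rough illustration of what such a test would score, here is a hypothetical Python sketch of an agent working through a high-level goal as a sequence of steps, pausing only where a human must sign off, such as a legal requirement. The plan, the step names, and the sign-off points are invented for illustration and are not drawn from any real agent framework:

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    needs_human_signoff: bool = False  # e.g., a legal requirement

# A toy decomposition of "turn $100,000 into $1 million in a few months."
PLAN = [
    Step("research product demand online"),
    Step("negotiate price and design with a manufacturer"),
    Step("sign the supplier contract", needs_human_signoff=True),
    Step("set up dropshipping and an online listing"),
    Step("build and run marketing materials"),
]

def run_agent(plan: list[Step], starting_capital: float) -> float:
    """Execute the plan, counting how often a human had to intervene."""
    interventions = 0
    for step in plan:
        if step.needs_human_signoff:
            interventions += 1
            print(f"[paused for human] {step.action}")
        else:
            print(f"[autonomous] {step.action}")
    print(f"human interventions: {interventions} of {len(plan)} steps")
    # The test's success metric is the resulting balance; returned unchanged
    # here because this sketch performs no real transactions.
    return starting_capital

run_agent(PLAN, starting_capital=100_000.0)
```

On this framing, the dollar outcome and the intervention count together capture what the test measures: not conversation, but how much of a real-world goal the system can accomplish on its own.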

Jamali: I remember at one point in the book you called yourself a default optimist and towards the end you write that you originally intended to write a more positive book about AI. But then your perspective changed. What caused this change?

Suleyman: I think the thing that made me more concerned is that when I started researching the book and looked back at the history of containment, there really aren’t that many examples where we’ve said no to a technology. I spent much of the first third of the book incredibly optimistic and in love with technology. I love technology. I am a creator and a builder, and that inspires me every day to make things. So it’s hard to accept that technology is getting smaller and more powerful at the same time. When you play that forward 10 or 20 years, it opens up the fundamental question of what these models will look like in the future. What does it mean that we will be able to engineer synthetic life? Introducing friction into this process, along with human oversight and traditional governance, is how we can make sure we have the best chance of keeping it accountable to democratic governments and the general public interest.

Read more about this

Something tends to happen as companies race to dominate an emerging technology. They compete, but once regulators get involved, they cooperate. AI is no different.

In July, seven major U.S.-based AI companies met with President Joe Biden and agreed to safeguards to help manage the risks of new AI tools. Mustafa Suleyman was at that meeting, and he told me the companies involved have agreed to test their models, try to break them, and share best practices discovered in the process with one another. But here’s the rub: for now, that commitment is voluntary. Still, it is a step Suleyman called “appropriate for the moment.”

There are critics who say these voluntary pledges are really just a way for AI companies to write their own rules. But ultimately, that job belongs to Congress, whose members have entertained us over the years with memorable displays of ignorance about technology.

That could change, though. Senate Majority Leader Chuck Schumer recently launched a plan to convene a panel of experts to give lawmakers a crash course in artificial intelligence.
