
Gary Marcus used to call AI stupid, now he calls it dangerous


Back then, just a few months ago, Marcus's quibbles were technical. But now that large language models have become a global phenomenon, his focus has shifted. The gist of Marcus's new message is that the chatbots from OpenAI, Google, and others are dangerous entities whose powers will lead to a tsunami of misinformation, security vulnerabilities, and defamatory "hallucinations" that automate smears. This seems to court a contradiction: for years, Marcus had charged that the claims of AI's builders were exaggerated. Why is AI now so formidable that society must restrain it?

Marcus, always loquacious, has an answer: "Yes, I have said for years that [LLMs] are actually pretty dumb, and I still think so. But there is a difference between power and intelligence. And all of a sudden we are giving them a lot of power." In February he concluded that the situation was alarming enough that he would devote most of his energy to addressing the problem. Eventually, he says, he'd like to run a nonprofit dedicated to making the most of AI and avoiding the worst of it.

Marcus argues that to counter all the potential damage and destruction, lawmakers, governments, and regulators must rein in the development of AI. Along with Elon Musk and dozens of other scientists, policy wonks, and just plain scared onlookers, he signed the now-famous petition demanding a six-month pause in training new LLMs. But he admits that he doesn't really think such a pause would make a difference, and that he signed on primarily to align himself with the community of AI critics. Instead of a training timeout, he would prefer a pause in deploying new models or iterating on current ones. Presumably this would have to be forced on companies, since there is fierce, almost existential competition between Microsoft and Google, with Apple, Meta, Amazon, and countless startups wanting to get into the game.

Marcus has an idea of who might do the enforcing. He has lately insisted that the world needs, immediately, "a global, neutral, not-for-profit International Agency for AI," which would go by an acronym that sounds like a shout (Iaai!).

As outlined in an opinion piece he co-authored in The Economist, such a body could function like the International Atomic Energy Agency, which conducts audits and inspections to identify nascent nuclear programs. Presumably, this agency would monitor algorithms to make sure they don't encode bias, promote misinformation, or take over power grids while we're not looking. While it seems like a stretch to imagine the United States, Europe, and China working together on this, perhaps the threat of an alien intelligence, albeit a homegrown one, overthrowing our species could lead them to act in the interests of Team Human. Hey, it worked with that other global threat, climate change! Oh…

In any case, the discussion about controlling AI will gain even more steam as the technology becomes increasingly woven into our lives. So expect to see a lot more of Marcus and a host of other talking heads. And that's not a bad thing. The discussion about what to do with AI is healthy and necessary, even if the fast-moving technology may well develop regardless of our conscientious but belated measures. ChatGPT's rapid ascension to all-purpose business tool, entertainment device, and confidant indicates that, scary or not, we want these things. Like any other great technological breakthrough, superintelligence seems destined to bring us irresistible benefits, even as it changes the workplace, our cultural consumption, and, inevitably, us.

