
Some Glimpse AGI in ChatGPT. Others Call It a Mirage


Sébastien Bubeck, a machine learning researcher at Microsoft, woke up one night last September thinking about artificial intelligence, and unicorns.

Bubeck had recently gained early access to GPT-4, a powerful text-generation algorithm from OpenAI and an update to the machine learning model at the heart of the popular chatbot ChatGPT. Bubeck was part of a team working to integrate the new AI system into Microsoft's Bing search engine. But he and his colleagues kept marveling at how different GPT-4 seemed from anything they had seen before.

GPT-4, like its predecessors, was fed large amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in response to text input. But to Bubeck, the system's output seemed to do much more than make statistically plausible guesses.
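The prediction setup described above can be sketched with a toy bigram model. This is a drastic simplification of GPT-4's transformer architecture, using a tiny made-up corpus purely for illustration: it counts which word follows which and then picks the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real models are trained on vast amounts of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A large language model does the same kind of thing at enormously greater scale, conditioning on long stretches of context rather than a single preceding word.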

That night, Bubeck got up, went to his computer, and asked GPT-4 to draw a unicorn using TikZ, a relatively obscure programming language for generating scientific diagrams. Bubeck was using a version of GPT-4 that worked only with text, not images. But the code the model presented to him, when fed into TikZ rendering software, produced a crude but distinctly unicorn-like image cobbled together from ovals, rectangles, and a triangle. To Bubeck, such a feat surely required an abstract understanding of the elements of such a creature. "Something new is happening here," he says. "Maybe for the first time we have something we could call intelligence."
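For readers unfamiliar with TikZ, a drawing of the kind described, assembled from ovals, rectangles, and a triangle, might look something like the following. This is a hypothetical hand-written sketch for illustration, not Bubeck's prompt or the model's actual output.

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  % Body and head as ovals, legs as rectangles, horn as a triangle.
  \draw (0,0) ellipse (1.2 and 0.6);                     % body
  \draw (1.4,0.8) ellipse (0.45 and 0.3);                % head
  \draw (-0.8,-1.2) rectangle (-0.6,-0.6);               % back leg
  \draw (0.6,-1.2) rectangle (0.8,-0.6);                 % front leg
  \draw (1.6,1.05) -- (1.75,1.6) -- (1.9,1.05) -- cycle; % horn
\end{tikzpicture}
\end{document}
```

The point of the anecdote is that producing even a crude drawing like this from a text-only prompt requires composing primitive shapes into a coherent whole.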

How smart AI is getting and how much to trust the increasingly commonplace feeling that a piece of software is intelligent has become a pressing, almost panic-inducing question.

When OpenAI launched ChatGPT, then powered by GPT-3, last November, it stunned the world with its ability to write poetry and prose on a wide range of topics, solve coding problems, and synthesize knowledge from the web. But astonishment has been accompanied by shock and concern about the potential for academic fraud, disinformation, and mass unemployment, and by fears that companies like Microsoft will rush to develop technology that could prove dangerous.

Understanding the potential or the risks of new AI abilities means having a clear idea of what those abilities are, and are not. But while there is wide agreement that ChatGPT and similar systems give computers significant new abilities, researchers are only just beginning to study these behaviors and determine what's going on behind the prompt.

While OpenAI has promoted GPT-4 by touting its performance on bar and medical school exams, scientists who study aspects of human intelligence say its remarkable capabilities differ from ours in crucial ways. The models' tendency to make things up is well known, but the divergence runs deeper. And with millions of people using the technology every day, and companies staking their futures on it, this is a hugely important mystery.

Sparks of Disagreement

Bubeck and other AI researchers at Microsoft were inspired to wade into the debate by their experiences with GPT-4. A few weeks after the system was connected to Bing and its new chat feature launched, the company published a paper claiming that, in early experiments, GPT-4 showed "sparks of artificial general intelligence."

The authors presented a scattering of examples in which the system performed tasks that appear to reflect more general intelligence, significantly beyond earlier systems such as GPT-3. The examples suggest that, unlike most previous AI programs, GPT-4 is not limited to a specific task but can turn itself to all kinds of problems, a necessary quality of general intelligence.


