Stay informed with free updates
Simply register at Artificial intelligence myFT Digest – Delivered straight to your inbox.
Any company like OpenAI, headed for a loss of $5 billion last year on $3.7 billion in revenueYou need a good story to tell to keep the funding flowing. And there are no stories much more compelling than saying that your company is about to transform the world and create a “glorious future” by developing artificial general intelligence.
Definitions vary on what AGI means, as it represents a theoretical rather than a technological threshold. But most AI researchers would say it is the point at which machine intelligence surpasses human intelligence in most cognitive fields. Achieving AGI is the industry’s holy grail and the explicit mission of companies like OpenAI and Google DeepMind, although some holdouts still doubt it will ever be achieved.
Most predictions for when we might reach AGI have been getting closer due to amazing progress in the industry. Still, Sam Altman, CEO of OpenAI, surprised many on Monday. when he published on their blog: “We are now confident that we know how to build AGI as we have traditionally understood it.” The company, which sparked the latest AI investment frenzy after launching its ChatGPT chatbot in November 2022, It was valued at $150 billion in October. ChatGPT now has over 300 million weekly users.
There are several reasons to be skeptical of Altman’s claim that AGI is essentially a solved problem. OpenAI’s most persistent critic, AI researcher Gary Marcus, quickly got it right. “We are now confident that we can make up nonsense at unprecedented levels and get away with it,” Marcus he tweeted, parodying Altman’s statement. In a separate post, Marcus repeated his claim that “there is no justification for claiming that current technology has achieved general intelligence,” citing its lack of reasoning power, understanding, and reliability.
But OpenAI’s extraordinary valuation apparently means that Altman may be right. In his post, he suggested that AGI should be viewed more as a process toward achieving superintelligence than as an end point. Still, if the threshold were ever crossed, AGI would likely count as the largest event of the century. Even the sun god of news that is Donald Trump would be eclipsed.
Investors believe that a world in which machines become smarter than humans in most fields would generate phenomenal wealth for their creators. If used wisely, AGI could accelerate scientific discoveries and help us become much more productive. But super-powered AI also raises concerns: excessive concentration of corporate power and possibly existential risk.
As fun as these debates can be, they remain theoretical and, from an investment perspective, unknowable. But OpenAI suggests that enormous value can still be gained by applying increasingly powerful but limited AI systems to an increasing number of real-world uses. The industry phrase of the year is agent AI, which uses digital assistants to perform specific tasks. Speech at the CES event in Las Vegas this weekJensen Huang, CEO of chip designer Nvidia, defined agent AI as systems that can “perceive, reason, plan and act.”
Agent AI is undoubtedly one of the biggest attractions for venture capital. CB Insights 2024 State of Risk Report estimated that AI startups attracted 37 percent of the global total of $275 billion in venture capital funding last year, up from 21 percent in 2023. The fastest-growing areas of investment were agents of AI and customer service. “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the production of companies,” Altman wrote.
Take travel as an example. Once prompted via text or voice message, AI agents can book entire business trips: securing the best flights, finding the most convenient hotel, scheduling daily appointments, and arranging taxi pickups. That methodology applies to a wide range of business functions and it’s a fair bet that an AI startup somewhere will be discover how to automate them.
Relying on autonomous AI agents to perform such tasks requires the user to trust the technology. The problem of hallucinations is now well known. Another concern is rapid injection.where a malicious counterparty tricks an AI agent into revealing sensitive information. Building a secure multi-agent economy at scale will require developing reliable infrastructure, which may take some time.
The returns from AI will also have to be spectacular to justify the colossal investments being made by big tech companies and venture capital firms. How long will impatient investors remain?