Why it matters: Major tech players have spent the last few years betting that simply throwing more computing power at AI will lead to artificial general intelligence (AGI) – systems that match or surpass human cognition. But a recent survey of AI researchers suggests growing skepticism that endlessly scaling up current approaches is the right path forward.
A recent survey of 475 AI researchers reveals that 76% believe adding more computing power and data to current AI models is “unlikely” or “very unlikely” to lead to AGI.
The survey, conducted by the Association for the Advancement of Artificial Intelligence (AAAI), comes as billions are poured into building massive data centers and training ever-larger generative models. Respondents argue that the returns on those investments are diminishing.
Stuart Russell, a computer scientist at UC Berkeley and a contributor to the report, told New Scientist: “The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced.”
The numbers tell the story. Last year alone, venture capital funding for generative AI reportedly topped $56 billion, according to a TechCrunch report. The push has also driven massive demand for AI accelerators, with a February report stating that the semiconductor industry reached a whopping $626 billion in revenue in 2024.
Running these models has always required massive amounts of energy, and as they’re scaled up, the demands have only risen. Companies like Microsoft, Google, and Amazon are therefore securing nuclear power deals to fuel their data centers.
Yet, despite these colossal investments, the performance of cutting-edge AI models has plateaued. For instance, many experts have suggested that OpenAI’s latest models show only marginal improvements over their predecessors.
Beyond the skepticism, the survey also highlights a shift in priorities among AI researchers. While 77% prioritize designing AI systems with an acceptable risk-benefit profile, only 23% are focused on directly pursuing AGI. Additionally, 82% of respondents believe that if AGI is developed by private entities, it should be publicly owned to mitigate global risks and ethical concerns. However, 70% oppose halting AGI research until full safety mechanisms are in place, suggesting a cautious but forward-moving approach.
Cheaper, more efficient alternatives to scaling are being explored. OpenAI has experimented with “test-time compute,” where AI models spend more time “thinking” before generating responses. This method has yielded performance boosts without the need for massive scaling. However, Arvind Narayanan, a computer scientist at Princeton University, told New Scientist that this approach is “unlikely to be a silver bullet.”
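To illustrate the intuition behind test-time compute, here is a toy sketch (not OpenAI's actual method): instead of accepting a model's first answer, spend extra inference-time computation by sampling the model repeatedly and taking a majority vote, a strategy often called self-consistency. The `noisy_model` function and its accuracy figure are invented stand-ins for a real generative model.

```python
import random
from collections import Counter

def noisy_model(question, correct_answer="42", accuracy=0.6):
    """Toy stand-in for a generative model: answers correctly with
    probability `accuracy`, otherwise returns a wrong answer."""
    if random.random() < accuracy:
        return correct_answer
    return random.choice(["wrong_a", "wrong_b", "wrong_c"])

def answer_once(question):
    """Baseline: one sample, no extra inference-time compute."""
    return noisy_model(question)

def answer_with_test_time_compute(question, samples=25):
    """Spend more compute at inference: sample many answers and
    return the most common one (majority vote)."""
    votes = Counter(answer_once(question) for _ in range(samples))
    return votes.most_common(1)[0][0]

random.seed(0)
# A single sample is right only ~60% of the time; a 25-way vote
# is almost always right in this toy setup.
print(answer_with_test_time_compute("What is 6 * 7?"))
```

The point of the sketch is that accuracy improves without making the underlying model any larger; the cost shifts from training-time scaling to inference-time sampling, which is why it is seen as an alternative to simply adding parameters and data.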
On the flip side, tech leaders like Google CEO Sundar Pichai remain optimistic, asserting that the industry can “just keep scaling up” – even as he hinted that the era of low-hanging fruit in AI gains is over.