
Nik Kairinos, CEO & Founder, Fountech AI
For years, the path to Artificial General Intelligence (AGI) – AI with human-like cognitive abilities – seemed straightforward: just keep scaling. Bigger was always better: more compute, larger datasets, bigger models. This formula delivered exponential improvements in AI capabilities, with each generation of models vastly outperforming the last. The industry was resolute that brute-force scaling would eventually yield human-level intelligence.
That era is over. Frontier AI models are speeding towards collapse, as industry leaders find that traditional scaling laws are hitting fundamental limits. Simply put, there’s just not enough data to keep making AI smarter.
As such, we’re faced with two critical challenges: an impending data scarcity crisis, and a profound deficit in human nuance that no amount of raw computation can resolve.
The Data Wall
The recent plateau in GPT model improvements was a warning sign. OpenAI’s GPT-4 was a giant leap for generative AI, whereas industry reaction suggested that GPT-5 was, at best, a marginal step forward. This slowdown is a mathematical inevitability: the strategy of simply adding more data and compute is colliding with physical reality. Data, unlike compute, is fundamentally finite.
According to Epoch AI, the indexed web contains an estimated 300-500 trillion tokens of text. Meanwhile, today’s frontier models like Meta’s Llama 3 consume 10-15 trillion tokens in a single training run. With dataset sizes growing 2.5 times annually while compute scales 4 times per year, we’re already speeding towards exhaustion.
No surprise then that Epoch AI projects we’ll deplete publicly available training data somewhere between 2026 and 2032.
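A rough back-of-the-envelope calculation using the figures above lands in the same window. The stock, run size, and start year below are illustrative midpoints of the quoted ranges, not Epoch AI's actual methodology:

```python
import math

# Illustrative midpoints of the figures quoted above (assumptions,
# not Epoch AI's exact model).
stock_tokens = 400e12      # ~400 trillion tokens of indexed web text
run_tokens = 12e12         # ~12 trillion tokens per frontier training run
growth_per_year = 2.5      # dataset sizes growing ~2.5x annually
start_year = 2024

# Solve run_tokens * growth_per_year**t >= stock_tokens for t, i.e.
# the number of years until one training run consumes the whole stock.
years = math.log(stock_tokens / run_tokens) / math.log(growth_per_year)
exhaustion_year = start_year + years

print(f"{years:.1f} years -> ~{exhaustion_year:.0f}")
```

Even this crude extrapolation gives roughly four years of runway – comfortably inside Epoch AI’s 2026–2032 projection.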
The instinctive response is to say that AI should generate and then train itself on synthetic data, but this only compounds the problem. Synthetic data creates nothing fundamentally new; it merely recombines existing patterns, forming an echo chamber that amplifies existing limitations.
It’s like trying to learn something new by reading a book you wrote yourself.
Worse, this approach risks triggering ‘model collapse’: research from NYU’s Center for Data Science shows that when models are trained on synthetic data generated by other AI models, and then used to create more synthetic data in a feedback loop, performance degrades significantly.
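The feedback loop can be illustrated with a toy simulation – an entirely hypothetical categorical ‘language’ resampled generation after generation, not the NYU experimental setup. Once a token disappears from one generation’s output, no later generation can recover it, so diversity can only shrink:

```python
import random

def simulate_collapse(vocab_size=1000, sample_size=1000,
                      generations=50, seed=0):
    """Resample a categorical distribution from its own samples and
    track how many distinct tokens survive each generation."""
    rng = random.Random(seed)
    population = list(range(vocab_size))   # generation 0: full vocabulary
    support_sizes = [len(set(population))]
    for _ in range(generations):
        # Each generation is 'trained' only on the previous one's output.
        population = [rng.choice(population) for _ in range(sample_size)]
        support_sizes.append(len(set(population)))
    return support_sizes

sizes = simulate_collapse()
print(f"distinct tokens: {sizes[0]} -> {sizes[-1]}")
```

Because each generation samples only from the previous generation’s output, the set of surviving tokens never grows – the echo chamber in miniature.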
The Intelligence Gap
But even infinite data wouldn’t solve AI’s deeper problem: the absence of genuine human intelligence. Current models reflect the biases, limitations, and surface-level patterns of their training data. They lack the nuanced understanding that characterises human cognition.
This deficit cannot be resolved through more data or additional compute. True intelligence encompasses emotional reasoning, moral judgment, creative insight, and empathy – qualities that cannot be scraped from the web or refined through algorithmic optimisation.
These capabilities must be actively taught by humans who possess them, transferred through interaction rather than extraction.
The diversity of human experience matters profoundly here. There is no Rosetta Stone for intelligence – it varies massively across culture and context. At present, LLMs are rife with bias and blind spots: indeed, a joint study by leading US universities found that AI chatbots produce less empathetic responses depending on a user’s race.
Human Scale, Not Data
Shooting for infinite growth with finite data supplies does not lead to AGI. The missing element from this equation is humans themselves. The goal isn’t to build systems that answer questions correctly yet remain prone to bias and misinterpretation – it is to develop AI that is intuitive, ethical, and capable of adapting to diverse cultural contexts.
This is a fundamental paradigm shift – instead of treating humans as data sources to be mined, we must position them as active educators, transferring the subtle, contextual intelligence that makes people human.
The future of AI must be built on the infinite complexity of human experience, not the finite expanse of the web. That future requires investment not just in compute, but in the systems and infrastructure that enable humans to teach machines what it truly means to think.
Achieving this will require unprecedented collaboration between academia, industry innovators, enterprises, and investors. Betting on AI’s continued progress while ignoring the wall ahead is dangerously shortsighted, no matter how much synthetic data and hardware we throw at the problem.
– ENDS –
About the Author
Nik Kairinos is Founder, CEO and Chief AI Architect of Fountech AI, bringing over 40 years of pioneering work in artificial intelligence and deep learning.
His focus is on developing the mathematical frameworks needed to achieve the next breakthrough in Artificial General Intelligence (AGI). At Fountech, Nik and his team are dedicated to demonstrating that the future lies in a symbiotic partnership between humans and AI, where AGI enhances, not replaces, human intelligence.
He actively collaborates with industry leaders, research institutions, and governments to implement AI in practical, high-impact applications. Through strategic partnerships, Fountech licenses its breakthrough IP and launches agile spin-offs, embedding transformative AI into core business strategy and operations.




