Gemini 3 may shine, but scientists say AI is stuck in stupidity

  • NeurIPS 2025 researchers say the current scaling approach is reaching its limits.
  • Despite Gemini 3’s strong performance, experts say LLMs still cannot reason or understand cause and effect.
  • Without a fundamental overhaul of how AI is built and trained, AGI remains a distant goal.

The recent successes of AI models like Gemini 3 don’t overshadow the most disheartening message from this week’s NeurIPS 2025 AI conference: we could be building AI skyscrapers on intellectual sand.

As Google celebrated the leap in performance of its latest model, researchers at the world’s largest conference on artificial intelligence warned that, as impressive as the current crop of large language models may seem, the dream of artificial general intelligence will keep receding unless the field rethinks its fundamental principles.

Everyone agrees that simply scaling up current Transformer models with more data, more GPUs, and more training time no longer yields significant returns. The big jump from GPT-3 to GPT-4 is increasingly seen as a one-off event; since then, progress has felt less like breaking glass ceilings and more like polishing the glass.

This is a problem not only for scientists, but also for anyone banking on the idea that AGI is imminent. According to this year’s researchers, the truth is much less cinematic. What we have built are very sophisticated models that are good at producing answers that seem right. But looking smart and being smart are two very different things, and NeurIPS made it clear that the gap is not closing.

The common technical term is “hitting a wall”: the idea that the current approach (training ever-larger models on ever-larger datasets) is reaching its limits, both physically and cognitively. We are running out of high-quality human data. We burn enormous amounts of electricity for small marginal gains. And perhaps most worryingly, the models still make mistakes that no one wants their doctor, their pilot, or their science lab to make.

It’s not that Gemini 3 failed to impress. Google invested in optimizing the model’s architecture and training techniques instead of simply throwing more hardware at the problem, and the resulting performance is remarkably good. But Gemini 3’s dominance only highlights the problem: it is still built on the same architecture that everyone now tacitly admits was never designed for general intelligence. It is simply the best version of a fundamentally limited system.

Manage expectations

Among the most discussed alternatives were neurosymbolic architectures. These are hybrid systems that combine the statistical pattern recognition of deep learning with the structured logic of classical symbolic AI.
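The conference discussions did not point to one specific system, but the general neurosymbolic pattern (a statistical component proposes, a logic component verifies) can be sketched in a few lines of Python. Every name here (`neural_propose`, `symbolic_verify`) is hypothetical, and the “neural” part is a hard-coded stand-in, not a real model:

```python
# Toy sketch of a neurosymbolic loop: a statistical component proposes
# an answer, and a symbolic rule layer verifies it before it is accepted.

def neural_propose(question: str) -> dict:
    """Stand-in for a learned model: returns a candidate answer with a
    confidence score (hard-coded here purely for illustration)."""
    return {"answer": "4", "confidence": 0.92}

def symbolic_verify(question: str, answer: str) -> bool:
    """Symbolic check: for arithmetic questions, recompute the result
    with exact logic instead of trusting the pattern match."""
    expression, _, _ = question.partition("=")
    try:
        # Evaluate with builtins stripped, so only arithmetic is allowed.
        return str(eval(expression, {"__builtins__": {}})) == answer
    except Exception:
        return False

def answer(question: str) -> str:
    candidate = neural_propose(question)
    if symbolic_verify(question, candidate["answer"]):
        return candidate["answer"]
    return "rejected by symbolic layer"

print(answer("2 + 2 ="))  # the symbolic layer confirms the proposal: 4
```

The point of the pattern is that the statistical side can be wrong in ways the symbolic side catches deterministically; this is the reliability argument the researchers are making, reduced to a toy.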

Others advocated “world models” that mimic how humans internally simulate cause and effect. If you ask one of today’s chatbots what happens when you drop a plate, it might write something poetic, but it has no internal sense of physics or any real understanding of what will happen next.

These proposals aren’t about making chatbots more charming; they’re about making AI systems reliable in environments where it matters. The idea of AGI has become a marketing term and a fundraising pitch. But when the smartest people in the room say we’re still missing the basic ingredients, maybe it’s time to reset expectations.

NeurIPS 2025 may not be remembered for any single presentation, but for its recognition that the industry’s current trajectory is incredibly profitable but intellectually stagnant. To move forward, we must let go of the idea that more is always better.