I recently watched a YC interview with DeepMind co-founder Demis Hassabis, and some of his viewpoints hit close to home. He said that if you start a deep-technology project with a ten-year horizon now, you must plan for the emergence of AGI along the way. This is not alarmism; his personal timeline is around 2030.

Listening to his technical details, I understood why he believes AGI is still missing one or two puzzle pieces. Large-scale pretraining, RLHF, and chain-of-thought techniques have been validated, and he is confident they will be part of the final AGI architecture. But continual learning, long-horizon reasoning, and certain aspects of memory remain unsolved. The common workaround today is to cram everything into the context window, which is crude. He gave an example: a million-token context window sounds large, but for real-time video streaming it holds only about 20 minutes of data, nowhere near enough for a system to understand your life over one or two months.
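The arithmetic behind that example is easy to check. A minimal sketch, using only the two numbers from the interview (one million tokens, 20 minutes of video); the per-second token rate and the "waking hours per month" figure are derived assumptions, not official specs:

```python
# Back-of-envelope: what a 1M-token context implies for streaming video.
# The only inputs are the interview's figures; everything else is derived.
CONTEXT_TOKENS = 1_000_000
MINUTES_COVERED = 20

tokens_per_second = CONTEXT_TOKENS / (MINUTES_COVERED * 60)
print(f"~{tokens_per_second:.0f} tokens per second of video")

# How big would the window need to be to hold "a month of your life"?
# Assume 16 waking hours a day for 30 days (an illustrative assumption).
waking_seconds_per_month = 30 * 16 * 3600
needed_tokens = waking_seconds_per_month * tokens_per_second
print(f"~{needed_tokens / 1e9:.2f}B tokens for one month of waking video")
```

At roughly 833 tokens per second, a month of waking-hours video needs on the order of 1.4 billion tokens of context, three orders of magnitude beyond today's windows, which is exactly the gap he is pointing at.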

The reasoning problem is even more interesting. He often watches Gemini play chess and finds that it sometimes recognizes a move is bad but cannot find a better alternative, so it plays the bad move anyway. A precise reasoning system should not behave this way. That is why we see so-called "jagged intelligence": a model that can solve IMO gold-medal problems yet stumbles on elementary-school math.

He admits that we are just beginning with agents. To reach AGI, there must be a system capable of actively solving problems for you; that is the agent approach. But it is still experimental, and in most scenarios it is just icing on the cake. He noted that no one has yet used AI tools to create an AAA game that tops the app-store charts. In theory, current computing power and tooling should make it possible, but it has not happened, which points to gaps in the process or the tools. He expects to see such results within the next 6 to 12 months.

Interestingly, small models are changing the game. Their Flash model achieves about 95% of frontier-model performance at roughly one-tenth of the cost. Knowledge distillation was invented at DeepMind, and their version remains among the best in the world. They are highly motivated to optimize: Google is integrating Gemini into every product, reaching hundreds of millions of users, so the models must be extremely fast, highly efficient, and very cheap. He does not believe we have hit information-theoretic limits; within six months to a year of a frontier model's release, its capabilities can be compressed into models that run on edge devices.
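For readers unfamiliar with the technique he is referring to, here is a minimal sketch of the classic soft-target distillation loss (the Hinton-style formulation). This is the textbook idea only, not DeepMind's actual training recipe; the logits and temperature are toy values:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions.

    The student is trained to match the teacher's full output
    distribution, not just its top answer; a higher temperature
    exposes the teacher's "dark knowledge" about wrong classes.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    # The T^2 factor keeps gradient scale comparable across temperatures.
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )

teacher = [4.0, 1.0, 0.2]   # toy logits from a large "frontier" model
student = [2.5, 1.2, 0.4]   # toy logits from a smaller model mid-training
print(f"soft-target loss: {distillation_loss(teacher, student):.4f}")
```

Minimizing this loss (usually mixed with a standard cross-entropy term on the true labels) is what lets a small model inherit most of a large model's behavior, which is the economics behind a Flash-class model at a tenth of the cost.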

Regarding scientific applications, progress at Isomorphic Labs is promising. AlphaFold is just one part of the drug discovery process. Their ultimate goal is to create a complete virtual cell—a full-function cell simulator capable of applying perturbations. They estimate it will take about ten years to develop a full virtual cell, starting now with the virtual nucleus.

The most practical advice for entrepreneurs: tackling hard problems and tackling easy problems are equally difficult, just in different ways. Life is short, so focus your energy on things that truly will not get done unless you do them. Cross-disciplinary combinations will also become more common in the coming years, and AI will make crossing fields easier. But most importantly, take the AGI timeline seriously: imagine what that world will look like, and then build something that remains useful when it arrives.