Recently I have been thinking about a fundamental limitation of AI. Like the protagonist of the film "Memento", current large language models (LLMs) may suffer from a kind of anterograde amnesia: once training ends, they cannot form new long-term memories.
With their parameters frozen, the models cannot truly learn from new experience. Chat history and retrieval systems compensate to a degree, but these are external memory; the knowledge is never internalized.
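To make the "external memory" point concrete, here is a minimal sketch in Python of how a retrieval layer typically papers over frozen weights (every name here, including ExternalMemory and the bag-of-words embedding, is a hypothetical stand-in for real components): past exchanges are stored outside the model, and the most similar ones are pasted back into the prompt.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use a learned encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ExternalMemory:
    """Stores past exchanges outside the model; the weights never change."""
    def __init__(self):
        self.entries: list[str] = []

    def write(self, text: str) -> None:
        self.entries.append(text)

    def recall(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, embed(e)), reverse=True)
        return ranked[:k]

memory = ExternalMemory()
memory.write("User prefers responses in French.")
memory.write("User is debugging a Rust lifetime error.")

# The "learning" happens in the prompt, not in the parameters:
notes = "\n".join(memory.recall("How do I fix this Rust borrow issue?"))
prompt = f"Relevant notes:\n{notes}\n\nQuestion: How do I fix this Rust borrow issue?"
```

Nothing in the model itself changes; the apparent learning lives entirely in what gets prepended to the next prompt.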
According to a16z's analysis, in-context learning (ICL) is search, not genuine learning: because it performs no compression, it cannot make creative discoveries or handle adversarial scenarios. On problems that demand a fundamentally new approach, such as proving Fermat's Last Theorem, an LLM can only recombine knowledge it already has.
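A toy contrast may help illustrate the compression claim. The nearest-neighbor "learner" below keeps every example verbatim, which is search; the two-parameter fit compresses the same examples into a rule and can therefore extrapolate. This is an illustrative sketch of the distinction, not a claim about LLM internals.

```python
# Ten examples of the underlying rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

# Search: keep everything, answer by nearest stored example.
def nearest_neighbor(x_query: float) -> float:
    _, y_best = min(data, key=lambda p: abs(p[0] - x_query))
    return y_best

# Learning: compress the ten examples into two numbers (w, b) via SGD.
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        w -= 0.01 * err * x
        b -= 0.01 * err

print(nearest_neighbor(100))  # ~19: search cannot leave its stored examples
print(w * 100 + b)            # ~201: the compressed rule extrapolates
```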
The solutions researchers have proposed fall into three paths. The first enhances the context layer, for example with multi-agent systems. The second is modularization: knowledge modules embedded into an existing architecture, such as adapters or compressed key-value caches. The third is weight updates, which achieve real learning at the parameter level through techniques like test-time training or meta-learning.
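Of the three paths, the modular one is the easiest to show in a few lines. Below is a minimal sketch of a bottleneck adapter in PyTorch; the layer sizes and the nn.Linear stand-in for a frozen LLM layer are illustrative assumptions. The backbone's weights stay frozen, and only the small adapter trains.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    added residually to the frozen layer's output."""
    def __init__(self, d_model: int, d_bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)
        nn.init.zeros_(self.up.weight)  # start as an identity map
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

backbone = nn.Linear(64, 64)        # stand-in for one frozen LLM layer
for p in backbone.parameters():
    p.requires_grad = False         # the base model never changes

adapter = Adapter(d_model=64)       # only these weights train
x = torch.randn(8, 64)
out = adapter(backbone(x))
```

Because the upward projection is initialized to zero, the adapter begins as an identity map and can be bolted onto a deployed model without changing its behavior.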
Weight updates, however, come with serious challenges: catastrophic forgetting, temporal decoupling, and degradation of safety alignment. Updating a model after deployment is not just a technical problem; it also raises auditability and privacy concerns.
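The post does not name a mitigation, but Elastic Weight Consolidation (EWC) is one standard response to catastrophic forgetting: a quadratic penalty anchors the parameters the old task depended on most. A minimal sketch follows; the Fisher information is faked with ones purely for brevity, whereas in practice it is estimated from squared gradients on the old task.

```python
import torch
import torch.nn as nn

def ewc_penalty(model: nn.Module,
                old_params: dict[str, torch.Tensor],
                fisher: dict[str, torch.Tensor],
                lam: float = 100.0) -> torch.Tensor:
    """EWC: quadratically penalize drift on parameters the previous
    task relied on, weighted by the Fisher estimate."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

model = nn.Linear(4, 2)
# Snapshot taken after the old task; Fisher faked with ones for brevity.
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}

x, y = torch.randn(16, 4), torch.randn(16, 2)
new_task_loss = nn.functional.mse_loss(model(x), y)
total = new_task_loss + ewc_penalty(model, old_params, fisher)
total.backward()  # gradients now trade off new learning against forgetting
```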
Future systems are likely to become hierarchical: ICL handling rapid adaptation, modules enabling specialization, and weight updates allowing deep internalization. Escaping anterograde amnesia takes more than expanding the filing cabinet; it demands compression, abstraction, and genuine learning mechanisms.
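As a purely hypothetical sketch of that hierarchy, a dispatcher might route each piece of new information to the cheapest layer that can hold it; all names and the routing policy below are invented for illustration.

```python
from enum import Enum, auto

class Persistence(Enum):
    TRANSIENT = auto()   # needed for one conversation
    DOMAIN = auto()      # a recurring specialty
    PERMANENT = auto()   # should survive indefinitely

def route(signal: str, persistence: Persistence) -> str:
    """Send each kind of new information to the cheapest adequate layer."""
    if persistence is Persistence.TRANSIENT:
        return f"ICL: put '{signal}' in the context window"
    if persistence is Persistence.DOMAIN:
        return f"module: train or load an adapter for '{signal}'"
    return f"weights: queue '{signal}' for an audited parameter update"

print(route("user prefers metric units", Persistence.TRANSIENT))
print(route("medical terminology", Persistence.DOMAIN))
print(route("newly established chemistry result", Persistence.PERMANENT))
```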
The field is attracting many startups, experimenting across these layers: context management, modular design, and parameter optimization. No definitive winner has emerged yet, but significant change is likely in the coming years.