CICC: In 2026, large models will achieve further breakthroughs in reinforcement learning, model memory, context engineering, and other areas.
CICC | AI Ten-Year Outlook (26): Key Trends in Model Technology for 2026
CICC Research
Reviewing 2025, global large-model technology has advanced steadily, making gradual inroads into productivity scenarios, with notable progress in reasoning, programming, agentic capabilities, and multimodality. However, the models' general capabilities still face challenges in stability and hallucination rates. Looking to 2026, we believe large models will make further breakthroughs in reinforcement learning, model memory, and context engineering, moving from short-context generation to long reasoning chains and from text-only interaction to native multimodality, advancing further toward the long-term goal of AGI.
Summary
We expect pretraining scaling laws to hold again in 2026, with flagship models reaching a new parameter scale. Architecturally, Transformer-based designs will continue, with Mixture of Experts (MoE) becoming the consensus approach to balancing performance and efficiency, while attention mechanisms keep being optimized and swapped out. In terms of paradigm, pretraining scaling laws, high-quality data, and reinforcement learning will jointly raise model capabilities. One expectation for 2026 is that, as NVIDIA's GB-series chips mature and roll out, models will be trained on higher-performance multi-GPU clusters, further pushing parameter counts and the intelligence ceiling of pretraining under scaling laws.
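To make the MoE trade-off concrete, the sketch below shows top-k expert routing in PyTorch: each token activates only top_k of num_experts feed-forward experts, so total parameters grow with the expert count while per-token compute stays roughly flat. It is an illustrative toy with assumed sizes (d_model=512, 8 experts, top_k=2), not any flagship model's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    """Minimal top-k Mixture-of-Experts layer (illustrative only)."""

    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(16, 512)                       # 16 tokens
print(ToyMoE()(x).shape)                       # torch.Size([16, 512])
```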
The importance of reinforcement learning will grow, becoming the key to unlocking advanced model capabilities. Reinforcement learning raises the ceiling of model intelligence, enabling models to reason more logically and in line with human preferences. Its essence is "self-generated data + multi-round iteration," and the key inputs are large-scale compute and high-quality data. Overseas developers such as OpenAI and Google (Gemini) attach great importance to reinforcement learning, and domestic players such as DeepSeek and Alibaba's Qwen are following suit. We expect the share of reinforcement learning in the training of domestic and international models to rise further in 2026.
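As one concrete reading of "self-generated data + multi-round iteration," the sketch below follows a rejection-sampling-style loop: the model samples candidate answers, a verifiable reward keeps the good ones, and the model is fine-tuned on its own accepted outputs, round after round. Every helper here is a toy stand-in, not any lab's actual pipeline.

```python
import random

def sample(model, prompt, n=8):
    """Draw n candidate answers from the current policy (toy stub)."""
    return [f"{prompt}-answer-{random.random():.2f}" for _ in range(n)]

def reward(prompt, answer):
    """Verifiable reward, e.g. a unit test or math checker (toy stub)."""
    return random.random()

def finetune(model, dataset):
    """One round of supervised fine-tuning on accepted pairs (toy stub)."""
    print(f"fine-tuning on {len(dataset)} self-generated samples")
    return model

def self_improve(model, prompts, rounds=3, threshold=0.9):
    for _ in range(rounds):                      # multi-round iteration
        accepted = []
        for p in prompts:
            for ans in sample(model, p):         # self-generated data
                if reward(p, ans) >= threshold:  # keep only verified wins
                    accepted.append((p, ans))
        model = finetune(model, accepted)        # train on own best outputs
    return model

self_improve(model="toy-policy", prompts=["q1", "q2"])
```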
New approaches such as continual learning, model memory, and world models are poised for core breakthroughs. Continual learning and model memory address the root problem of catastrophic forgetting in large models by giving them selective memory mechanisms. Algorithms and architectures proposed by Google, such as Titans, MIRAS, and Nested Learning, focus on letting models dynamically adjust what they learn and remember according to a task's time span and importance, enabling continual or even lifelong learning. In addition, world models that aim to understand the causal laws of the physical world, such as Genie 3 and Marble, have opportunities for breakthroughs along their own exploratory paths.
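To illustrate what a selective memory mechanism could look like, the toy below writes an item into a key-value store only when it is "surprising" (poorly predicted from what is already stored), lets old entries decay, and evicts the weakest slot when full. This loosely echoes the surprise-driven memorization idea behind Titans, but it is a simplified sketch with assumed sizes and thresholds, not Google's actual Titans or MIRAS design.

```python
import numpy as np

class ToySelectiveMemory:
    """Toy key-value memory: surprise-gated writes, decay, eviction."""

    def __init__(self, dim=64, capacity=128, write_threshold=0.5, decay=0.99):
        self.keys = np.zeros((0, dim))
        self.values = np.zeros((0, dim))
        self.strength = np.zeros(0)            # importance of each slot
        self.capacity = capacity
        self.write_threshold = write_threshold
        self.decay = decay

    def read(self, query):
        """Attention-style read: strength-weighted softmax over keys."""
        if len(self.keys) == 0:
            return np.zeros_like(query)
        scores = self.keys @ query + np.log(self.strength + 1e-9)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ self.values

    def write(self, key, value):
        """Store only surprising items; forget the weakest when full."""
        surprise = np.linalg.norm(value - self.read(key))  # prediction error
        self.strength *= self.decay                        # old memories fade
        if surprise < self.write_threshold:
            return                                         # nothing new, skip
        if len(self.keys) >= self.capacity:
            weakest = np.argmin(self.strength)             # evict least important
            self.keys = np.delete(self.keys, weakest, axis=0)
            self.values = np.delete(self.values, weakest, axis=0)
            self.strength = np.delete(self.strength, weakest)
        self.keys = np.vstack([self.keys, key])
        self.values = np.vstack([self.values, value])
        self.strength = np.append(self.strength, surprise)

mem = ToySelectiveMemory()
k, v = np.random.randn(64), np.random.randn(64)
mem.write(k, v)
print(np.allclose(mem.read(k), v, atol=1e-3))  # recalls what it stored
```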
Risks
(Source: People’s Financial News)