Kimi Founder Yang Zhilin: Future AI Research and Development Will Enter the AI-Led Era! Company Valuation Has Reached $18 Billion, Quadrupled in 3 Months

“In the coming years, including this year and next, the way artificial intelligence research and development is conducted will undergo significant changes, with more and more research work being led by AI.”

On March 25th, Yang Zhilin, founder of Moonshot AI, delivered a speech titled “Open Source AI: Accelerating the Exploration of Intelligence Limits” at the 2026 Zhongguancun Forum plenary session, presenting key judgments on the development of large models and systematically revealing Kimi’s latest technical roadmap and industry value.

Image source: Provided by the company

Yang Zhilin pointed out that the essence of large models is converting energy into intelligence. Scaling up remains the core foundation of AI development, but scaling does not mean brute-force stacking of computing power and energy; rather, it centers on efficiency gains. To this end, Kimi builds its scaling strategy around three main directions: Token efficiency, long context, and agent swarms, maximizing intelligence from limited resources.

Yang Zhilin emphasized that effective data is finite and essentially fixed. Improving Token efficiency therefore means using better network architectures and optimizers to extract more intelligence from the same amount of data. Meanwhile, Kimi extends long-context capability through its self-developed Kimi Linear architecture, enabling the model to reach lower loss on longer inputs and to support longer outputs and more complex tasks. In Kimi’s latest flagship model K2.5, the company pioneered Agent Swarm technology, breaking through the efficiency bottleneck of single agents.
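The article does not describe how Kimi Linear works internally, but "linear" architectures in this space generally replace quadratic softmax attention with a kernelized form whose cost grows linearly in sequence length, which is what makes much longer inputs affordable. The sketch below contrasts the two; it is a generic textbook-style illustration, not the actual Kimi Linear design, and all function names are ours.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: materializes an (n x n) score matrix, O(n^2) in length n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized linear attention: a positive feature map phi lets keys and
    values be summarized once, so total cost grows linearly with length n."""
    Qp, Kp = phi(Q), phi(K)        # (n, d) feature-mapped queries and keys
    KV = Kp.T @ V                  # (d, d_v) summary, shared by every query
    Z = Kp.sum(axis=0)             # (d,) normalizer
    return (Qp @ KV) / (Qp @ Z)[:, None]

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

The key design point is that `KV` and `Z` do not depend on which query is asking, so the per-token cost is constant rather than proportional to the context length.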

Regarding the underlying architecture, on March 16th Kimi released Attention Residuals and fully open-sourced the technique. According to Yang Zhilin, it builds on the residual networks of ten years ago, rotating the attention mechanism from the temporal dimension to the depth dimension. By integrating the outputs of all model layers during training, it achieves significant performance improvements at only about 2% additional cost. “As computing power advances and research methods evolve, research has shifted from a primarily academic, idea-driven approach to one that emphasizes engineering integration. This allows us to design very solid scaling validation experiments and draw reliable conclusions. As a result, many techniques once considered standard are now open to challenge.”
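The description above, attention applied along depth so that every layer's output can contribute, can be sketched as follows. This is our generic illustration of the stated idea only: the released method's exact formulation is not given in the article, and every name and shape here is a hypothetical choice.

```python
import numpy as np

def depth_attention(layer_outputs, w_query):
    """For each token, attend over the stack of per-layer outputs (the depth
    dimension), mixing all layers' contributions instead of keeping only the
    last layer's output.
    layer_outputs: (L, n, d) outputs of L layers for n tokens
    w_query:       (d, d) projection producing queries from the top layer"""
    L, n, d = layer_outputs.shape
    q = layer_outputs[-1] @ w_query                            # (n, d)
    # depth-wise scores: how strongly each token draws on each layer
    scores = np.einsum('nd,lnd->nl', q, layer_outputs) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # (n, L)
    # weighted mix of all layers' outputs, per token
    return np.einsum('nl,lnd->nd', weights, layer_outputs)

rng = np.random.default_rng(1)
L, n, d = 6, 5, 8
outs = rng.normal(size=(L, n, d))
mixed = depth_attention(outs, rng.normal(size=(d, d)) / np.sqrt(d))
print(mixed.shape)  # (5, 8)
```

The extra cost is one small attention pass over L layer outputs per token, which is consistent in spirit with the article's claim of a low single-digit percentage overhead, though the 2% figure is specific to Kimi's implementation.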

Currently, Kimi’s open-source ecosystem has become a new global standard in the AI industry: at NVIDIA GTC 2026, Kimi models were used as benchmarks for chip performance evaluation; global chip manufacturers must verify performance improvements with Kimi when releasing new products; many research institutions are conducting cutting-edge research based on K2.5. Yang Zhilin believes that open sourcing reduces barriers for enterprises, researchers, and ordinary users to access intelligence. Open technology will foster a symbiotic ecosystem and accelerate overall industry progress.

Image source: Provided by the company

In his speech, Yang Zhilin also systematically outlined the three stages of large model training. Three years ago, he said, the industry mainly used naturally occurring internet data combined with a small amount of manual annotation to judge whether content aligned with values and preferences. By 2025, the industry had placed greater emphasis on large-scale reinforcement learning systems, in which humans select and define high-quality tasks that the model then improves on through reinforcement learning; this approach has driven performance gains in programming, mathematics, and related fields.

In the coming years, including this year and next, AI research and development methods will undergo major changes, with more and more research being led by AI. In the future, each researcher will be equipped with vast amounts of Tokens, with AI automatically synthesizing new tasks, constructing new environments, defining optimal reward functions, and even autonomously exploring entirely new network architectures. Under this trend, the pace of AI research and development will accelerate further.

On March 14th, market insiders told the Daily Economic News that the valuation of Moonshot AI, the domestic AGI (artificial general intelligence) company behind the Kimi AI assistant, has risen to $18 billion. The valuation has quadrupled in three months, and a new funding round of $1 billion is underway.

The source also revealed that in less than three months the company has completed three rounds of financing, the most consecutive funding rounds among domestic large-model companies in recent years, making it the fastest domestic unicorn to surpass a $10 billion valuation.

Source: Daily Economic News

Risk Warning and Disclaimer

The market carries risks; investments should be cautious. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions in this article are suitable for their particular circumstances. Investment is at your own risk.
