Z.ai exec tests new GLM-5.1 against company's own services
Headline
Z.ai’s Product Director Tests Whether New GLM-5.1 Model Outperforms Company’s Own Services
Summary
Zixuan Li, Z.ai’s Director of Product and genAI Strategy, tweeted that he is switching to the company’s newly released GLM-5.1 model to see whether it outperforms Z.ai’s existing services. GLM-5.1 launched on March 27, 2026. It is a refined version of the company’s 744B-parameter Mixture-of-Experts model with DeepSeek Sparse Attention. The model scored 77.8% on SWE-bench Verified, putting it close to proprietary models such as Claude Opus 4.5. Z.ai offers coding plans starting at $10/month.
Analysis
Li previously worked as an AI researcher at MIT before joining Z.ai. His tweet builds hype around GLM-5.1’s lower deployment costs and strong performance on complex engineering tasks. By framing this as a test that might “crush” their own infrastructure, he’s positioning GLM-5.1 as a serious upgrade for AI coding agents. The model works with tools like Claude Code and Cursor, which could pull users away from OpenAI and Anthropic.
Z.ai was founded in 2019 and went public in 2026. Like other Chinese AI companies, it has open-sourced its models to build developer communities outside China. The promotional tone here suggests the tweet is marketing as much as technical commentary, possibly a response to user complaints about compute availability.
Impact Assessment