OpenAI Codex Evolves Again: 40% Faster Reasoning Speed, Significantly Reduced Programming Latency
OpenAI is accelerating the upgrade of its AI programming ecosystem, attempting to regain market dominance from strong competitors like Anthropic.
OpenAI’s official developer account announced today that its core programming tool, Codex, has received a major upgrade. OpenAI has not only launched a standalone desktop application but also made breakthroughs in underlying model performance: the GPT-5.2 and GPT-5.2-Codex models now run approximately 40% faster overall, with parameter weights unchanged.
For developers and enterprise users, speed is the primary productivity factor in AI programming tools. OpenAI states that the 40% speed increase comes entirely from engineering optimizations to the inference stack.
This means all API customers can enjoy lightning-fast responses without changing any code or adjusting model architecture. In multi-agent collaboration scenarios, lower latency directly translates to a more seamless development experience, significantly reducing the wait time from logic generation to code deployment.
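As a rough illustration of what the latency reduction means in practice (the announcement does not specify exactly how “40% faster” is measured; the sketch below assumes it means 40% lower latency per model call, and all the numbers are made up for the example):

```python
# Hypothetical illustration: how a 40% per-call latency reduction
# compounds across a sequential multi-step agent pipeline.
# Baseline latency, speedup reading, and step count are all assumptions.

baseline_latency_s = 10.0   # assumed time per model call before the upgrade
speedup_fraction = 0.40     # "40% faster", read as 40% less latency per call
calls_in_pipeline = 5       # e.g. plan -> generate -> review -> fix -> deploy

new_latency_s = baseline_latency_s * (1 - speedup_fraction)
total_before = baseline_latency_s * calls_in_pipeline
total_after = new_latency_s * calls_in_pipeline

print(new_latency_s)   # 6.0 seconds per call
print(total_before)    # 50.0 seconds for the whole pipeline
print(total_after)     # 30.0 seconds for the whole pipeline
```

The pipeline-level saving scales linearly with the number of sequential calls, which is why lower per-call latency matters most in multi-agent workflows.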
From “Chat Assistant” to “Command Center”
The company’s latest release of the Codex desktop application marks a paradigm shift in OpenAI’s programming strategy. Unlike traditional conversational interfaces, Codex is designed as an “Agent Command Center.”
It lets users run multiple AI agents in parallel. Through the “Worktrees” feature, developers can assign separate agents to handle UI design, backend architecture, and bug scanning independently. Codex also integrates MCP (the Model Context Protocol), allowing AI agents to operate external software directly.
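The parallel-agent workflow described above can be sketched generically. Nothing below uses the actual Codex API; the `run_agent` function, the agent roles, and the task list are hypothetical stand-ins for however Codex dispatches work across worktrees:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for dispatching one task to one agent.
# A real Codex worktree would give each agent its own working copy
# of the repository; here we just tag each result with the agent's role.
def run_agent(role: str, task: str) -> str:
    return f"[{role}] completed: {task}"

# Independent workstreams, one per agent, as described in the article.
assignments = [
    ("ui-designer", "build the settings page layout"),
    ("backend-architect", "design the sync service schema"),
    ("bug-scanner", "audit the auth module for regressions"),
]

# Run all agents concurrently and collect their reports.
with ThreadPoolExecutor(max_workers=len(assignments)) as pool:
    reports = list(pool.map(lambda a: run_agent(*a), assignments))

for report in reports:
    print(report)
```

The point of the sketch is the shape of the workflow: independent tasks fan out to separate agents and their results are gathered once all have finished.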
OpenAI CEO Sam Altman has spoken openly about the competitive advantage that AI programmers confer.
Targeting Anthropic’s “Billion-Dollar Big Deal”
This round of intensive updates by OpenAI is highly targeted. According to Reuters, although OpenAI leads in the general large model field, it has lagged behind Anthropic in the AI programming niche.
Data shows that Anthropic’s programming tool, Claude Code, reached $1 billion in annualized revenue within just six months of opening to the public. To reclaim this high-value “piece of the pie,” OpenAI has not only significantly increased speed but also offered a limited-time benefit: free ChatGPT users and ChatGPT Go subscribers can try Codex, while rate limits for Plus and enterprise users will be doubled.
For the market, the evolution of Codex lowers the barrier to software development. As some observers say, this ushers in an era of “Vibe Coding”—even non-technical personnel can command a legion of agents through text instructions to “craft” a fully functional application within minutes.