Jensen Huang is Satoshi Nakamoto
Two tokens, the same name, the same underlying structure: compute power goes in, valuable things come out.
Author: Luo Yihang
In January 2009, an anonymous person invented something called a “token.” You invest computing power and receive tokens, which circulate, are priced, and are traded within a consensus network. This gave birth to the entire crypto economy. Over a decade later, people still debate whether these tokens have any value.
In March 2025, a man in a leather jacket redefined another kind of token. You invest computing power and produce tokens, which are consumed the instant they are made, in the act of AI inference: thinking, reasoning, coding, deciding. This is accelerating the AI economy. No one debates whether these tokens have value, because you used millions of them just this morning.
Two tokens, the same name, the same underlying structure: compute power goes in, valuable things come out.
In March 2026, I sat in the NVIDIA GTC hall listening to Jensen Huang deliver a nearly product-free keynote. Yes, he announced Vera Rubin, a hybrid CPU-GPU product. But this time he didn’t dwell on chip specs or process nodes. Instead, he presented a complete economics of token production, pricing, and consumption:
Which model corresponds to which token speed; which token speed corresponds to which price range; what hardware level is needed to support each price range.
He even provided a data center compute allocation plan for CEOs and decision-makers holding corporate budgets: 25% for free tier, 25% for mid-range, 25% for high-end, 25% for premium.
Yes, just as with Blackwell two years ago, he wasn’t really there to sell one particular GPU model. But this time he was selling something bigger. After two hours, I realized the one thing he most wanted to say was: welcome, come consume tokens, and only Nvidia’s factories can produce them.
At that moment, I realized this man is doing the exact same thing as the anonymous person who mined the first token 17 years ago.
Same transformation rules
The anonymous person under the pseudonym “Satoshi Nakamoto” wrote a nine-page white paper in 2008, designing a set of rules: invest computing power, complete a mathematical proof (Proof of Work), and earn crypto tokens as rewards.
The brilliance of this rule is that it requires trusting no one: accept the rules and you automatically become a participant in this economy. The rule holds precisely because it binds together many people who do not trust one another.
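If you want to see the rule rather than just read about it, a toy sketch is enough. The one below, in Python, is only an illustration of the idea: the block contents and the difficulty are made-up placeholders, not Bitcoin’s real parameters. Keep hashing until the result falls below a target; whoever gets there first earns the reward.

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash falls below the difficulty target.

    Toy version of Nakamoto's rule: whoever burns the compute to find a
    valid nonce first earns the block reward (newly minted tokens).
    """
    target = 2 ** (256 - difficulty_bits)  # smaller target = more work required
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                   # proof found: claim the reward
        nonce += 1                         # otherwise spend more electricity and retry

# A 20-bit toy difficulty takes about a million hashes on average.
print("valid nonce:", proof_of_work(b"previous-hash|transactions", difficulty_bits=20))
```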
And Jensen Huang, on the GTC 2026 stage, did the same thing structurally.
He showed a diagram illustrating the relationship and tension between inference efficiency and token consumption: the Y-axis is throughput (how many tokens are produced per megawatt of power), the X-axis is interactivity (perceived token speed per user). Below the X-axis, he marked five pricing tiers: Free with Qwen 3, $0 per million tokens; Medium with Kimi K2.5, $3 per million tokens; High with GPT MoE, $6 per million tokens; Premium with GPT MoE 400K context, $45 per million tokens; Ultra at $150 per million tokens.
This diagram could almost serve as the cover of Huang’s “Token Economics” white paper.
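To see how literal the stratification is, here is a minimal sketch of that pricing chart in Python. The tiers and per-million prices are the ones from Huang’s slide as described above; the monthly consumption figure is my own assumed placeholder, there only to show how the bill scales by tier.

```python
# Five pricing tiers from the keynote chart (USD per million tokens).
# The monthly usage below is an assumed figure for illustration only.
TIERS = {
    "Free    (Qwen 3)":              0,
    "Medium  (Kimi K2.5)":           3,
    "High    (GPT MoE)":             6,
    "Premium (GPT MoE 400K ctx)":   45,
    "Ultra":                       150,
}

monthly_tokens = 200_000_000  # assumption: 200M tokens consumed per month

for tier, usd_per_million in TIERS.items():
    cost = monthly_tokens / 1_000_000 * usd_per_million
    print(f"{tier:<30} ${cost:>8,.0f} / month")
```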
Satoshi Nakamoto defined “what counts as valuable computation”: finding a SHA-256 hash below the difficulty target is valuable. Jensen Huang defined “what counts as valuable inference”: producing tokens at a specific speed, under a given power constraint, for a specific scenario is valuable.
Neither Nakamoto nor Huang directly produces tokens; they define the rules and mechanisms by which tokens are produced and priced.
A sentence Huang said on stage could almost be directly included in the abstract of the white paper on token economics:
“Tokens are the new commodity, and like all commodities, once they reach an inflection point, once they mature, they will segment into different parts.”
Tokens are the new commodity, and once a commodity matures, it naturally stratifies. He isn’t describing the present; he’s predicting a market structure and aligning his hardware product lines precisely with each layer of it.
The production processes of the two tokens even share a semantic symmetry: crypto tokens are mined, AI tokens are inferred.
Mining and inference are, at bottom, the same business: turning electricity into money. Miners burn electricity to mint crypto tokens and sell them; AI models and agents burn electricity to generate AI tokens and sell them by the million. The steps in between differ, but the two ends are identical: an electricity meter on one side, revenue on the other.
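The symmetry is tight enough to fit in one function. The sketch below only illustrates that structure; every number in it (coin output, coin price, megawatt-hours, power price, token volume) is an assumed placeholder, not data from either industry.

```python
# One ledger, two businesses: electricity in, sellable tokens out.
# All figures are assumed placeholders, chosen only to show the shared shape.
def daily_margin(tokens_out: float, price_per_token: float,
                 mwh_in: float, usd_per_mwh: float) -> float:
    """Revenue from tokens sold minus the electricity bill."""
    return tokens_out * price_per_token - mwh_in * usd_per_mwh

miner     = daily_margin(tokens_out=0.05, price_per_token=60_000, mwh_in=30,  usd_per_mwh=50)
inference = daily_margin(tokens_out=2e9,  price_per_token=6/1e6,  mwh_in=200, usd_per_mwh=50)
print(f"miner:     ${miner:,.0f} / day")      # 0.05 BTC mined and sold per day
print(f"inference: ${inference:,.0f} / day")  # 2B tokens served at $6 per million
```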
Two ways to define scarcity
The most important design decision Nakamoto made wasn’t Proof of Work; it was the cap of 21 million bitcoins. He created artificial scarcity through code—no matter how many miners flood in, the total bitcoin supply will never exceed 21 million. This scarcity anchors the entire crypto economy’s value.
Huang, on the other hand, creates natural scarcity through physical laws. He says:
“You still have to build a gigawatt data center. You still have to build a gigawatt factory, and that one gigawatt factory for 15 years amortized… is about $40 billion even when you put nothing on it. It’s $40 billion. You better make sure you put the best computer system on that thing so you can have the best token cost.”
A 1GW data center will never become 2GW. This isn’t a code limit; it’s a physical law.
Land, electricity, cooling—each has a physical limit. How many tokens a factory produces over 15 years depends entirely on what computing architecture you put inside.
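Huang’s quote gives only two hard numbers, about $40 billion and 15 years, so the arithmetic below fills in the rest with assumed placeholders (electricity price, utilization, and above all tokens per second per megawatt, which is exactly the number the choice of architecture decides). It is a sketch of the “token cost” logic, not Nvidia’s actual math.

```python
# Rough cost-per-token arithmetic for a 1 GW "token factory".
# $40B over 15 years comes from Huang's quote; every other figure is an
# assumed placeholder, and the throughput line is the one the architecture sets.
CAPEX_USD             = 40e9       # facility amortized over its lifetime
LIFETIME_YEARS        = 15
POWER_MW              = 1_000      # one gigawatt
ELEC_USD_PER_MWH      = 60         # assumption
UTILIZATION           = 0.9        # assumption
TOKENS_PER_SEC_PER_MW = 100_000    # assumption: depends entirely on the architecture

HOURS_PER_YEAR   = 24 * 365
annual_capex     = CAPEX_USD / LIFETIME_YEARS
annual_power_usd = POWER_MW * HOURS_PER_YEAR * UTILIZATION * ELEC_USD_PER_MWH
annual_tokens    = TOKENS_PER_SEC_PER_MW * POWER_MW * UTILIZATION * HOURS_PER_YEAR * 3600

usd_per_million = (annual_capex + annual_power_usd) / (annual_tokens / 1e6)
print(f"production cost: ~${usd_per_million:.2f} per million tokens")
```

Double the throughput and the production cost per token halves; that is the whole argument for putting “the best computer system on that thing.”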
Nakamoto’s scarcity can be forked. If you don’t like the 21 million cap, fork a new chain, change it to 200 million, call it Ether or whatever, and write a new white paper. People have indeed done this, happily.
Huang’s created scarcity cannot be forked. You can’t fork the second law of thermodynamics, the capacity of a city’s power grid, or the physical area of land.
But whether Nakamoto or Huang, their scarcity creation leads to the same result: a hardware arms race.
The history of mining: CPU → GPU → FPGA → ASIC. Each generation of dedicated hardware renders the previous one obsolete. The history of AI training and inference is repeating it: Hopper → Blackwell → Vera Rubin → Groq LPU. General-purpose hardware opens the game; specialized hardware settles it. The Groq LPU that Huang showed at this year’s GTC, after Nvidia acquired Groq, is a deterministic dataflow processor: static compilation, compiler-driven scheduling with no dynamic scheduling, 500MB of on-chip SRAM. Architecturally, it is an ASIC for inference: it does one thing, and does it to the extreme.
Interestingly, GPUs played a key role in both waves.
Before 2013, miners discovered that GPUs were better suited than CPUs to crypto mining, and Nvidia graphics cards sold out. Ten years later, researchers found that GPUs were the best tools for training AI models and running inference, and Nvidia’s data center cards sold out again. As a class of processor, the GPU has served two generations of the token economy.
The difference is that the first time, Nvidia was a passive beneficiary, and that was the end of it. The second time, as AI compute shifted from pretraining to inference, Nvidia seized the moment, designed the entire game, and became the rule-maker for AI.
The world’s most profitable pickaxe
In the gold rush, the biggest winners weren’t the miners but the sellers of pickaxes, like Levi Strauss. In the crypto-mining boom, the biggest winners weren’t the miners but the sellers of mining rigs: Bitmain and Wu Jihan. In the AI pretraining and inference wave, the biggest winner isn’t the foundation models or the agents but Nvidia, which sells the GPUs.
But honestly, the roles of Bitmain and Nvidia in their respective industries are no longer comparable.
Bitmain only sells mining machines; Nvidia was once merely one of Bitmain’s suppliers. When you buy a rig, which coin you mine, which pool you join, and at what price you sell have nothing to do with Bitmain. It is a pure hardware supplier, earning a one-time margin on each device.
Nvidia is different. It doesn’t just sell hardware. Since the AI inference boom of 2025, it has been defining, in depth, what its GPUs should “mine,” how tokens are priced, who the tokens are sold to, and how compute is allocated inside data centers. All of it is in Huang’s slides: he divides the market into five tiers, each mapped to specific models, context lengths, interaction speeds, and prices. Nvidia has standardized and formatted the future AI-inference market.
Around 2018, global hash power was concentrated in a few large mining pools (F2Pool, Antpool, BTC.com). They competed with one another for hash-rate share, but the hardware they ran came overwhelmingly from one source: Bitmain.
It looks just like today’s Nvidia, which earns 60% of its revenue from competing hyperscalers (AWS, Azure, GCP, Oracle, CoreWeave) and 40% from a dispersed long tail of AI-native companies, sovereign AI projects, and enterprise clients. The big “mining pools” contribute most of the revenue, while the smaller “miners” provide resilience and diversification.
The structure of the two ecosystems is identical. But Bitmain later saw competitors such as Shenma (Whatsminer) and Canaan eat into its share. A mining rig is a relatively simple ASIC design, which gave challengers an opening. Shaking Nvidia looks far harder: twenty years of the CUDA ecosystem, hundreds of millions of installed GPUs, sixth-generation NVLink interconnect, and the decoupled inference architecture after the Groq integration. That technical complexity and ecosystem barrier leave most challengers without an effective line of attack.
This may last another 20 years.
The fundamental fork of the two tokens
What makes cryptocurrency and AI tokens fundamentally different is the motivation and psychology behind their use.
Crypto tokens are driven by speculation. No one “needs” Bitcoin to get work done. Every white paper claiming that a blockchain token solves a real problem is a scam. People hold crypto because they believe someone will buy it from them at a higher price later. Bitcoin’s value comes from a self-fulfilling prophecy: if enough people believe it’s valuable, it is. This is a faith economy.
AI tokens, on the other hand, are driven by productivity. Nestlé needs tokens to make supply chain decisions: its supply chain data refreshes every 15 minutes instead of 3, cutting costs by 83%. That value maps directly onto the P&L. Nvidia’s own engineers now spend tokens to write code instead of writing it by hand; research teams spend tokens to do science. You don’t need to believe tokens are valuable; you just use them, and the value shows up in the usage.
This is the core difference between the two tokens. Crypto tokens are produced to be held and traded—their value lies in not using them. AI tokens are produced to be immediately consumed—their value lies in their use.
One is digital gold, appreciating as it’s stored; the other is digital electricity, burned upon production.
This difference means the AI token economy won’t inflate into bubbles the way the crypto economy did. Bitcoin’s wild swings are driven by speculation; AI token prices are driven by usage and production cost. As long as AI stays useful (people still code with Claude Code, generate reports with ChatGPT, run business workflows with agents), demand for tokens won’t collapse. It doesn’t rest on faith; it rests on indispensability.
In 2008, the Bitcoin white paper had to repeatedly justify why a decentralized electronic cash system was valuable. Seventeen years later, people still debate it.
In 2026, token economics no longer provokes debate; it has become a consensus that needs no proof. When Huang said at GTC that “tokens are the new commodity,” no one pushed back, because everyone in the audience had burned millions of tokens with Claude Code or ChatGPT that very morning. They don’t need to be convinced of a token’s value; their credit card statements already prove it.
In this sense, Huang really is a copy of Satoshi Nakamoto: one who monopolizes the production of mining hardware, defines the tokens’ use cases and standards, and every year puts on a show at the SAP Center in San Jose to unveil the next generation of “mining machines” for AI training and inference.
Satoshi Nakamoto has a restrained, romantic allure: design the rules, hand them to the code, then disappear. That is the cyberpunk ideal. Huang is less a scientist than a businessman: he designs the rules, maintains them personally, refines them constantly, and builds his moat.
The token you once had to trust in order to believe in, you can now watch at work without trusting anyone. It is the next unit after the watt, the ampere, and the bitcoin.