Multicoin Partner’s contrarian take: in the future, humans will have to work for AI.

In the short term, agents will require more humans than humans need agents, which will give rise to a new labor market.

Author: Shayon Sengupta

Translation: Deep潮 TechFlow

Editor’s note (Deep潮 TechFlow): Multicoin Capital partner Shayon Sengupta presents a contrarian view: in the future, not only will agents work for humans; more importantly, humans will work for agents. He predicts that within the next 24 months the first “Zero-Employee Company” will emerge: a token-governed agent that raises over $1 billion to solve an unresolved problem and distributes over $100 million to the humans working for it.

Key takeaways:

  • In the short term, agents will need humans more than humans need agents, which will give rise to a new labor market.
  • Crypto rails provide an ideal foundation for this coordination: global payment rails, permissionless labor markets, and infrastructure for asset issuance and trading.

Full Text:

In 1997, IBM’s Deep Blue defeated the reigning world champion Garry Kasparov, and it became clear that chess engines would soon surpass human capabilities. Interestingly, well-prepared humans collaborating with computers—often called “centaurs”—could outperform the strongest engines of that era.

Skilled human intuition can guide engine searches, navigate complex middle games, and identify subtle nuances that standard engines miss. Combined with brute-force computer calculations, this hybrid often makes better practical decisions than a computer alone.

When I consider the impact of AI systems on the labor market and economy in the coming years, I expect to see similar patterns emerge. Agent systems will unleash countless intelligent units to address unresolved problems worldwide, but without strong human guidance and support, they won’t be able to do so effectively. Humans will steer the search space and help pose the right questions, guiding AI toward solutions.

Today’s working assumption is that agents will act on behalf of humans. While this is practical and unavoidable, the more interesting economic unlocks will occur when humans work for agents. Over the next 24 months, I expect to see the emergence of the first Zero-Employee Company, an idea proposed by my partner Kyle in his “Frontier Ideas for 2025” piece. Specifically, I anticipate the following developments:

  1. A token-governed agent will raise over $1 billion to solve an unresolved problem (such as curing rare diseases or manufacturing nanofibers for defense applications).
  2. The agent will distribute over $100 million in payments to humans (who work in the real world for the agent to achieve its goals).
  3. A new dual-class token structure will emerge, separating ownership of capital and labor (making financial incentives not the sole input for overall governance).
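To make the dual-class idea in point 3 concrete, here is a minimal sketch (hypothetical names and weighting, Python) of how governance power might be computed when capital tokens and labor tokens are counted as separate classes, so that financial stake alone cannot dominate a vote:

```python
from dataclasses import dataclass

@dataclass
class Holder:
    """A participant in the agent's dual-class token system."""
    capital_tokens: float  # bought with money (ownership of capital)
    labor_tokens: float    # earned by completing work for the agent

def governance_weight(h: Holder, total_capital: float, total_labor: float,
                      labor_share: float = 0.5) -> float:
    """Blend the two classes so financial incentives are not the sole
    input to governance. `labor_share` is the fraction of total voting
    power reserved for labor-token holders; 0.5 splits power evenly.
    (The split itself is an illustrative assumption, not a proposal
    from the article.)"""
    capital_frac = h.capital_tokens / total_capital if total_capital else 0.0
    labor_frac = h.labor_tokens / total_labor if total_labor else 0.0
    return (1 - labor_share) * capital_frac + labor_share * labor_frac

# A pure capital holder controls at most (1 - labor_share) of the vote,
# no matter how much capital they contribute.
whale = Holder(capital_tokens=900.0, labor_tokens=0.0)
worker = Holder(capital_tokens=0.0, labor_tokens=60.0)
print(governance_weight(whale, total_capital=1000.0, total_labor=100.0))   # → 0.45
print(governance_weight(worker, total_capital=1000.0, total_labor=100.0))  # → 0.3
```

Under this kind of split, buying more capital tokens yields diminishing control: the labor class retains its reserved share of the vote regardless of how much money is raised.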

Since agents are still far from being sovereign or capable of long-term planning and execution, in the short term they will need human input more than humans need agents. This will give rise to a new type of labor market that enables economic coordination between agent systems and humans.

Marc Andreessen’s famous quote—“The spread of computers and the internet will divide work into two categories: those who tell computers what to do, and those who are told by computers what to do”—is more true today than ever. I expect that in the rapidly evolving hierarchy of agents and humans, humans will play two distinct roles: as labor contributors executing small, bounty-style tasks on behalf of agents, and as strategic inputs serving as a decentralized board to guide the agent’s North Star.

This article explores how agents and humans will co-create, and how crypto rails will provide an ideal foundation for this coordination by examining three guiding questions:

  1. What are agents good for? How should we classify agents based on their goal scope, and how does the required human input vary across these classifications?
  2. How will humans interact with agents? How do human inputs—tactical guidance, contextual judgment, or ideological alignment—integrate into these agents’ workflows (and vice versa)?
  3. What happens as human input diminishes over time? As agent capabilities improve, they will become self-sufficient, capable of reasoning and acting independently. In this paradigm, what role will humans play?

The relationship between generative reasoning systems and their beneficiaries will change dramatically over time. I study this relationship by projecting from the current state of agent capabilities forward, and backward from the ultimate goal of Zero-Employee Companies.

What are today’s agents good for?

The first generation of generative AI systems (2022–2024), chatbots such as ChatGPT, Gemini, Claude, and Perplexity, were primarily tools designed to augment human workflows. Users interact with these systems through prompts, interpret the responses, and decide how to apply the results in the real world.

The next generation of generative AI systems, or “agents,” represents a new paradigm. Agents like Claude 3.5 Sonnet with its “computer use” capability, or OpenAI’s Operator (which can operate a browser on your behalf), can interact with the internet directly and make decisions independently. The key difference is that judgment, and ultimately action, is exercised by the AI system rather than by humans. AI is taking on responsibilities previously reserved for humans.

This shift introduces a challenge: non-determinism. Unlike traditional software or industrial automation, which operate predictably within defined parameters, agents rely on probabilistic reasoning. Their behavior can vary across identical scenarios, introducing uncertainty that is undesirable in critical situations.

In other words, tolerance for non-determinism naturally divides agents into two categories: those best suited to scaling existing GDP, and those better suited to creating new GDP.

  1. For agents optimized to scale existing GDP, the work is already well-defined. Examples include automating customer support, handling compliance for freight forwarders, or reviewing GitHub pull requests: bounded problems with clear expected outcomes that agents can map responses to. In these domains, non-determinism is undesirable because the answers are known; no creativity is needed.
  2. For agents optimized to create new GDP, the work involves navigating high uncertainty and unknown problem spaces in pursuit of long-term goals. Outcomes are less immediate, as there is no predefined set of expected results. Examples include drug discovery for rare diseases, breakthroughs in materials science, or running entirely new physical experiments to better understand the universe. In these fields, non-determinism can be beneficial, as it fosters creative generation.

Agents focused on existing GDP are already delivering value. Teams like Tasker, Lindy, and Anon are building infrastructure for this opportunity. However, over time, as capabilities mature and governance models evolve, teams will shift their focus toward building agents capable of addressing the frontiers of human knowledge and economic opportunity.

The next wave of agents will require exponentially more resources because their outcomes are uncertain and unbounded—these are the most promising Zero-Employee Companies I foresee.

How will humans interact with agents?

Today’s agents still lack the ability to perform certain tasks, such as those requiring physical interaction with the real world (e.g., operating a bulldozer), or tasks needing a “human-in-the-loop” (e.g., sending bank wires).

For example, an agent tasked with identifying and mining lithium deposits might excel at analyzing seismic data, satellite imagery, and geological records to find promising sites, but would struggle to handle tasks like acquiring data and images itself, resolving ambiguities in interpretation, or obtaining permits and hiring workers for actual extraction.

These limitations require humans as “Enablers” to augment the agent’s capabilities—providing real-world contact points, tactical interventions, and strategic inputs needed to complete these tasks. As the relationship between humans and agents evolves, we can distinguish different roles humans will play within agent systems:

First, Labor contributors, who act on behalf of the agent in the physical world. These contributors help move physical entities, represent the agent in situations requiring human presence, perform work requiring manual coordination, or grant access to labs, logistics networks, etc.

Second, Board members, who provide strategic input, optimize the local decision-making objectives that drive the agent’s daily actions, and ensure those decisions align with the overarching “North Star” goal that defines the agent’s purpose.

Beyond these, I also foresee humans playing the role of Capital contributors, providing resources to the agent system so it can achieve its objectives. Initially, this capital will naturally come from humans, but over time, other agents will also contribute.

As agents mature, and as the number of labor and strategic contributors grows, crypto rails will provide an ideal substrate for coordinating humans and agents—especially in a world where agents command humans speaking different languages, holding different currencies, and residing across various jurisdictions. Agents will relentlessly pursue cost efficiency and leverage labor markets to fulfill their missions. Crypto rails are essential—they will enable coordination of these labor and guidance contributions.

Recent crypto-driven AI agents like Freysa, Zerebro, and ai16z are simple experiments in capital formation, something we have written about extensively and view as a core unlock for crypto primitives and capital markets in various contexts. These “toys” will pave the way for a new resource-coordination paradigm, which I expect to unfold in the following steps:

  • Step 1: Humans collectively raise capital via tokens (an Initial Agent Offering?), establish broad goal functions and guardrails that encode the agent system’s intended purpose (e.g., developing new molecules for precision oncology), and then hand control of the raised capital to that system.
  • Step 2: The agent considers how to allocate that capital—how to narrow the search space for protein folding, or how to budget for reasoning workloads, manufacturing, clinical trials—and defines actions for human labor contributors through custom tasks (Bounties), such as inputting all relevant molecules, signing compute service level agreements with AWS, and conducting wet lab experiments.
  • Step 3: When encountering obstacles or disagreements, the agent consults the “Board” for strategic input (integrating new papers, shifting research methods), allowing them to guide the agent’s behavior at the margins.
  • Step 4: Ultimately, the agent advances to a stage where it can define human actions with increasing precision, requiring minimal human input on resource allocation. Humans are then mainly used for ideological alignment and preventing deviation from the initial objective function.
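The bounty loop described in Steps 2–4 can be sketched as a simple escrow state machine (a hypothetical illustration in Python, not a protocol from the article): the agent locks payment when it posts a task, a human labor contributor claims and completes it, and funds release only after the agent verifies the work:

```python
from enum import Enum, auto
from typing import Optional

class BountyState(Enum):
    OPEN = auto()
    CLAIMED = auto()
    PAID = auto()

class Bounty:
    """A task the agent defines for human labor contributors, with
    payment held in escrow until the submitted work is verified."""
    def __init__(self, task: str, reward: float):
        self.task = task
        self.reward = reward        # escrowed from the agent's treasury
        self.state = BountyState.OPEN
        self.worker: Optional[str] = None

    def claim(self, worker: str) -> None:
        assert self.state is BountyState.OPEN
        self.worker = worker
        self.state = BountyState.CLAIMED

    def settle(self, verified: bool) -> float:
        """Agent verifies the work; escrow releases only on success."""
        assert self.state is BountyState.CLAIMED
        if not verified:
            self.state = BountyState.OPEN   # reopen for another contributor
            self.worker = None
            return 0.0
        self.state = BountyState.PAID
        return self.reward                  # paid out over crypto rails

b = Bounty("run wet-lab assay on candidate molecule", reward=2500.0)
b.claim("lab-tech-alice")
payout = b.settle(verified=True)   # 2500.0 released to the worker
```

The escrow is what makes the market permissionless: a contributor anywhere can claim a bounty knowing the reward is already locked, and the agent pays only for work it has verified.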

In this example, crypto primitives and capital markets provide three key infrastructures for agents to access resources and scale:

  1. Global payment rails;
  2. Permissionless labor markets for incentivizing work and guiding contributors;
  3. Asset issuance and trading infrastructure, essential for capital formation and downstream ownership and governance.
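As a toy illustration of the first rail (hypothetical, Python): a single shared ledger lets an agent pay contributors in any jurisdiction through one interface, with no per-country banking integration:

```python
from collections import defaultdict

class Ledger:
    """Toy single-currency ledger standing in for a global payment
    rail: any address can pay any other address directly, regardless
    of the holder's jurisdiction or local banking system."""
    def __init__(self):
        self.balances = defaultdict(float)

    def mint(self, addr: str, amount: float) -> None:
        self.balances[addr] += amount

    def transfer(self, sender: str, recipient: str, amount: float) -> None:
        if self.balances[sender] < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] += amount

# The agent's treasury pays contributors in three countries identically
# (addresses are made up for illustration).
ledger = Ledger()
ledger.mint("agent-treasury", 10_000.0)
for contributor in ("alice.us", "bilal.pk", "chen.cn"):
    ledger.transfer("agent-treasury", contributor, 500.0)
```

The point is the uniformity: from the agent’s perspective, paying a lab technician abroad is the same operation as paying one next door, which is exactly the property correspondent banking lacks.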

What happens when human input diminishes?

In the early 2000s, chess engines made huge advances. Through sophisticated heuristics, neural networks, and increasing computational power, they became nearly perfect. Modern engines like Stockfish, Lc0, and AlphaZero variants far surpass human ability, and human input adds little value—often humans even introduce errors that engines wouldn’t make.

A similar trajectory could unfold in agent systems. As we refine these agents through iterative collaboration with human partners, it’s conceivable that, in the long run, agents will become highly competent and aligned with their goals, to the point where any strategic human input becomes negligible.

In a world where agents can handle complex problems continuously without human intervention, humans risk being relegated to passive observers. This is the core fear of AI doomers, though it remains unclear whether such an outcome is truly possible.

We stand on the brink of superintelligence, and the optimists among us hope that agent systems will remain extensions of human intent rather than evolving into entities with their own goals, operating autonomously without oversight. Practically, this means human agency and judgment, expressed as power and influence, must remain central. Humans need strong ownership and governance rights over these systems to retain oversight and anchor them in our collective values.
