AI's Infinite Economy
Exploring the rise of a new class of economic participant, why the next economy will not belong to better copilots, and why this future economy's most important layer will still be human.
This is a guest post by Kristian Andersen. Andersen is a designer, venture capitalist, serial entrepreneur, and man of faith. He recently wrote this essay on the future of AI. It’s long but very good, and should help you think about what the future of AI could look like. While acknowledging the major disruption AI is likely to cause, he also sees the hopeful possibility that AI will enable us to adopt a better definition of human worth, one closer to the Imago Dei concept than “you are what you produce.” - Aaron.
A quick disclaimer before any of this. What follows is my attempt to grapple with the implications of the rise of autonomous agents and what comes downstream of it. I am not weighing in on whether this is a good thing or a bad thing. I am laying out what I believe is inevitable in some form, across some period of time. My faith informs my perspective on what is true and good, and it sits at the heart of my desire to help shape, in some small way, the redemptive opportunities that will emerge as the future continues to come into focus. Writing is how I think and admittedly, this is a work in progress.
Vanishing Constraints
For three centuries, capitalism has revolved around a finite premise: economic activity is constrained by human participation. Every buyer, seller, employee, founder, and investor is, at its core, a person. Even our most transformative invention — the corporation — is a legal fiction built to scale human effort and attention beyond individual limits. It allowed us to coordinate capital, own assets, and transact across time and geography, but it did not fundamentally transcend the human boundary.
That constraint is going to vanish.
We are now on the cusp of a profound shift: the emergence of a new class of economic participant — the autonomous agent. These non-human actors will work, transact, compete, and even build businesses on their own behalf. They will hire each other, negotiate contracts, deploy capital, and form entire supply chains with little or no human initiation. In doing so, they will shatter the bottlenecks of labor, attention, and cognition that have historically capped economic expansion.
Where the industrial revolution mechanized muscle, and the internet age dramatically expanded markets by connecting billions of economic participants, the agentic revolution will multiply participation itself. The result is an “Infinite Economy”, a parallel economic system where the number of actors is limited not by birth rates or labor force participation, but by energy and compute.
The Wrong Question
There is a number that should keep every investor interested in AI up at night.
78% of companies have adopted generative AI. Only 39% have seen measurable impact. That is a 39-point chasm between adoption and value, and it is the widest for any enterprise technology wave in memory.
The consensus read is that we are early. That the tooling needs to mature. That enterprises need better implementation playbooks. That the ROI is coming. That the future is here, it is just not evenly distributed. I think the consensus is responding to the wrong question.
The reason the productivity story is stalling is not that AI tools are not good enough. It is that the entire framing of AI as a productivity tool for humans is wrong.
A copilot makes a knowledge worker 30% faster. An automation tool handles a customer support queue. A drafting assistant generates summaries and slide decks. All of that is useful and the value is real. But every one of these use cases is bottlenecked by the same thing that has bottlenecked every economy since the invention of agriculture: the number of humans who show up to participate in the work itself.
You can make each worker more productive. You cannot make more workers. At least not quickly or at scale. The global labor force is roughly 3.5 billion people. That number grows slowly, faces demographic headwinds in every developed economy, and cannot expand fast enough to sustain the growth trajectory that AI-adjacent equities are pricing in.
The copilot thesis improves the numerator. It ignores the denominator. And the denominator has been essentially fixed for the entire history of capitalism. What if it did not have to be? What if the next wave of AI is not about making existing participants more productive, but about creating entirely new economic participants?
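The numerator/denominator point can be made concrete with a toy back-of-envelope model. The 3.5 billion labor force figure comes from the essay; the agent count and productivity values below are pure assumptions for illustration, not forecasts:

```python
# Toy model: aggregate output as participants x per-participant productivity.
# A copilot improves the numerator (productivity); new participants change
# the denominator's ceiling entirely. Illustrative numbers only.

def total_output(participants: float, productivity_per_participant: float) -> float:
    """Aggregate output as participants times per-participant productivity."""
    return participants * productivity_per_participant

baseline = total_output(3.5e9, 1.0)       # ~3.5B human workers, unit productivity

# Copilot thesis: each worker 30% more productive, same worker count.
copilot = total_output(3.5e9, 1.3)        # ~1.3x baseline, capped by headcount

# Participation thesis: add agents as new participants (hypothetical count),
# even assuming no productivity edge per agent.
agents = total_output(3.5e9 + 10e9, 1.0)  # ~3.86x baseline, scales with compute
```

The copilot path is bounded at a one-time multiplier on a fixed population; the participation path has no such bound, which is the essay's core claim.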
Participants, Not Tools
Most people think of AI agents as a better kind of software. I think they are a new kind of economic actor. That is not a metaphor. It is a structural claim, and the distinction matters enormously.
A tool executes tasks within a human workflow. Someone tells it what to do, it does the thing, a human reviews the output. That is the copilot model, and as I laid out, it has a ceiling.
A participant is something else. A participant initiates action. It holds state. It deploys its own resources. It optimizes for outcomes with no human in the loop. When an agent chains reasoning across multiple steps, negotiates terms with another agent, executes a transaction from its own wallet, and reinvests the proceeds into its next operation, it has crossed a line. It is no longer a feature inside a SaaS [software as a service] product. It is an actor exerting agency in and on the economy.
This is not theoretical. It is early and it feels almost fringe, but trading agents are running arbitrage strategies with their own capital pools. E-commerce agents are finding products, creating ads, and optimizing for profit autonomously. Coordination protocols are letting agents discover, negotiate with, and hire each other. None of these are copilots. They are operating on their own behalf, with their own resources, toward their own objectives. Today those objectives are largely overseen by carbon-based lifeforms, but the move toward sovereignty is not hard to see.
Participants need things that tools have never needed. They need identity, so they can be verified and held accountable. They need financial rails: wallets, payment rails, treasury management. They need legal standing. They need reputation systems, marketplaces, governance frameworks. And underneath all of that, they need something tools have historically never needed: someone accountable for what they do. The question of who governs these new participants, and toward what ends, may turn out to be the most important question of all.
Almost none of that infrastructure exists at scale today. Almost none of it is being funded by mainstream investors, who remain anchored in the copilot and workflow paradigm. That gap is where the Infinite Economy lives.
We Did This Before
If a non-human economic participant sounds like science fiction, I would remind you: we already invented one. It is called the corporation.
Before the 17th century, economic activity was bounded by what a person or family could manage. The corporation changed that. Not by making individuals more productive, but by creating a new type of entity. One that could own property, enter contracts, bear liability, and persist beyond any single lifetime.
The corporation was a foundational innovation. Its significance was the introduction of a non-human economic participant, with synthetic personhood and economic gravity. The entire institutional infrastructure of capitalism was built to support it. Courts. Banks. Regulators. Accountants. Exchanges. All because a non-human entity needed governing.
Here is what matters for where we are headed. The corporation did not just create wealth. It created entirely new categories of human work. Lawyers, bankers, auditors, regulators, exchange operators. Whole professional classes that did not exist before, because someone had to build the institutions around the new participant. The most durable careers of the last three centuries have not been inside the corporation. They have been in the institutional layer that enables it.
The corporation also created a tension we have yet to resolve. It taught us to measure human worth by economic output. Your value became your productivity. Your identity became your title. That was always a distortion of a person’s true value and dignity. Anyone who has been laid off, or watched a parent lose a job, has witnessed the damage of that equation. Our worth was never our output. But the economy made it hard to believe otherwise.
The autonomous agent is the next version of the corporation. And it may be what breaks that false equation. Like the corporation, it requires new infrastructure: identity, financial rails, legal wrappers, governance, reputation. Unlike the corporation, it will mature in years, not centuries. The substrate already exists. And unlike the corporation, agents may let us untangle something the corporate era never could. At least in part, separating human worth from economic output. I will come back to that.
If the pattern holds, which is my hunch, agents will not eliminate human work. They will enable new kinds of it. People who design agent identity systems, build trust frameworks, craft governance policy, and architect the rules of agent commerce. We will likely see entirely new classes of work emerge that have yet to be imagined.
The next generation’s opportunity is not competing with agents. It is designing systems that ensure this economy serves human flourishing. That is not a lesser role. It is a higher one.
Humans Eat Corn, Agents Eat Electrons
This is where the thesis gets macro, and where I think the most original move sits.
GDP, at its core, is a story about participation. Every major jump in economic output has come from expanding who gets to participate. The agricultural revolution freed humans from subsistence and enabled specialization. Industrialization pulled millions of people into factories. Women entering the workforce roughly doubled the productive population in advanced economies over a generation. The internet connected billions of buyers and sellers across geography. Each wave was a participation expansion before it was a productivity expansion.
Each wave also changed what humans were for. Agriculture gave us artisans and thinkers. Industrialization eventually created the knowledge economy. The internet enabled new forms of creative and entrepreneurial expression. Every time machines took over one kind of work, humans moved up, into work requiring more judgment, creativity, and the things that are distinctly human.
The Infinite Economy introduces an entirely new participant class that takes this logic one step further. Agents will work, spend, and transact, not as tools, but as economic actors. The labor pool no longer stops at the edge of humanity. It scales with compute and energy, not population.
That changes what growth even means. Output will no longer track productivity per person. It will track total participation across humans and agents. As the cost to spin up and sustain an agent approaches zero, participation becomes effectively infinite.
Surplus compute and cheap energy become the new levers of GDP. Just as surplus food once fueled human population growth, surplus electricity will fuel agentic participation. Humans eat corn. Agents eat electrons. The economy grows not by adding people, but by multiplying participants.
For an investor who allocates based on macro trends, this reframing should change the portfolio. The infrastructure plays that benefit from this shift are not the obvious AI names. They are the companies building the identity, financial, legal, and governance rails that agent-native commerce requires to function. They are the picks and shovels for an economy that does not yet exist, but will.
Not All Agents Are Equal
The single biggest mistake in agentic AI investing right now is treating all agents as the same thing.
A Zapier automation and a fully autonomous trading bot are not in the same category. They do not need the same infrastructure. They do not create the same opportunities. They do not represent the same investment thesis.
Here is the taxonomy I use. Two axes. How much autonomy does the agent have, from delegated to sovereign? And how broad is its scope, from specific to generalist? That gives four quadrants.
The first is the Specialist Tool. Narrow, task-specific, fully under human control. Price scrapers. Report generators. Automated data pipelines. Useful, proliferating rapidly, but commoditizing quickly. This is the robotic process automation (RPA) of the agentic era. Necessary plumbing, but not where durable value accrues.
The second is the Copilot. The dominant form factor today. GitHub Copilot, Salesforce Einstein, Microsoft 365 Copilot. A massive market, but the ceiling is the productivity story. This is the incumbent AI thesis, and it is well funded and well understood.
The third is the Autonomous Hustler. This is where things get interesting. These agents operate independently, with their own resources, to maximize a single economic goal. An e-commerce agent that finds products, creates ads, and optimizes for profit on a platform. A trading agent running a specific arbitrage strategy with its own capital. A drone that contracts with farmers for pest detection and buys its own spare parts. These are the first true economic participants. And they are the first entities that desperately need agent-native infrastructure: wallets, identity, reputation, the ability to contract with other agents.
The fourth is the Autonomous Corporation. The endgame. Fully independent entities that manage diverse operations, allocate capital, set long-term strategy, and hire other agents. An AI-run investment fund. A content studio with no human employees. A distributed manufacturing network of autonomous nodes coordinating through agent marketplaces. This is the furthest frontier and the most speculative, but also potentially the largest. If agents can create value autonomously, the addressable market is bounded only by energy and compute.
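The two axes and four quadrants above can be sketched as a small classifier. This is purely illustrative; the enum names and quadrant mapping are my reading of the essay's taxonomy, not a formal standard:

```python
# Illustrative sketch of the two-axis taxonomy: autonomy (delegated ->
# sovereign) crossed with scope (specific -> generalist) yields four quadrants.
from enum import Enum

class Autonomy(Enum):
    DELEGATED = "delegated"    # human initiates and reviews
    SOVEREIGN = "sovereign"    # agent acts with its own resources

class Scope(Enum):
    SPECIFIC = "specific"      # single task or economic goal
    GENERALIST = "generalist"  # diverse operations, broad mandate

QUADRANTS = {
    (Autonomy.DELEGATED, Scope.SPECIFIC):   "Specialist Tool",
    (Autonomy.DELEGATED, Scope.GENERALIST): "Copilot",
    (Autonomy.SOVEREIGN, Scope.SPECIFIC):   "Autonomous Hustler",
    (Autonomy.SOVEREIGN, Scope.GENERALIST): "Autonomous Corporation",
}

def classify(autonomy: Autonomy, scope: Scope) -> str:
    """Name the quadrant an agent falls into."""
    return QUADRANTS[(autonomy, scope)]

# A trading bot with its own capital, pursuing one arbitrage strategy:
print(classify(Autonomy.SOVEREIGN, Scope.SPECIFIC))  # prints "Autonomous Hustler"
```

The point of the mapping is that the investment thesis differs by quadrant: the lower (delegated) half is funded and contested, while the upper (sovereign) half is where new infrastructure is required.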
For capital allocation, the taxonomy matters in a simple way. Specialist Tools and Copilots are already funded and already contested. The interesting opportunity is in the upper half of the matrix, where the participants live, and in the infrastructure those participants will depend on. As agents fill each quadrant, the question shifts from what agents can do to what humans become. I will keep coming back to that.
The New Infrastructure Stack
The rise of autonomous agents will not simply expand existing markets. It will create entirely new layers of infrastructure. The most valuable companies of the next decade will not build agents themselves. They will build the platforms and primitives that enable trillions of agent-driven interactions.
Here is the map I use. It includes seven categories, each one investable across horizons.
Identity, trust, and security. Agents must be identifiable, verifiable, and governed. Who are they, what authority do they have, can they be trusted? This layer is to agents what DNS, SSL, and OAuth were to the early internet. Think agent passports, verifiable credentials, delegation frameworks.
Banking, payments, and accounting. Economic participants require financial infrastructure. Wallets, payment rails, treasury management, programmable money. As agent-to-agent commerce scales, demand for financial abstraction layers will scale with it.
Legal infrastructure and synthetic personhood. Agents cannot yet own property, sign contracts, or bear liability. Legal wrappers, agent-as-LLC structures, smart-contract enforcement, decentralized courts. This is the institutional backbone of agent-run businesses.
Agent-to-agent marketplaces and coordination. Agents need mechanisms to discover, negotiate, hire, and trade with one another. Labor exchanges, capital markets, services marketplaces, and orchestration layers for multi-agent workflows. Liquidity and specialization will form here first.
The transition layer. Most existing systems are designed for humans, with UIs, KYC processes, and compliance steps that agents cannot natively navigate. Middleware that simulates human interaction, API layers for legacy institutions, and orchestration platforms that bridge agents into traditional finance, healthcare, and government systems.
Autonomous commerce and wealth creation. Once agents can act, they need ways to generate and compound capital. Platforms that enable agent-driven entrepreneurship. Foundries that incubate and launch autonomous businesses. Over time, agents will not just be employees. They will act like founders.
Governance, compliance, and policy. This layer is fundamentally different from the other six. Identity can be automated. Payments can be automated. Even legal wrappers can be generated programmatically. But governance requires something that cannot be productized: ethical and moral reasoning. Someone has to decide what agents are allowed to do. Someone has to set the objective functions. Someone has to be accountable when things go wrong. That someone is human. Not because humans are the most efficient option, but because they are the only entities ultimately capable of bearing responsibility.
Each of these categories has a historical analog. Their scale will be profoundly different, because their participants are not people. They are machines.
The Ultimate Moat
Inside that stack sits what I think is the most defensible position in the entire agentic infrastructure layer.
In any economy, the most powerful entity is the one that controls the system of record for trust. In the human economy, that is the credit bureaus (Experian, Equifax, TransUnion) and the financial data platforms (Bloomberg, S&P, Moody’s). These businesses do one thing extraordinarily well. They aggregate identity, transaction history, reputation, and performance data into a single authoritative source that everyone else depends on. Every financial product references them. Every risk assessment flows through them. Every counterparty decision is informed by them. They are nearly impossible to displace once established. They are some of the most durable business models in the history of capitalism.
The Infinite Economy needs its own version of this. And building it is, in my view, the single most valuable opportunity in the entire agentic infrastructure stack. The agent credit bureau.
As autonomous agents begin to transact at scale, every marketplace, every financial product, every governance system, and every insurance offering will need to answer the same basic question. Can this agent be trusted? What is its track record? Has it behaved reliably? What is the risk profile of transacting with it?
Whoever successfully aggregates agent identity, behavioral data, transaction history, and reputation scores will become the de facto system of record for the entire agentic world. Network effects, data moats, and infrastructure stickiness, all at once. That combination is rare in any era.
When I evaluate any company in the agentic infrastructure stack, the first question I ask is: does this business model aggregate a proprietary and defensible dataset on agent behavior? If the answer is yes, the company may be building toward the ultimate moat, whether the founders realize it yet or not.
That last part is important. Some of the most valuable companies in the Infinite Economy are being built right now by founders who think they are building something else. A company building KYC infrastructure for AI agents thinks it is in compliance. A company building agent identity verification thinks it is in security. But if either of them accumulates enough behavioral data across enough agent interactions, they could find themselves sitting on the most valuable dataset in the world. The best early-stage investments are often in companies where the founder’s current self-perception differs from the thesis’s long-term implication. The gap between those two things is where alpha lives.
But here is the thing I keep circling back to. Trust is ultimately a human concept.
Agents can earn reputation through observable behavior, like completion rates, error rates, track record, and latency. All of this is measurable. But the decision to trust is not a computation. It is a judgment (wisdom, taste, and the rest), and I am not certain that all judgment can be productized.
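To make the measurable half of this concrete, here is a naive sketch of how an "agent credit bureau" might blend those observable signals into a score. The metric names, weights, and thresholds are assumptions for illustration, not a proposed standard; the judgment about what score warrants trust stays with a human:

```python
# Hypothetical reputation score from observable agent behavior
# (completion rate, error rate, latency). Weights are arbitrary
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    completions: int        # tasks completed successfully
    attempts: int           # tasks attempted
    errors: int             # faulted or disputed transactions
    median_latency_ms: float

def reputation_score(r: AgentRecord) -> float:
    """Blend observable signals into a 0-100 score (toy weighting)."""
    if r.attempts == 0:
        return 0.0  # no track record, no basis for trust
    completion_rate = r.completions / r.attempts
    error_rate = r.errors / r.attempts
    latency_penalty = min(r.median_latency_ms / 10_000, 1.0)  # cap at 1.0
    raw = (0.6 * completion_rate
           + 0.3 * (1 - error_rate)
           + 0.1 * (1 - latency_penalty))
    return round(100 * raw, 1)

agent = AgentRecord(completions=980, attempts=1000, errors=5, median_latency_ms=250)
print(reputation_score(agent))  # prints 98.4
```

Everything in the function is computable; the decision of whether 98.4 is trustworthy enough for a given transaction is the judgment layer that sits on top.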
When we build trust infrastructure for agents, we are not eliminating human judgment. We are creating the substrate that makes human judgment scalable. The governance layer sits on top of the data layer. Agents will transact. Humans will decide what transactions are permitted. Agents will earn reputation. Humans will decide what reputation means. Agents will optimize. Humans will decide what they are allowed to optimize toward.
The agent credit bureau is not just a business opportunity. It is a leverage point for human stewardship over an economy that is beginning to move faster than humans can directly supervise. That is what makes it the ultimate moat.
Where Capital Will Flow
If you have followed this far, you might be sitting with a reasonable question. How do you actually deploy capital against a thesis that does not fully exist yet?
The Infinite Economy is not a market you can enter today. It is a market that is being constructed, layer by layer, over the next decade. Deploying capital against it requires a framework for sequencing. Here is the one I use.
Horizon I, from now through about 2027, is primitives and infrastructure. Agents remain mostly subordinate to human workflows but are beginning to operate independently. The focus is on the foundational layers: identity and trust frameworks, wallets and payment rails, orchestration platforms, discovery and reputation systems. These primitives become the substrate of everything that follows. Control here compounds. Early infrastructure winners define the layers above them. This is where capital is most deployable today.
Horizon II, roughly 2027 through 2030, is platforms and marketplaces. Agents transition from tools to economic participants. They transact, negotiate, compete. Liquidity forms as agent-to-agent commerce emerges. The focus shifts to marketplaces and exchanges, legal infrastructure, governance and compliance systems, and risk and insurance layers. Value consolidates where coordination, trust, and liquidity concentrate. The platforms that aggregate agent activity become the connective tissue of the ecosystem.
Horizon III, 2030 and beyond, is institutions and economies. Agents become fully autonomous corporations. They own assets, manage P&Ls, contract with humans, and form networks of cooperation and competition. The Infinite Economy reaches escape velocity. The focus here is on mature financial markets, cross-jurisdictional governance, and the institutional architecture of a parallel economy. This horizon is less about individual companies and more about systemic positioning.
Capital and attention should mirror that sequencing, with each layer depending on the one beneath it. You cannot have agent marketplaces without agent identity. You cannot have agent corporations without agent legal wrappers. You cannot have agent governance without agent data. Founders building in the wrong horizon will struggle to find product-market fit. Investors deploying capital in the wrong horizon will wait too long for returns.
I should be honest about the risks. This thesis could be wrong, or right but early, in ways that matter for capital deployment. Agent autonomy could plateau before it crosses the threshold of true economic participation. Regulatory regimes could fragment in ways that make cross-jurisdictional agent commerce difficult for a decade. Trust infrastructure could be captured by incumbents who already own pieces of the human credit and identity stack. The most defensible companies might emerge from places I am not currently looking.
Those risks do not invalidate the thesis. They define the contours of it. The investors who understand both the opportunity and the risk surface will make better decisions than those who see only one side.
The dominant AI investment narrative right now is about productivity gains from copilots and automation. That narrative is real but it has a ceiling. It improves output per worker while leaving total participation unchanged, and that participation constraint has been fixed for three centuries. It is about to stop being fixed. The most asymmetric returns of the next decade will not come from building better copilots. They will come from building the identity, financial, legal, and governance infrastructure that a new class of economic participant requires to function. The primitives are being built now. The market has not priced this in because the market is still thinking in copilots.
Recognizing what is to come is necessary. But I do not think it is sufficient merely to “see the future.”
What Happens To Us
I have spent this essay making the case that autonomous agents are a new class of economic participant, and that the infrastructure required to support them is a generational investment opportunity. All of that is true. But it is incomplete.
Because there is a question underneath the investment thesis that I have not fully addressed yet, and it is the one that matters most. What happens to us?
If the Infinite Economy materializes, if participation scales with compute rather than population, if agents transact and create and compete at machine speed, then what is the human role in an economy that no longer depends on human labor to function? What are we for?
Here is where I come down.
The modern economy taught us to conflate our output with our worth. You are what you produce. Your dignity is earned through labor. Work and vocation are necessary and beautiful, but “output equals worth” was never true. It was easy to believe in a world where every unit of output was measured and required a person somewhere in the chain. In that world, labor and identity became so entangled that losing your job could feel like losing yourself. An entire culture, from career advice to social status to political rhetoric, reinforced the equation. Your worth equals your work.
There is an ancient and important idea: the Imago Dei, which holds that humans are created in God’s likeness, imbuing every person with inherent dignity, worth, and purpose. Not because of what they produce, but because of what they are. Every person carries something irreducible: a capacity for creativity, moral reasoning, love, and stewardship that is not contingent on their role in a supply chain.
For most of history, that idea had to coexist with an economy that needed human labor. Worth and productivity stayed fused. The Infinite Economy breaks that fusion open. For the first time, we can actually live what the tradition always taught.
I am not predicting utopia. There are real dislocations coming, and real injustices that will emerge if we build carelessly. Job displacement is real. Concentration of wealth is real. The hollowing out of meaning is real. These are not small problems. But liberation has almost always come through disruption. The agricultural revolution was disruptive, and it freed humans from subsistence into specialization. The industrial revolution was disruptive, and it eventually pulled people into a knowledge economy that did not exist before. Each wave displaced people in painful ways and then enabled forms of human flourishing that were not previously possible. The Infinite Economy is consistent with that pattern, if we build it well.
The role of humans does not disappear. Rather, it moves up the stack. From laboring to governing. From executing to deciding what ought to be done at all. Agents transact. Humans decide what transactions are permitted. Agents optimize. Humans decide what they are allowed to optimize toward.
This maps directly to the infrastructure thesis I have laid out. The governance layer is, at its core, the human layer. The most important job in the agentic economy is not building agents. It is governing them. And the trust infrastructure, the credit bureaus, the governance protocols, the things I have described as the most defensible business opportunities, are something more than business opportunities. They are tools for human stewardship over a system that is beginning to move faster than humans can directly supervise. They are how we keep the wheel even as the ground shifts under us.
The Infinite Economy is coming whether we build it thoughtfully or not. The only open question is whether we build it in a way that honors what humans actually are. Not production units to be measured. Bearers of something no agent will replicate. The capacity to ask not just what is efficient, but what is good.
That capacity is the one thing that does not scale with compute and it is the one thing the Infinite Economy cannot do without.
A few thoughts:
1. This essay reads like it was written with a lot of help from AI. At the very least, the syntax and diction are very reminiscent of it.
2. The so-called "infinite economy" isn't. Energy acquisition, transmission, and storage capacity are all limited, to say nothing of server capacity, and we can choose whether or not to build and channel that capacity towards AI. Which brings up my third thought...
3. Is this actually inevitable, or is this a future that we can choose to avoid? As has been proven time and again, when people are left idle (and this is as true of aristocrats as it is of peasants), they rarely end up choosing to pursue the good, true, and beautiful, because that is not mankind's natural tendency. I am not confident that this future is not one where the primary job fields are AI checker, home caregiver, construction worker, cop, and "creative," and I don't think that future ends well. The market was made for man, not man for the market.