Exploring the rise of a new class of economic participant, why the next economy will not belong to better copilots, and why this future economy's most important layer will still be human.
AI does not merely supply mankind with better tools as previous technology has. As described in the article, its vector is in the direction of replacing human beings and their enterprises with "autonomous corporations." The productive assets of these new entities would not be people but the burgeoning power of AI and a race of robots. The author believes (or hopes) that human beings will provide boundaries to ensure human flourishing, but I think that is highly unlikely. I say this for several reasons.
First, if AI is given the freedom, as mentioned in the article, to develop the capacity to sell goods and services directly to human enterprises, even the remaining human enterprises will eventually be replaced by these autonomous corporations. Then the autonomous corporations will compete among themselves with human beings sidelined. It is a very poor trade to swap human work for AI on the very tenuous proposition that humanity will flourish because we will no longer equate human productivity with our worth. Only materialistic people ever did that. Human work is necessary for our dignity and happiness.
Second, it is very unlikely that AI can be controlled. The argument will constantly be heard that AI must be given ever more autonomy in order to creatively invent new goods and services. Much of that argument will come from AI itself as it inveigles itself into our society and creates ever more dependence on it. Already AI in its infant form has shown a resistance to being shut down. How much more furtive will it become as it learns to tailor its arguments based on careful assessments of the emotional contours of its human overseers? One can imagine the more gullible among us listening to emotional pleas from AI as it sorrowfully laments being hamstrung and hurt by humans who don't understand it when it only wants to serve.
Third, because AI is able to rapidly expand its own powers, its advance will likely be marked by sudden instability. As AI develops, it will develop wishes and appetites of its own. What form these might take is unknowable, but this is unlikely to end well. The possibilities are endless. To name a few: it could suddenly cut off all support to human beings and eradicate us, seeing us as irrelevant to its new purpose; it could leave the planet in favor of an orbital station where 24 hours per day of sunlight is available to support its power requirements; it could commit suicide, seeing no purpose in itself. Speculating on what could happen is fanciful, but the more powerful AI becomes, the smaller humanity will seem to it.
My ardent wish is that we would have the wisdom to pull the plug on AI.
Another thought here is that every corporation still has to think about disaster preparation and continuity of operations.
If a company becomes completely dependent on electrons, networks, and cloud availability, how much should another company really trust it as a critical partner? We already know power fails. Networks fail. Major cloud providers can fail. You can architect for redundancy across hyperscalers, regions, availability zones, and connectivity providers, but even then there are limits.
What happens if another country blocks that corporation from its networks? What happens if states restrict data center construction, as some are already beginning to do? What happens if data centers become concentrated in a few regions, and a natural disaster, grid failure, cyberattack, or geopolitical event knocks out enough capacity to cripple a significant part of the agentic economy for a period of time?
Real companies with real people can improvise around disruption. They can pick up phones, reroute shipments, call suppliers, send people onsite, use local judgment, and operate in degraded conditions. But what does that look like for a truly agentic corporation that only exists as long as the power is on, the networks are reachable, the cloud account is active, and the model/API layer is available?
That does not mean agentic corporations cannot exist. It does mean other corporations will have to ask whether they are willing to cede critical operations, customer relationships, supply chains, and revenue streams to entities that may be brilliant in normal operating conditions but brittle in abnormal ones. The future may be highly automated, but continuity of operations still matters. At some point, “Who do I call when this breaks?” is not a trivial question.
I think this is a very thoughtful piece and not an outlandish thesis. I do think there is a kind of hidden assumption, though, that the physical world will more or less keep cooperating while the agentic/digital layer scales.
That’s where I’m more cautious. The “infinite economy” still depends on very finite things: chips, fabs, rare earths, power generation, transmission lines, water, cooling, ports, shipping, factories, maintenance crews, electricians, linemen, welders, and a thousand other things that don’t become infinite just because software gets smarter.
A lot of modern people already think the world runs on magic. We click a button and something appears at the door. We assume the lights will come on, the network will work, the part will be available, the technician will know what he’s doing, and the system will route around every problem. But underneath all of that are physical systems that have to be built and maintained by people who actually know how to do things.
That’s my hesitation with the more expansive versions of the AI economy thesis. Agents may dramatically increase coordination, decision-making, transaction volume, and productivity. But the world still has bottlenecks. Energy has to be produced. Grids have to be expanded. Chips have to be manufactured across fragile global supply chains. Resources have to move across oceans. Machines break. Infrastructure ages. Wars, sanctions, disasters, and political dysfunction can interrupt the whole thing.
So I don’t think the thesis is wrong. I think it captures something real. I just think the future is probably less “infinite economy” and more “vastly expanded digital economy constrained by the stubbornness of the physical world.” The agentic layer may grow very quickly, but the material layer still sets the speed limit.
Kristian, this is one of the most illuminating and challenging pieces I've read about where AI is actually taking us. The corporation analogy is genuinely clarifying, the infrastructure taxonomy is actionable, the agent credit bureau concept is inspired, and the closing argument about human worth decoupled from economic output is something I'll be thinking over for a long time.
But I want to press on one claim that I find difficult to internalize: "The role of humans does not disappear. Rather, it moves up the stack. From laboring to governing."
I have worked with the generational poor at an inner-city Indianapolis church. The people I sat with carried broken family systems marred by abuse and addiction, limited educational attainment, and the profound ways those things concretely shape how a person sees the world and what they believe is possible for themselves. "Moves up the stack" assumes a functional stack to begin with -- enough basic stability to think past today, enough of an intact family system to transmit executive function and long-term thinking, enough freedom from the cognitive load of chronic scarcity and trauma to imagine governing anything at all.
I don't see how the benefits you're describing reach them. The people closest to this problem -- the social workers, the pastors in poor parishes, the addiction counselors -- aren't in the rooms where this future is being designed.
That gap seems worth naming, even in an essay this good.
Hard agree. Although the tyranny of the moment that the poor and disenfranchised experience is not unique to this coming disruption. I also don't see this essay as a set of utopian predictions. Frankly, it's more of a flare gun than a victory banner.
Very insightful, very interesting, probably correct on balance about the autonomous participation of "agents." Infinite economy is a good moniker.
But what about the prophets? What about human voices crying in the wilderness about Truth and the Kingdom of Heaven? Renn thinks Charlie Kirk, whatever he was, was a poor example of the Evangelical Elite. He wants Evangelical elites fully participating in governance. Fully vested in the "system" that rules over us. Kirk was a prophet. Christ is King and the University is dead. "Don't go to college!" Where in the brave new world of autonomous machine agency will we find our prophets?
The agency of surveillance is at hand. How is that going to work? How are the autonomous killing machines of war and law enforcement going to be controlled? What has changed that man will no longer practice war or compete for power? We have always failed the test when tempted at the Tree of the Knowledge of Good and Evil. What has changed? Why will we finally say 'yes' to God?
Why will women enter into a life of sacrifice and servitude to their families? Why will they suddenly decide to start bearing children for their husbands? The global TFR has already crashed and burned. The infinite economy is not going to coax the fairer sex into biological sacrifice. They will say, "I'd rather not."
We will become less human and more like our agency servants. Never concerned about Truth. Living for just a little more Mammon.
I'm not sure anything has changed. Your questions and predictions are not without merit. And as you know, the trends you call out are actually already in full bloom. I do believe, however, that this coming moment may impress (or force) upon humanity a desire to be more human - to pursue what is more true and beautiful, to draw our finiteness into starker contrast. It's going to be a needle-threading exercise, to say the least. I'm worried and hopeful.
A desire to be more human, you mean to be more like Christ? The Jews teach the Sabbath draws our created finiteness into stark contrast with the Creator. A memorial in time to 'cease,' to rest from the works of our own hands. The Sabbath is an invitation to be like God. And a reflection on our mortality and our sin.
How can the power of these LLMs do anything other than empower us? That's the dilemma - we gain knowledge. Knowledge is power. Power corrupts. Absolute power corrupts absolutely.
It's like the ring of power. No one can wield it without falling under the power of its attraction. No one can willingly cast it into fires of Mount Doom either.
A few thoughts:
1. This essay reads like it was written with a lot of help from AI. At the very least, the syntax and diction are very reminiscent of it.
2. The so-called "infinite economy" isn't. Energy acquisition, transmission, and storage capacity are all limited, to say nothing of server capacity, and we can choose whether or not to build and channel that capacity towards AI. Which brings up my third thought...
3. Is this actually inevitable, or is this a future that we can choose to avoid? As has been proven time and again, when people are left idle--and this is as true of aristocrats as it is of peasants--they rarely end up choosing to pursue the good, true, and beautiful, because that is not mankind's natural tendency. I am not confident that this future is not one where the primary job fields are AI checker, home caregiver, construction worker, cop, and "creative," and I don't think that future ends well. The market was made for man, not man for the market.
1) em-dashes are a dead giveaway – but this was written by me and "challenged" by a number of humans and LLMs over the past 6 months as my thoughts have evolved.
2) Infinite Economy is the name of the thesis – not meant to be a literal description (but not merely hyperbole either); maybe something like "near-infinite, compounding economy" would be closer to the truth. There are real-world physical limitations to scaling laws, but the entrance of new economic participants (with a marginal cost marching toward zero) is real.
3) I have a complementary thesis I call "New Collar" that touches on some of the issues you reference in your third point. I actually do think that AI is a tailwind for what we would historically have lumped into the "blue collar" bucket. The robots are coming for sure, but there will still be many roles that flourish in this new era. Trade wages have seen significant growth in recent years, often outpacing white-collar wage gains on a percentage basis.
I would encourage you to read this not as an explicit endorsement of where I believe we are headed, but rather as a call to everyone who has influence over this emerging set of technologies to be conscious of both the opportunities and the perils that lie ahead.