14 Comments
Kevin

Aaron - I think now it would be good to explore the opposite possibility. For example, what if AI actually becomes a great decentralization engine? The tech VC crowd says what they're seeing is AI allowing startups to happen with vastly fewer employees, and thus vastly smaller amounts of capital required. And then there's the physical impact on cities. If AI wipes out a lot of middle managers in both the public and private sectors (as I expect to happen), doesn't that act to decentralize cities away from office-oriented centers? Curious as to how you see the possible upside.

Sean

Great article, and I definitely agree!

So I think a great idea for a follow-up article is how the managerial class should respond or pivot. Find entirely new professions? Work to find ways to be more efficient in the professions they're in? I'm curious how an average worker should pivot.

Chris Gast

Thank you. Many people have convinced themselves AI must be good, because all technology is good. We replaced the horses with cars, right? But if we replace physical labor with automation, and now mental labor with algorithms, what remains?

To even question it has become the new heresy.

Brian

I've recently soured on the potential of AI, at least LLMs. I've been messing around with them for two years now. If you use AI for any length of time, it becomes readily apparent that they are simply unreliable, particularly on any specialized or advanced/complex task. Hallucinations are the Achilles heel of this technology. This problem actually renders it less useful than a human assistant. A human assistant would not simply invent the name of a party on a legal document; if unsure, they would leave it blank, or highlight it, or otherwise bring it to my attention. Even after spending dozens of hours developing instructions for tasks and feeding it dozens of examples, it still hallucinates and just makes stuff up. The key thing people need to understand is that "artificial intelligence" is NOT intelligent.

Rich

Worth reading about the real limitations of LLMs right now. I'm skeptical about the most optimistic predictions (or most pessimistic, if you will): https://garymarcus.substack.com/p/a-knockout-blow-for-llms

Aaron M. Renn

I have to say, I've already leveraged LLMs for some very useful things. They've gotten dramatically better in the last year. I would also not say that LLMs are the last word in AI technology, either.

Rich

I leverage them as well.

My only point is that LLMs are the only approach that has shown some facility in assisting experts on certain tasks, but they lack the reasoning (or real learning) capabilities to replace a person who gets better as they learn a task.

There's also a tremendous amount of non-deterministic output, which makes it difficult for enterprises to trust them. They can't even get LLMs to be effective call center agents, and the bar for that is really low.

I know there is research into cognitive AI approaches, but it's still speculative what they will yield.

I sometimes think the "optimism" some have for the future ability of AI to replace human intelligence (or surpass it) is based on naturalistic or mechanistic ideas about where human intelligence arises.

Either way, even those optimists think that we're at least 10 years away from AI getting to that point.

Spouting Thomas

From what I've read of his stuff, Gary Marcus is too eager to deny LLMs' capabilities.

Zvi gathered some responses here:

https://open.substack.com/pub/thezvi/p/give-me-a-reasoning-model?r=1h6crc&utm_campaign=post&utm_medium=web

Zvi is too close to the rationalist sphere and hence has too-high expectations of what LLMs will achieve. I agree with you that many in that sphere are too naturalistic/mechanistic in their understanding of human intelligence. But Zvi's summaries and updates have been valuable for me to keep up on the space.

I also think your point about LLMs struggling to learn on the job is valid. Dwarkesh discussed that intelligently here:

https://www.dwarkesh.com/p/timelines-june-2025?r=1h6crc&utm_campaign=post&utm_medium=web

By all means, let's stay critically informed about the capabilities of LLMs and keep considering different perspectives as the space changes rapidly. But there's a Luddite strand in conservative/reactionary thinking, possibly concentrated among an older audience, that dismisses LLMs out of hand based on cherry-picked stories, or on a bad experience with Google Search's automatic AI suggestions saying something stupid, without recognizing the vast gulf in capabilities between that and the frontier models.

I'm urging every thinking person not to be that guy. Constantly test this stuff for yourself to internalize its capabilities and limitations at the state of the art, which probably means paying for a $20 subscription somewhere (I'm paying for all of them, but would still suggest ChatGPT as the baseline).

Rich

Marcus responded to his critics FWIW:

https://garymarcus.substack.com/p/seven-replies-to-the-viral-apple

Rich

I'm certainly not a Luddite. It has great uses, and we're integrating its capabilities into customer solutions. I'm simply realistic about its limitations and skeptical of bold promises. I'm also not given to hysteria as a general rule, whether predictions that white collar jobs are going away or that machine intelligence will turn on its creators and kill them, when we could just cut the power source. :)

Spouting Thomas

Let me be clear, I wasn't calling you a Luddite. Part of what I was saying was for the wider audience here. Or for Aaron or others to consider communicating to a wider audience.

I think Gary Marcus writes really one-sided takes. From what I've read of his, I never see him say, "Oh, wow, look at this thing I'm now able to do with the latest model!" All he ever seems to do with his space is take down the optimists. Maybe I've missed those pieces, but his writing just seems one-note.

Contrast maybe to that Dwarkesh piece I linked, which is more open-minded while still centered around a thesis that's fundamentally skeptical about LLMs' near-term productivity benefits.

And it's not that Marcus is ignorant; he seems to raise valid criticisms. But he also seems to have an axe to grind, or to be emotionally invested in LLMs not amounting to much. Which means that if we take a reactionary-minded person who IS relatively ignorant of LLMs and ALSO emotionally invested in them not amounting to much, Marcus is writing exactly the sort of confirmation-bias nectar that I see passed around in that circle.

Shawn Ruby

Gary is great. Very few techies are honest about AI (without thinking pre-ChatGPT-3 models are alive and... need a lawyer). Silicon Valley is too close to Hollywood, methinks.

TorqueWrench10

My concern is that AI is not really ready for prime time (for all the uses being contemplated), but we're going to execute on it anyway. There's too much nervous money to be made for caution to slow it down, and we can look forward to unfixable problems while we blame anything but the AI, until it becomes obvious.

The managerial class will be the ones implementing AI, and the general attitude seems to be that they cannot wait to do so. Everyone is going with the “common wisdom” right now which tells me we’re in for a rough ride.

Shawn Ruby

That's a great point, but I think it points more to non-AI algorithms getting rid of managers. AI seems to add more tools for impersonality by replacing the workers under managers. I really think AI will be less impactful than algorithms were.
