AI’s permanent underclass problem
What happens when the jobs start disappearing at scale?
The “AI job loss” narratives are all fake. AI = massive ramp in productivity = massive ramp in demand = massive jobs boom. Watch.
Marc Andreessen, X, 6/4/26
jobs doomerism is likely long-term wrong. though of course there will be disruption/significant transition as we switch to new jobs, the jobs of the future may look v different, etc.
Sam Altman, X, 2/5/26
Coder Displaced By A.I. Told He Should Just ‘Learn To Mine Coal’
Government officials rolled out a new initiative to assist displaced coders as they transition into the modern age of manual labor in the mines. The program, called Code2Coal, reportedly includes a two-hour online seminar titled “So You’ve Been Replaced By A Machine: Now What?”
“It’s going to be a big change, but I really do consider myself lucky,” said Garrison. “Fortunately for me, I used to be a coal miner before Biden shut down the mines. He told me to ‘learn to code’, so I spent the past three years mastering Python and Java. It seems like a bit of a waste now, but at least I already know my way around the mines.”
At publishing time, Democrats had vowed to pass legislation to shut down coal-powered data centers.
The Babylon Bee, 4/5/26
Cynical observers will see this all as just a messaging pivot… the AI industry’s sales pitch was basically “Our product’s purpose is to put you and your descendants on welfare forever, and it may also wipe out your whole species”.
That was a bad sales pitch, to put it mildly, and it’s not surprising that voters have reacted negatively to this message. Basically every recent poll shows the American public turning very strongly against AI… it makes sense for leading figures in the industry to alter the basic sales pitch and reassure anxious humans that they’ll still have jobs.
Noah Smith, Noahpinion, 5/5/26
I’ve spent the last 18 months or so attempting to warn people that Silicon Valley is bracing for a permanent underclass. So, I was happy to see this New York Times thinkpiece, entitled Silicon Valley Is Bracing For A Permanent Underclass, go viral over the last week.
The author, my fellow Substacker Jasmine Sun, made many sensible points about the entirely predictable outcomes of automating away jobs. Below, I’ll share her money quotes and add some commentary.
First things first – you and everybody you know are screwed
Here’s the bracing opening paragraph of Sun’s article:
Most people I know in the A.I. industry think the median person is screwed, and they have no idea what to do about it… While Silicon Valley has long warned about the risk of rogue A.I., it has recently woken up to a more mundane nightmare: one in which many ordinary people lose their economic leverage as their jobs are automated away.
There’s what people proclaim in public, and there’s what they share in private. You can usually put more trust in the latter than the former. What are Sun’s fellow San Franciscans saying about AI?
Whether you talk with engineers, venture capitalists, founders or managers, or with doomers, accelerationists, lefties or libertarians, the so-called San Francisco consensus on the impact of A.I. for workers is bleak… it would also displace millions of jobs as fewer humans are needed to make the economy run. The technology will depress economic mobility and exacerbate inequality, while ferrying power and wealth to the A.I. companies and the existing owners of capital.
As Sun points out, “This premonition is not a well-kept secret”. Indeed, many high-profile tech industry figures – including OpenAI’s Altman, Anthropic’s Amodei and xAI’s Musk – have spent years warning of just such a scenario.
We’ve seen this movie before
We’ve had a trial run of what happens when a shedload of jobs go away suddenly. Spoiler alert: it doesn’t end happily, at least for those left jobless.
The term “underclass” gained currency in the 1960s to describe the factory workers left behind by the postwar automation boom. Today, it has become repopularized as a viral term for a theory that posits that people have a limited window of time to build wealth before A.I. and robotics are advanced enough to fully replace human labor. At that point, we will get frozen in our current class positions: The rich will be able to deploy superintelligent machines to do their bidding, and everyone else will be rendered useless and unemployable, left to live off welfare scraps.
Getting welfare scraps might be the best-case scenario, but let’s not get ahead of ourselves.
The inescapable market logic
Everyone can see the game-theory logic powering the continual investment in AI and the resulting redundancy of human workers. Neither China nor the US can allow the other to gain a commanding lead in AI, as the recent brouhaha over Mythos illustrated. None of the big AI companies can afford to let their rivals steal a march on them. And any individual business that hesitates to automate away human labour will soon be driven out of the market by competitors lacking such sentimentality.
If left to its own devices, Silicon Valley may summon a permanent underclass through its own market logic. If you believe that human-substituting A.I. is inevitable, then every company should race to be the one to build it… Corporate executives accelerate layoffs and slow hiring because they don’t want to be the firm lagging behind… Sometimes, layoffs happen even before executives know how or whether A.I. will replace those roles.
Tech industry workers are now in a position not dissimilar to that of the physicists who worked at Los Alamos from 1943 to 1945.
Tech workers, for their part, are scrambling for lucrative A.I. jobs in hopes of securing financial freedom — even when they harbor ethical hangups. “People feel like there are not that many opportunities to make money in the future,” said Steven Adler, a former employee on OpenAI’s safety team who now writes a newsletter on A.I. policy. “Even if someone thinks it is personally distasteful to make money from building technology that companies say may literally kill everyone, many people are just cogs in the machine.”
Escaping the inescapable market logic
It hasn’t escaped the attention of AI companies that a society in which an ever-increasing share of resources is being funnelled to an ever-decreasing proportion of the population is a pre-revolutionary society. Whether or not a full-blown revolution is on the cards, there’s clearly an anti-AI backlash building in the US and the rest of the Anglosphere.
What to do?
OpenAI, Anthropic and Google DeepMind have now set up the equivalent of think tanks to look into how AI companies can keep making money without creating the type of mass unemployment that results in large AI companies getting nationalised.
As I discussed a few weeks ago, OpenAI even released a white paper, Industrial Policy for the Intelligence Age: Ideas to Keep People First, calling for ameliorative measures. But while plenty of tech company founders will talk the wealth-redistribution talk, it’s an open question whether they’ll walk the walk.
Many of the ideas listed in OpenAI’s white paper are radically progressive: a 32-hour workweek, higher taxes on corporations and capital gains and a “public wealth fund” that provides all citizens an equity stake in A.I. companies…
Still, the document is vague on implementation mechanics and whether OpenAI will advocate the policies listed. In an emailed statement, an OpenAI spokesperson declined to provide examples of specific legislation the company supports.
How do the ‘Useless Eaters’ react?
Mass enfranchisement is a recent historical development. Nonetheless, helots, slaves, peasants, serfs, proletarians and other non-elite actors have always had at least a modicum of political power. That’s because they could withdraw their labour (or at least try to). But what happens when elites no longer require that labour?
As Anthropic’s Amodei argues, “The balance of power of democracy is premised on the average person having leverage through creating economic value. If that’s not present, I think things become kind of scary.”
But as Sun predicts:
Anthropic’s coffers probably won’t be emptied in the service of public work-force programs unless politicians compel the company to do so. Anthropic has not yet released a set of economic policies that the company supports, either in broad strokes, as with OpenAI’s white paper, or by endorsing specific legislation…
When I asked Mr. Clark if the Anthropic Institute planned to lobby for the redistributive measures he alludes to, he demurred, describing policy advocacy as “the end of a very, very long chain of work.”
Let’s summarise where we’re currently at.
1. Those most knowledgeable about AI expect it will eliminate a lot of cognitive labour in the near term (1-3 years) and then physical labour in the medium term (3-8 years).
2. No nation, industry or business can take its foot off the AI accelerator lest it get outcompeted.
3. Severe labour market disruption could (kind of) be managed with redistributive policies.
4. Many prominent tech companies and their founders have called for such policies.
5. Yet the tech companies and founders calling for redistribution game tax systems across the globe to contribute as little as possible.
6. While the Silicon Valley bigwigs talk a good game about shifting things in a more Scandinavian direction, there’s little evidence they are lobbying to have the policies they putatively support introduced. (If you were the cynical type, you might even suspect they’re quietly lobbying against their introduction.)
Here’s the good news, such as it is. If the Left can pull its head out of its identity-politics-addled arse for five seconds, this could be its moment to shine.
Sun wraps up her article with an account of attending a ‘How to Prepare Our Politics for AGI’ event put on by Democratic pollster and strategist David Shor.
Shor argues that the AI-enabled automation threat has even constitutionally optimistic Americans considering the possibility they’re not just temporarily embarrassed millionaires.
While the American public ordinarily hesitates to support left-wing policies like a jobs guarantee or single-payer health care, A.I. seems to expand the political Overton window. “Right now, the argument is, ‘You’re all about to lose your jobs, and the choice is either you get nothing and starve, or we do something fair,’” Mr. Shor said. “People don’t want to be members of the permanent underclass…”
As many have before her, Sun observes that AI is now doing to the Professional Managerial Class what a combination of globalisation (exporting jobs to developing countries while importing migrants from them) and automation has been doing to the working class since the 1990s.
A.I. will look like an accelerated and expanded version of deindustrialization. But rather than companies outsourcing jobs to overseas workers, they will be outsourcing them to A.I. agents. “The China shock unfolded over several years, whereas this could happen over two years,” said Bharat Ramamurti, a former deputy director of the National Economic Council in the Biden White House.
Knowledge workers are now having to ask questions that never used to trouble them.
What happens if the invisible hand of the market decides that my skills are no longer valuable? Who will catch me if I fall? For once, a rarefied class of employees — those used to being the automaters, not the automated — is reckoning with their potential obsolescence.
Elsewhere in the passage, Sun muses that by facilitating this proletarianisation, “A.I.’s broad capabilities foster a rare class solidarity between white-collar and blue-collar workers.”
I’m sceptical that “rare class solidarity” will develop between proletariat and bourgeoisie. The reason such solidarity is rare is that the two demographics don’t have much in common. Then there’s the small matter of the Professional Managerial Class having treated the working class with thinly veiled, or completely unveiled, contempt for several decades – not least during the High Woke Era of 2014–2024.
Sun suggests that either a restive electorate will force politicians to slam on the brakes or AI will wreak so much labour-market havoc that a business-shuttering, plutocrat-liquidating revolution will kick off.
If you hear that A.I. will entrench a permanent underclass, you’ll do anything to stop it. Across the United States, there are new proposals for bans on data center construction, on self-driving cars and on chatbots for broad consumer uses like therapy and law. In the extreme, populist rage can metamorphose into violence. In April, an attacker tried to firebomb Mr. Altman’s home…
In March, the Palantir chief executive, Alex Karp, spoke on a panel with the Teamsters president, Sean O’Brien. “The biggest challenge to A.I. in this country is political unrest,” Mr. Karp said. “If I were sitting here in private with my peers, I’d be telling them the country could blow up politically and none of us are going to make any money when the country blows up.”


> If the Left can pull its head out of its identity-politics-addled arse for five seconds, this could be its moment to shine.
lol
Can you folks stop drinking the Kool-Aid of an industry that is rushing as fast as it can to IPO and pass the buck on AI to the market before the bubble pops, for like five seconds?

"Everyone can see the game-theory logic powering the continual investment in AI and the resulting redundancy of human workers." – No, this is not true. Many people who work at large companies outside of San Fran actually understand the game-theory logic of a software solution that is burning trillions of dollars while its open-sourced competitors are practically at par, and what the market implications of that are. You see, if you fire your staff and replace them with AI that you can basically run on open-sourced models locally for very little capital investment and energy cost, the market will eventually give you practically zero value for that. That is how markets work: competitive factors drive margins down to cost plus a small markup for any commoditised good. So you are firing your workforce and introducing massive risk (future regulation, current inference costs that approach the salary, errors, legal vulnerability, etc.), all to gain short-term margins that will quickly be eroded to nothing. Oh wait – no one outside the tech bubble is actually doing this, because they are capable of actual game theory. The banks piloting it are the banks funding it and pushing the IPO – it's all a closed ecosystem, and see-through.
I use AI extensively for personal use, and it's amazing in that context. In an enterprise context, the bottleneck is not capability – it has been capable enough for half a year now, and if you look at year-over-year headcount, it's basically the same as last year across the board outside of tech. The bottlenecks are that this is clearly going to be regulated because it's politically toxic; that it's not insured against (the insurers have actively backed out of it, which is a massive risk); and that the massive investment is generating pitiful returns in capability at this point (OpenAI spends $60B on training compute, DeepSeek spent a fraction of that, and the capability difference is practically meaningless outside of nonsense Silicon Valley benchmarks).

If everyone could just call this bullshit already and pop this idiot bubble, we could at least start moving in the longer-term adoption direction. Software doesn't run the world outside SV, and AI is not going to change that, sorry – it takes one bill adding human-oversight requirements to kill the entire model, and the model never got adopted anyhow. Basically, we are all better than this constant AI-doomer shitty content – it's so tiresome.