The Singularity is scheduled for 2027. (Early 2028 at the latest)
Things are about to start moving very quickly
This dynamic—inflation followed by sudden deflation—mirrors what is about to happen to the American economy under corporate techno-feudalism. The inflation of wealth and power among AI-driven monopolies has been building for decades, fueled by Silicon Valley venture capital, deregulated financial structures, and an economic model that prioritizes corporate efficiency over labor security…
The upcoming crisis isn’t just about job losses—it’s about who owns the future of productivity and innovation. It’s about whether AI will enrich the many or further entrench the power of the few. And, perhaps most consequentially, it’s about how President Trump’s economic nationalism is about to collide with the brutal efficiency of AI-driven dark capitalism—pitting government-led interventionism against an automated productivity war that mostly benefits the largest corporate players.
Joe Cook, The Common Sense Papers, 10/3/25
Shopify is saying the quiet part out loud: AI will replace new hiring—other CEOs just won’t admit it
Headline in Fortune, 10/4/25
We live in the age of the machine-mind. In just five years, artificial intelligence has vaulted from an obscure research topic to the blazing furnace at the heart of the global techno-economy… Artificial intelligence now writes our marketing copy, grades our students, answers our legal questions, draws our fantasy worlds, and simulates the friends we no longer have. And it’s getting better—fast. Each model is bigger, sharper, weirder. Each week brings rumors of breakthroughs or collapses. We are inside the inflection point, past the realm of stability. And yet—for all the drama, the discourse remains strangely flat, like a theater backdrop painted in grays.
Contemplations on the Tree of Woe, 12/4/25
And so unfolds the five-act drama OpenAI is staging: Chatbots. Reasoning. Agents. Innovation. Agentic Organizations. Each act sounds like a product line; in reality, each could be read as a progressive sidelining of human-centered work…
What happens when intelligence is commoditized? When capital owns cognition? Who gets to train the models? Who decides what counts as knowledge? What happens when OpenAI, Google DeepMind or Claude becomes not just a vendor of productivity tools, but the central nervous system of economic life…
We should be asking these questions not after deployment, but now, while the sales pitch is still warm. Because this future doesn’t arrive with fanfare. It seeps in through product demos…
Colin W.P. Lewis, The One Percent Rule, 14/4/25
China is rapidly automating its manufacturing base, and not just for shoes, smartphones, joggers, and Nerf guns. It’s also weapons of war. The country already runs a fully automated missile plant that can churn out a thousand new units a day. A significant chunk of its half-trillion-dollar annual defense budget is going toward automating weapons production…
Patriotism demands we go all-in on robots and AI in manufacturing. More than any treaty or tariff, automation is what will keep us ahead of the world’s foremost authoritarian menace. Fighting automation isn’t just bad economics — it’s a surefire way to ensure America loses the battle for the 21st Century.
Trae Stephens, Pirate Wires, 14/4/25
I just got laid off. I’m a software engineer and my company was a mature startup whose funders got spooked by the recent economic crash… I got into this industry because before that I was a professional photographer and the market became so saturated that I wanted to do something that would offer me real security…
The past few years something shifted. The jobs became more unstable, the layoffs happened more often, the constant threat of it all being ripped away was omnipresent. The “recession-proof” industry became anything but. I realize now that there is no safe place to hide from the whims of capitalism.
Scarlet, Dialectics of Decline, 15/4/25
Two noteworthy AI-related stories surfaced in recent days. You may have heard something about the first, which involved the CEO of a high-profile company “saying the quiet part out loud”.
The quiet part is that CEOs are now laser-focused on preventing any new hires.
The good news, such as it is, is that those CEOs aren't yet making large swathes of their workforces redundant.
Well, no doubt some are, but that's currently the exception rather than the rule. What many CEOs have quietly or not so quietly done is institute hiring freezes. Overtly or tacitly, they've made it clear that any underling requesting more resources must have a convincing explanation for why a human should be employed rather than an AI deployed.
But it's inevitable that business leaders' attention will soon shift from stopping any new hires to 'right-sizing' their workforces. In the months to come, much hostility will be directed towards CEOs – and maybe even at the boards demanding to know why those CEOs haven't yet slashed their workforces by 30, 50 or 100 per cent. Inevitably, some corporate big-dick-swinger will sound off about redundant workers being feckless losers and become the focus of a Tim Gurner-style, global two-minute hate.
But to paraphrase Michael Corleone, the layoffs won't be personal; they'll be strictly business.
Even in industries where workers earn shit money, they still have to be paid some money. Even if paid the absolute minimum – as they frequently are in the retail and hospitality industries – a legally employed full-time worker in the US earns around US$15,000 a year. Given the choice of paying at least $15,000 a year for a high-maintenance human or $2,400 for a ChatGPT Pro subscription, any business owner – or executive/manager who wants to remain in good standing with the ownership class – will make the rational choice.
If the employee in question earns $150,000 or $1.5 million, the choice is 10X or 100X more rational.
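For the numerically inclined, that trade-off fits in a few lines of Python. This is just a back-of-the-envelope sketch using the illustrative figures above, with the subscription priced at ChatGPT Pro's advertised US$200 a month – not a forecast of anyone's actual payroll:

```python
# Back-of-the-envelope maths: one year of human salary vs one year of
# an AI subscription. Illustrative figures only, taken from the text above.

AI_SUBSCRIPTION_PER_YEAR = 200 * 12  # ChatGPT Pro at US$200/month = US$2,400/year

def cost_multiple(annual_salary: int, subscription: int = AI_SUBSCRIPTION_PER_YEAR) -> float:
    """How many times the annual subscription cost one year of salary represents."""
    return annual_salary / subscription

for salary in (15_000, 150_000, 1_500_000):
    print(f"US${salary:>9,} salary = {cost_multiple(salary):>6.2f}x the subscription")

# Output:
# US$   15,000 salary =   6.25x the subscription
# US$  150,000 salary =  62.50x the subscription
# US$1,500,000 salary = 625.00x the subscription
```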
Which brings me to the second, much less well-reported AI-related story.
But some background first…
The three traps of tech writing
There are two challenges when writing about technology for laypeople, and together they lead to three narrative traps.
Tech industry types skew high IQ and autistic. (Or Elite Human Capital, as we prefer to style ourselves.) That means they frequently reference complex concepts, frameworks or phenomena – Boolean logic, for instance – that most of the population is unfamiliar with and would struggle to deeply understand.
You can usually overcome the complexity challenge if you're a competent tech journalist/content creator. But you'll then confront another issue: people who aren't rain men (sorry, Elite Human Capital) aren't interested in technology for technology's sake. Reasonably enough, they are chiefly interested in whether a new technology will make their life better or worse.
Accordingly, the eternal temptation is to get clicks by busting out one of three narratives – (1) Technology X is going to make your life much better, (2) Technology X is going to make your life much worse, or (3) Only grifters are arguing Technology X is going to make your life much better/worse.
AI 2027
Some tech-industry heavy hitters recently came together to launch the AI 2027 initiative. Here’s the elevator pitch:
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution [my emphasis]. We wrote a scenario that represents our best guess about what that might look like.
I presume the newspaper editors of the world chose not to splash that “best guess” scenario across their front pages because they, and their readers, have grown weary of pundits breathlessly proclaiming that AI will make the world incredibly better or worse overnight. Indeed, after many years of tech industry players – founders, CEOs, ‘industry experts’, PRs and, yes, academics and journalists – insisting that such-and-such a technology was the ultimate “gamechanger”, many of us default to dismissing such claims out of hand.
I sympathise. But it’s worth remembering that the climax of the boy who cried wolf story does involve a wolf appearing, albeit a bit later than initially advertised.
The crew behind AI 2027 – Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean and Jonas Vollmer – argue that AI is already scarily capable and poised to make the leap to ‘superintelligence’ within a 2-3 year timeframe.
Their probabilistic scenario is that AI will surpass human capabilities by early 2027 and that there will be an intelligence explosion in late 2027 or early 2028.
Once that happens, we're in 'Skynet becomes self-aware' territory.
To put it another way, the Singularity will have arrived. Machines will be able to create ever more sophisticated machines without human involvement.
It’s an open question whether the interests of humans and machines will long stay aligned.
What if this time it really is different?
As a dwindling sloth of AI bears continues to argue, the growing herd of AI bulls could be mistaken. It's difficult to predict the future and, as scarily prescient as the forecasting record of a Daniel Kokotajlo might be, past performance is no guarantee of future returns.
But perhaps it's time for the citizens of technologically advanced nations and their leadership class to begin planning what they'll do if Kokotajlo et al. are more right than wrong.
As many others before them have, the AI 2027 crew notes that both China and the US will want to get to the intelligence explosion before their geopolitical rival. Whichever nation first leaps from narrow/weak to strong AI (i.e. Artificial General Intelligence or Artificial Superintelligence) will be in the same position the US was in from mid-1945 to mid-1949, when it was the world's sole nuclear power.
Regardless of any attendant labour-market carnage and societal upheaval, it would seem neither the US nor China can afford to take their foot off the accelerator. Indeed, politicians will increasingly find themselves in the same prisoner's-dilemma bind as CEOs. While they may have profound misgivings about unleashing strong AI, they will understand that if they don't do it, their competitors aren't likely to be as scrupulous.
It's the end of the world as we know it
I recently interviewed an impressive young woman who’d just started an Arts/Law degree. We briefly discussed how different the labour market will be when she graduates in 2030. My interviewee readily conceded lots of legal work would be automated away by then. Nonetheless, she was convinced there would still be plenty of employment opportunities for grads like her.
At that point, I began feeling like Sarah Connor attempting to convince a sceptical psychiatrist that Skynet* is about to bomb humanity back to the stone age.
I feel like Sarah Connor a lot these days.
I fear I won’t feel, or appear, quite so Sarah Connorish in 12 months’ time.
But by then it will be 2026, and the Singularity may be nigh.
AI scepticism is smart, AI dismissiveness is dangerous
If I’ve acquired any wisdom during my half a century on the planet, it’s that what’s true is often less important than what people are willing to accept is true.
To take a timely example, many good-faith actors spent many frustrating years arguing identity politics would ultimately prove a disaster for the Left. Sure, gender/sexuality/race causes célèbres might be catnip to post-materialist professional-managerial class (PMC) types wanting to cosplay as heroic liberators without needing to think too deeply about their own often considerable class privilege.
But as anybody with eyes to see could readily discern, performative virtue signalling is always off-putting and frequently enraging to very-much-materialist lower-middle and working-class voters. Not least many of the sacralised minorities progressives fondly imagine themselves to be championing.
A small but ballsy band of old-school winter soldier Leftists struggled to make the point that continuing to double down on open borders, asymmetrical multiculturalism and the trans stuff would inevitably set off a “populist” counter-reaction that PMCers would find most unpleasant. However, few of their comrades wished to acknowledge that truth until Trump was re-elected and, well, here we are.
Similarly, only a handful of people currently want to seriously contemplate the prospect of being rendered technologically obsolete within the next 12-36 months. That's an entirely understandable and very human reaction.
But speaking as a former print journalist, I can warn that desperately not wanting something to happen doesn’t mean it won’t.
*The franchise doesn’t get into the weeds about exactly what type of AI Skynet is. But Wikipedia informs me, “Skynet is a fictional artificial neural network-based conscious group mind and artificial general superintelligence system… Skynet is an AGI, an ASI and a Singularity.”
Want your content to stand out in a sea of AI slop?
I’m not unaware of the bleak irony of hustling for work at the bottom of a post arguing that robots are about to take all the jobs. But until the buy-off-the-food-rioters-with-UBI stage of technofeudalism kicks in, we all still need to make a dollar.
As regular readers may recall, I recently ghostwrote a book. I’m keen on doing more ghostwriting – books, speeches, LinkedIn posts, op-eds, whatever – for anybody who needs to pump out attention-grabbing, personal-brand-burnishing content.
If that’s something you’re interested in, dear reader, you may like to check out my portfolio at www.contentsherpa.com.au and then email me (nigel@contentsherpa.com.au) if you like what you see there.
PS – You can get AI to generate your content. I’d recommend using Claude if that’s the path you want to go down. But while AI copy ticks the necessary boxes, it still struggles to shine. Especially when generated by a human without pre-existing content-creation experience. As far as I’m aware, no (entirely) AI-generated article has ever gone viral.