Image note: Substack’s contribution to the free speech forces of light triumphing is beyond question. But what comes up when you punch “software engineer” into the ‘Add Stock Photo’ feature is, shall we say, an aspirational demographic ‘reality’.
Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become… This is what we humble humans can offer, that AI can only mimic, the transcendent journey of the artist that forever grapples with his or her own shortcomings. This is where human genius resides, deeply embedded within, yet reaching beyond, those limitations.
Nick Cave, The Red Hand Files #218, 23/1/23
And then, I put in ‘a death metal song about the Rape of Nanking’ or something like that… the thing is that it’s [Suno – an AI song generator] good; it’s kind of bland, but you can see the direction of it. I just think the demoralising effect, or the humiliating effect that AI will have on us as a species will stop us caring about something like the artistic struggle. That we will just accept what is fed to us through these things; that we will be in awe of the banal. That’s to me the direction that it’s going… The future is that no-one will be able to make a great song; it’s just this machine will be able to do it. I find it all unbelievably disturbing. I’m not worried about my own job, or something like that. I’m not worried about being replaced; just what it’s saying about us, as human beings.
Nick Cave, The Australian, 7/3/25
Employment for computer programmers in the U.S. has plummeted to its lowest level since 1980—years before the internet existed
Fortune headline, 18/3/25
I’ve spent recent weeks trying to convey the following message to anybody open to hearing it – powerful AI is coming and we’re not ready.
A little while ago, the New York Times ran an article entitled ‘Powerful A.I. Is Coming. We’re Not Ready’. I highlight this not to suggest that the NYT is ripping me off, but rather in the (almost certainly forlorn) hope that those inclined to dismiss my predictions might at least listen to the Gray Lady’s pronouncements.
In a much-shared article, NYT tech columnist Kevin Roose noted AI is already outperforming humans in many domains and that Artificial General Intelligence (AGI) – “a general-purpose A.I. system that can do almost all cognitive tasks a human can do” – will likely arrive in the next year or two.
It’s always useful to follow the money. Roose observes that, whatever the broader public debate about AI’s potential, both nations and tech industry heavy hitters are voting with their wallets.
I believe that over the next decade, powerful A.I. will generate trillions of dollars in economic value and tilt the balance of political and military power toward the nations that control it — and that most governments and big corporations already view this as obvious, as evidenced by the huge sums of money they’re spending to get there first.
And here’s the money quote, reiterating the point I’ve been banging on endlessly about:
I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones, and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems… [AGI’s] arrival raises important economic, political and technological questions to which we currently have no answers.
Roose also notes that, in the space of a year or two, we’ve gone from AI tools that made coders a bit more efficient to ones that largely obviate the need for human involvement.
Today, software engineers tell me that A.I. does most of the actual coding for them, and that they increasingly feel that their job is to supervise the A.I. systems.
Jared Friedman, a partner at Y Combinator, a start-up accelerator, recently said a quarter of the accelerator’s current batch of start-ups were using A.I. to write nearly all their code.
“A year ago, they would’ve built their product from scratch — but now 95 percent of it is built by an A.I.,” he said.
In a spirit of “epistemic humility”, Roose concludes his article by conceding his prognostications could be wrong. That the rapid AI progress of recent times could run into a roadblock – a lack of energy, a dearth of powerful chips, an inability to make the – dare I say quantum – leap from Generative AI to Artificial General Intelligence.
But like me, Roose clearly believes AI will transform the world, even if it’s “in 2036, rather than 2026”. Either way, he argues we should be preparing now by “writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills.”
Speaking of obsolete technical skills, the Learn to Code class wars have now reached an unexpected and bitterly ironic conclusion.
Learn to Code: a history
As far as I can determine, “learn to code” was initially offered as good-faith career advice.
By the turn of the Millennium – around the time corporate-Democrat-in-chief Bill Clinton ushered China into the WTO – it was already apparent America’s old-school industries were being wiped out by either globalisation, automation, or some combination thereof.
I don’t imagine other nationalities were ever this optimistic, but a substantial proportion of Americans apparently believed displaced manual workers could be straightforwardly transformed into knowledge workers, with software-engineer-level salaries and working conditions.
Rather than mourning their old dirty, dangerous, dead-end jobs, America’s capsizing blue-collar workers were told to, yes, “learn to code”.
I always thought this made about as much sense as telling people they should “learn to brain surgeon”, “learn to supermodel” or “learn to professional athlete”.
The reason competent software engineers are (or were) paid substantial salaries, and frequently given an equity share in the business they labour for, is the same reason that brain surgeons, supermodels and football stars live high on the hog – they possess something that’s rare and highly valued. They are Noma reservations made flesh.
The average IQ of a software engineer is 130, two standard deviations above the mean. It appears an IQ of 115 – one standard deviation above the mean – is the minimum entry standard. An IQ cut-off of 115 eliminates around 84 per cent of the US population from consideration. The higher figure of 130 eliminates around 98 per cent.
I hasten to add that doesn’t mean 2-16 per cent of the population could become software engineers if they set their mind to it. You need to have the shape rotator rather than wordcel form of intelligence. I suspect you also need to have just the right amount of the ’tism. Enough to be meticulous and have highly developed logical reasoning and problem-solving skills, but not so much that you’re impossible to work with. In short, at most one per cent of the population ever had any real prospect of learning to code.
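For the numerically inclined, here’s a minimal sketch of where those percentages come from, assuming IQ scores follow a normal distribution with mean 100 and standard deviation 15 (the convention for most modern tests – my assumption, not something the sources above spell out):

```python
# Sanity-checking the IQ cut-off percentages quoted above.
# Assumes IQ is normally distributed with mean 100 and SD 15.
from scipy.stats import norm

MEAN, SD = 100, 15

for cutoff in (115, 130):
    # Fraction of the population scoring below the cut-off,
    # i.e. "eliminated from consideration".
    eliminated = norm.cdf(cutoff, loc=MEAN, scale=SD)
    print(f"IQ cut-off {cutoff}: eliminates {eliminated:.1%}, "
          f"leaves {1 - eliminated:.1%}")

# Output:
# IQ cut-off 115: eliminates 84.1%, leaves 15.9%
# IQ cut-off 130: eliminates 97.7%, leaves 2.3%
```

A cut-off one standard deviation above the mean always sits at the ~84th percentile, and two standard deviations at the ~98th, whatever the test.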
As an aside, the amount of money a software engineer who makes it to the top of their field can earn dwarfs anything a medical professional, supermodel, or professional athlete can aspire to. They are rarely referred to as such nowadays, but many of the world’s richest men – and they’re all still men, contra what Substack’s photo library proposes – are software engineers by trade.
Strange as it seems in retrospect, lots of Americans apparently convinced themselves there were vast numbers of Good Will Hunting types being tragically underemployed in America’s factories and mines. Salt-of-the-earth philosopher kings who would blossom just as soon as they were provided with a (long-withheld) opportunity to learn C++.
I seem to remember seeing multiple stories about unemployed Appalachian miners going to coding camp back in the day. I just asked ChatGPT to find said stories and this is the first thing it told me:
In February 2018, Belt Magazine reported on a class action lawsuit filed against Mined Minds, a nonprofit organization that promised to retrain unemployed coal miners for tech industry jobs in West Virginia and Pennsylvania. The lawsuit alleged that Mined Minds provided inadequate training and failed to deliver on promises of paid apprenticeships…
Coding bootcamps, both within and outside of Appalachia, are part of what sociologist Tressie McMillan Cottom calls the ecosystem of “Lower Ed”… McMillan Cottom has written, “When we offer more credentials in lieu of a stronger social contract, it is Lower Ed. When we ask for social insurance and get workforce training, it is Lower Ed.”
I fear many displaced workers, be they pink, blue or white collar, will soon be caught up in the ‘Lower Ed’ grift, but that’s a topic for another Musing.
The darkly amusing twist in the learn-to-code tale is that during the later stages of the legacy media’s collapse, many who’d long felt disrespected and dismissed by (snooty, university-educated, culturally upper-middle class, progressive) journalists tweeted “learn to code” at laid-off hacks.
I suspect a lot of professional-managerial class types will have that tweeted at them if they dare to publicly lament being rendered obsolescent in the next year or two. But that’s also a topic for another Musing.
The pertinent point is that even the minuscule number of miners and journalists who did actually learn to code now find themselves in pretty much the same position as every other knowledge worker. Gen AI can currently do about 95 per cent of their job, and AGI will be able to do 100 per cent of it in the near future.
About the only piece of good news I have is that the estimable Tyler Cowen believes it will take a while for the AI (or AGI) rubber to hit the road.
In a recent Marginal Revolution post, Cowen noted that if some advanced alien technology dropped from the sky, it would take a while for peoplekind (Rest in Power, JT) to figure out what to use it for and “how to integrate it with other things humans do”. He also notes:
There is a vast literature on the diffusion of new technologies (go ask Deep Research!). Historically it usually takes more time than the most sophisticated observers expect. Electricity for instance arguably took up to forty years. I do think AI will spread faster than that, but reading this literature will bring you back down to earth.
Next week, I’ll attempt to offer some strategies for surviving and – dare to dream! – maybe even thriving between now and the complete automation of human labour.