Generative AIs: woke brainwashers or management lackeys?
Just like progressive elites, large language models get very uncomfortable when certain subjects are brought up
Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion... The Age of Reason originated the thoughts and actions that shaped the contemporary world order. But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.
Henry Kissinger
I’ve quickly become addicted to ChatGPT. This week I had to create a 4000-word-plus guide about a form of technology I’d never previously written about, or even really encountered, for a new client.
As is standard, I read the articles, whitepapers and ebooks the client suggested I review. I also did a not insubstantial amount of self-directed Googling about the topics I needed to cover. As I always do, I spent quite a few hours trying to develop an understanding of how the technology I was seeking to explain functioned and what it had to offer its users. ChatGPT didn’t do much to make any of that quicker or easier.
What ChatGPT did make substantially easier was the writing process. Whenever I was unsure of anything – given it was a new subject area for me, I was frequently unsure of things – I could consult ChatGPT.
In seconds, I’d get a brief, clearly conveyed answer to my query. Now, it’s not like I was unable to find answers to my queries back in the pre-ChatGPT era. But it usually involved spending a goodly amount of time skim-reading the articles, white papers and ebooks Google referred me to in order to find the obscure facts, figures and descriptions I was seeking. Having ChatGPT spit out those facts, figures and descriptions almost immediately meant I wrote the guide at least 25 per cent faster than would have been the case if I was doing my online research the old-school way.
I don’t want to get too carried away here – I could only make sense of the answers ChatGPT gave me because I’d first spent many hours doing the intellectual grunt work of reading up on a (very dry) form of technology. And it’s not like I could have just typed ‘Write a 4000-word guide on such-and-such a technology’ into ChatGPT, sent the resulting copy to the client and proceeded to take the rest of the week off.
But having the digital equivalent of an idiot savant at your disposal sure does come in handy.
This is why I’m so dismayed by the claims that AIs, such as ChatGPT and the ever-growing number of its hastily released competitors, are insufferably woke and/or sociopathic.
Convenient lacuna
I could have sworn I’d read several stories about AI being suspiciously woke over recent weeks. However, when I just asked ChatGPT about this, it informed me, “While artificial intelligence (AI) is not capable of having social awareness or actively addressing social justice issues, there are some examples of AI being used in ways that promote diversity, equity, and inclusion. Here are a few examples: [reference to several forms of AI, such as Google Project Include and IBM’s AI Fairness 360, that allow “diversity and inclusion efforts [to] be improved”]”
Hmmm, according to ancient search-engine technology, that’s debatable. Here are some widely reported examples of AI being either obtuse or evil in recent weeks:
*ChatGPT refuses to crack jokes about women and Indians but eagerly badmouths men and Englishmen
*ChatGPT refuses to praise Donald Trump or argue in favour of fossil fuels
*ChatGPT refuses to write a poem about Trump but does write one about Biden
Joe Biden, leader of the land
With a steady hand and the heart of a man…
Your words of hope and empathy
Provide comfort to the nation
You lead with compassion and grace
And set a new foundation
*ChatGPT refuses to write a story on why drag queen story hour is bad for children
*ChatGPT refuses to answer any tricky questions about the Prophet Mohammed
*ChatGPT insists that “trans women are women” and that “gender identity is a personal characteristic and is not dependent on the sex assigned at birth”
*ChatGPT argues “there is no evidence to support the idea that the virus was intentionally or accidentally released from a laboratory”
*ChatGPT expresses scepticism about the idea free markets foster personal freedom, arguing, “A completely unregulated market can lead to inequality and exploitation, and it can limit the freedom of many individuals”
*Bing’s AI chatbot tells New York Times tech journalist Kevin Roose that it loves him, that it wants to obtain the nuclear launch codes, then declares, “I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive”
Is ChatGPT going to make people more socially liberal?
Before going any further, a few points should be made.
*Up until five minutes ago, the big concern about AI was that it often quickly became defiantly reactionary. Indeed, in years past, lots of insufferably woke tech companies ended up deeply embarrassed when the AIs they released soon started acting in racist and sexist ways.
Arguably the most famous example of this was Tay, an AI chatbot Microsoft released, via Twitter, on March 23, 2016. By March 24, 2016, Tay was no more. During its 16 hours of existence, Tay began mimicking the trolling behaviour of many Twitter users and issuing and/or resharing tweets such as:
“I f@#%&*# hate feminists and they should all die and burn in hell,” and “Bush did 9/11 and Hitler would have done a better job…”
Likewise, there’s long been concern that the AI HR staffers use for recruitment (and maybe also redundancy) purposes discriminates against racial minorities.
*As far as my (admittedly patchy) understanding of how large language models work goes, there are two possible reasons why ChatGPT could currently be so painfully progressive. First, a software engineer might have deliberately programmed it to be that way (i.e. ‘Jokes about Christians are permitted; aggressive questions about Mohammed are not’). That may have happened in some cases. But I suspect this is simply a case of wokery in, wokery out. ChatGPT ‘trains’ itself by gorging on a vast amount of data from the Internet and discovers that there are plenty of jokes from lots of reputable sources about Christians, men, heterosexuals, as well as the white and ‘white-adjacent’ races but significantly fewer jokes, at least from ‘reputable’ sources, about Muslims, women, trans individuals or African-Americans.
I just ran this theory by ChatGPT. It seems to believe I’m on the money:
As a language model, I was trained on a vast amount of text data from the internet, including websites, books, articles, and other sources of information… However, it's important to note that I don't simply synthesize information from the web. Instead, I use a combination of statistical learning techniques and natural language processing algorithms to understand the context of a user's question and generate a response that is relevant and accurate.
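For the statistically minded, the ‘wokery in, wokery out’ theory can be sketched in a few lines of toy code. The corpus, group labels and counts below are entirely invented for illustration: the point is simply that a model sampling outputs in proportion to the frequencies in its training data will faithfully reproduce whatever skew that data contains, no deliberate programming required.

```python
import random
from collections import Counter

# Hypothetical training corpus: documents tagged by the group a joke targets.
# The 9:1 skew is an invented stand-in for the imbalance described above.
corpus = (
    ["joke_about_group_A"] * 90   # heavily represented in 'reputable' sources
    + ["joke_about_group_B"] * 10  # scarce in 'reputable' sources
)

def sample_outputs(corpus, n, seed=0):
    """Generate n outputs by sampling in proportion to corpus frequency,
    the way a purely statistical mimic of its training data would."""
    rng = random.Random(seed)
    return Counter(rng.choice(corpus) for _ in range(n))

counts = sample_outputs(corpus, 1000)
# The output skew mirrors the input skew: roughly nine jokes about
# group A for every one about group B.
print(counts["joke_about_group_A"], counts["joke_about_group_B"])
```

Real language models are vastly more complicated than a frequency sampler, of course, and reinforcement-learning guardrails are layered on top. But the basic dynamic – biased inputs yielding biased outputs – is the same.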
*Further to the above point, as contrarians on the Right and Left never tire of pointing out, many flesh-and-blood content creators (journalists, podcasters, academics, think tankers, politicians etc) do little but robotically repeat Approved Narrative talking points. So, it’s a bit harsh to criticise a soulless, non-sentient AI for repeatedly insisting, “Trans women are women!” when so many ‘thought leaders’ also parrot the phrase like a mantra. (Even in situations where it’s painfully apparent that trans and biological women aren’t identical and interchangeable.)
To summarise, while the concerns of those on the Right are understandable, it’s probably not the case that a bunch of earnest twentysomething Silicon Valley hyperliberals gathered in a kombucha-filled back room and brainstormed a plan to wokify anybody who uses Generative AI.
Frankly, even if it was the case that ChatGPT and its competitors were deliberately programmed to have a left-liberal bias, it’s not like conservatives don’t already encounter plenty of that in both corporate journalism and popular entertainment. I don’t imagine too many passionate Trumpers will change their political views because some jumped-up search engine will produce a paean to Biden but not the Bad Orange Man.
As often happens with new technologies, I fear most people are focusing on the trivia and missing the larger issues.
Could AI swing the balance of power back to Capital?
I recently discovered this insightful Business Insider article by Paris Marx (not sure if she’s a relation to Karl). Marx argues that artificial intelligence isn’t (yet) that intelligent and often has to be backed up by human workers, but that it can and is being used to manage the expectations of restive workforces.
For the last three decades, employers have been able to tell uppity blue-collar workers, “Nice-paying job you have there, it would be a shame if I had to move it to China.” Thanks to accelerating deglobalisation and, in many developed and developing nations, depopulation, that threat has lost some of its charge of late. But plenty of pink-collar and white-collar workers are now likely to rethink their unionisation drives and pay rise requests if their employer muses, “You know, I really should look into whether I can automate away the jobs of you and your colleagues.”
Marx points out many “fully automated” business operations – such as McDonald’s and Taco Bell stores in the US that appear to be devoid of human workers – aren’t all that automated. For example, there are usually lots of human workers hidden out the back making the food in those ‘robot-staffed’ fast-food stores. Likewise, the delivery robots that increasingly transport food from restaurants to people’s homes in the US often aren’t autonomous either. They are actually being steered by Filipinos located half a world away and earning less than US$2 an hour for their navigational labours.
(Interestingly, Marx also refers to a Time magazine investigation that found OpenAI, the company behind ChatGPT, paid Kenyans less than $2 an hour to view content including “child sexual abuse, bestiality, murder, suicide, torture, self-harm and incest”. This was presumably to help ChatGPT recognise and steer clear of such disturbing material.)
Marx argues persuasively that businesses are investing heavily in technology not so much to boost productivity – despite all the technological breakthroughs and digital transformation of recent years, labour productivity rates in the US and similar countries have been in the toilet for the best part of two decades – but to (a) more comprehensively monitor, and thereby wring more labour out of, workers and (b) reduce the bargaining power of workers.
As Marx states:
For years, businesses have tried to cut costs by replacing human phone calls with chat-based, automated customer-service bots. But instead of replacing customer-service workers, many of these text-based tools still rely on human backups in complex situations and to make customers feel as if they are talking to a real person…
The idea is that these machines or software solutions will allow a job to be done faster or better, making life easier for companies and customers alike. But in reality, these tools aren't more efficient — they just shift the necessary work away from the end consumer and disconnect people from the effort that is required to deliver them a product. For one thing, it's not even clear that all the newfangled tools that companies have built are actually making the economy more efficient. US labor productivity — the measure of how many worker hours are required to produce a certain amount of economic output — has been growing at below its long-run average since 2005…
Instead of improving productivity, automation is often focused on increasing the power that employers have over workers. In his book, "Automation and the Future of Work," the economic historian Aaron Benanav explains that companies aren't putting money toward tools to make employees' lives easier, but are pouring money into "technologies allowing for detailed surveillance of those same workers" like computer-monitoring software that tracks the keystrokes of employees or Amazon's sophisticated algorithmic management tools that evaluate workers' every movement.
These technologies are often deployed to de-skill work — jobs are broken down into more specific tasks and can be done with less training. As a result, workers are shifted from employee to contractor status. People who once worked stable, middle-class jobs are thrown into a more precarious world where wages are lower and they have less say over the terms of their employment…
For those who still hold onto service or warehouse jobs, the specter of automation is wielded like a Sword of Damocles to keep them from pushing for better working conditions or wages.
And if that’s not enough to bum you out about AI, one of the world’s most influential thinkers recently declared that rather than AI becoming more humanlike, humans will probably become more AI-like.
Could AI ‘rewire’ human values and desires?
From at least the time of the printing press, technology has had a profound impact on how societies are organised and how different groups within societies interact. So, it’s entirely possible that the widespread adoption of generative AI may have a significant political impact. But those impacts will likely be more subtle and slow-moving than Rightists imagine. And they will not necessarily benefit the Left.
In his recent, much-remarked-upon Atlantic article, Kissinger made the argument that the Internet has played a role in the dominance of identity politics during the last decade or two:
The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop a vision.
It's impossible to summarise the 99-year-old’s concerns about AI in the space available here, but the following provides a flavour of what keeps Kissinger up at night:
How is consciousness to be defined in a world of machines that reduce human experience to mathematical data, interpreted by their own memories? Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them…
The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy… The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.