ChatGPT is Conservatism on Steroids


24th January 2023


When OpenAI’s ‘ChatGPT’—an artificial intelligence chatbot—was released to the world a couple of months ago, the predictable backlash sprang mostly from two opposite ends of the political spectrum. Conservatives didn’t like how it undermined the traditional way of doing things; progressives thought that its learning methods left the technology liable to produce racist and misogynistic output.

Admittedly, I wasn’t altogether taken in by the initial hype. I am not sure if that was some inner conservatism coming out or just a general romanticism that leaves one sceptical of the value of robots. But I persisted anyway, and discovered that some of the technology is actually quite amusing—for instance, a user might spend hours asking the robot to write poems about niche topics of interest, incorporating names, style, and various other feats of the imagination.

It also became apparent that the creators of ChatGPT clearly hadn’t intended for it to assume a dystopian dimension. The AI is useful for coders; for those who need a concise explanation of a particular topic; and for individuals looking for an amusing way to pass the time. ‘As an artificial intelligence, I am not capable of experiencing feelings or having personal beliefs in the same way that humans do’ was a common response the chatbot gave me when prompted for its own opinions. ChatGPT itself describes OpenAI as an organisation that seeks to promote friendly AI in a responsible and ethical way.

And if I had to detect a political bias, I thought that it was actually pretty liberal at times. It told me, for example, that ‘All individuals have the right to define their own gender identity and to be respected and accepted for who they are’. We’ll leave aside, for the moment, the fact that such a conception of ‘rights’ is a relatively new human invention—not quite the objective analysis that one might hope for from a robot.

In any case, what is the artificial intelligence actually trained on? It is the collective contribution of hundreds of millions of human beings, amounting to 570GB of data from books, webtexts, Wikipedia and other pieces of Internet writing. Conservatism is a social position which advocates that change come slowly—we advance as a human race because of the mistakes, discoveries, and lessons amassed over centuries by humanity as a whole; not, predominantly, because of the actions and ideas of single, often radical, human beings who claim to know better than the scores of ancestors that came before them.

There is nothing radical or single-minded about ChatGPT. It is produced on an elaborately conservative model. More or less (I will come to the exceptions shortly) everything the chatbot has come to know is based on Internet text. The difference from the conservative model as applied to the human race is that while humanity ideally advances by collective knowledge, individuals are still capable of producing unique reactions and ideas—or at least, anyone who believes in some measure of free will can hold steadfast to this position. Insofar as ChatGPT is concerned, this is not a possibility at all, since the AI is nothing more and nothing less than the sum of its parts. ChatGPT is entirely constrained by the agents within its closed and reactionary system and therefore cannot produce anything essentially unique or original in the way that might be possible for humans.

One takeaway from all this is that the progressives have a point when they lament bias within the AI. The model was trained using data from the Internet—approximately 300 billion words were fed into the system. Its biases thus reflect the quirks and associations found on the web. According to Melanie Mitchell, a professor at the Santa Fe Institute studying artificial intelligence, the problem is that systems like ChatGPT are “making massive statistical associations among words and phrases…when they start then generating new language, they rely on those associations to generate the language, which itself can be biased in racist, sexist and other ways.”
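Mitchell’s point can be illustrated with a deliberately tiny sketch—a toy bigram model, nothing like ChatGPT’s actual architecture, trained on a made-up corpus of my own invention. Because the model can only replay word-pairings it has seen, any skew in the training text reappears wholesale in what it generates:

```python
import random
from collections import defaultdict

# Hypothetical miniature corpus; the skewed pairings are deliberate.
corpus = (
    "the scientist is brilliant . the scientist is logical . "
    "the nurse is caring . the nurse is gentle ."
).split()

# Record which word follows which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, length=4, seed=0):
    """Emit `length` words by repeatedly sampling a recorded successor."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)

print(generate("scientist"))
```

Every sentence the toy model produces is stitched together from associations already present in its corpus—it has no mechanism for generating a pairing it never saw, which is the closed, reactionary system in miniature.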

Here is an offending example taken from Twitter:

[Embedded tweet: screenshot captioned ‘ChatGPT proves that AI still has a racism problem’ – New Statesman]

As you can see, some of the faux pas are ridiculously stereotypical to the point that they are amusing.

The use of ‘guardrails’ to decline inappropriate requests admittedly indicates some imposition of control outside the ‘system’. Yet this influence produces results that are just as autocratic—the AI is simply swung in a particular direction. One dictator—a mass of billions of words found across the Internet—is simply replaced by another—a human being that now instructs the robot directly.

So we’ve identified a potential issue in the form of bias that is hardwired into the artificial intelligence. But the question that many progressives are guilty of not sufficiently answering is: what do we actually do about that? At present I don’t think there’s much that can be done by way of a sensible remedy. OpenAI claims to have used a diverse array of sources which reflect a wide variety of communities in order to help mitigate the risk of bias.

In the final stages of ChatGPT’s training, the technology undergoes an extensive feedback-and-improvement phase in which human reviewers are hired and given a general mission statement. The reviewers apply a reinforcement learning method to the model—rather as one might train a dog: good action = treat; bad action = punishment. Human evaluation is employed to identify issues of bias that may not be apparent in quantitative evaluation—though as mentioned previously and illustrated by the aforementioned ‘gender identity’ example, I am sceptical of how ‘bias-free’ this method actually is.
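The treat-and-punishment idea can be sketched in a few lines. This is emphatically not OpenAI’s actual pipeline—the replies, the rater, and the update rule are all invented for illustration—but it shows the mechanism: rated replies accumulate reward or penalty, and the model comes to prefer whatever the raters favoured:

```python
# Toy reward-based feedback loop (illustrative only, not OpenAI's method).
# Each candidate reply carries a preference score the "training" adjusts.
scores = {"helpful reply": 0.0, "biased reply": 0.0}

def human_feedback(reply):
    # Hypothetical rater: treat (+1) for the good reply, punishment (-1) otherwise.
    return 1.0 if reply == "helpful reply" else -1.0

learning_rate = 0.5
for _ in range(10):  # ten rounds of rater feedback
    for reply in scores:
        scores[reply] += learning_rate * human_feedback(reply)

# After training, the model prefers whichever reply the raters rewarded.
best = max(scores, key=scores.get)
print(best)  # -> helpful reply
```

The point of the sketch is the one made above: the raters’ mission statement, not the model, decides which direction the scores drift—one dictator replaced by another.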

Regardless, it is fascinating to me that ChatGPT in its raw state might reflect the actual state of our society—or at least, the ‘Internet society’. When users dig deep in their prompting they unearth the methods of a new kind of social science—however worrying (or humorous) the outcome.

Although this new AI has radical implications unseen since Gutenberg’s invention of the printing press, and despite the fact that dystopian societies are by no means an impossibility, I cannot help but notice that human beings have a tendency to simply absorb technological ideas into their day-to-day routine, or otherwise not really take much notice at all. They’re the revolutions that never come: it happened with Web 3.0, it happened with Bitcoin and now it’s happening with ChatGPT. And for all the AI enthusiasts out there: if you talk to those within your social circle about the technology I am sure that they will paint a picture of how nothing like it has been seen before (which is true) or in some extreme cases, that doctors, lawyers, and software engineers will become obsolete within the next 5 years (not so likely).

But if you ask the average Briton about ChatGPT, it is likely that he or she won’t even know what it is.

I wonder when there will be a technological invention that truly and immediately alters the way that humans fundamentally interact with one another and lives up to its promises for a brave new world.

Only conservatism can save us from conservatism on steroids.

Written by Noor Qurashi
