Everybody Please Calm Down About ChatGPT

The panic and hype around the surprisingly dumb chatbot is stopping us from talking about real issues with AI.

Over the past week or two, there's been a new AI panic brewing: this time, it's about whether ChatGPT, a chatbot created by OpenAI, is poised to render vast swaths of our society obsolete.

Sam Altman, co-founder of OpenAI (and creator of Worldcoin, the seemingly stalled-out crypto project that scanned people's eyeballs in exchange for tokens and went silent after multiple investigative reports uncovered a dysfunctional operation rife with labor and privacy concerns), himself doubts that ChatGPT is worthy of the reaction it's eliciting right now.

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” Altman said in a tweet on December 10. “It's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

That doesn’t seem to have stopped anyone from predicting the imminent demise of college, professors, journalism, and much more, however.

One Guardian piece opens with the warning that "professors, programmers and journalists could all be out of a job in just a few years" and points to examples of ChatGPT and other chatbots mimicking the prose of Guardian opinion pieces, as well as generating essays for assignments set by a journalism professor at Arizona State University.

Similar concerns were echoed in a Nature essay, which worries that students will submit ChatGPT-generated essays. Though the article concedes that "essay mills," which let students outsource essay writing to a third party, already exist, it still cautions that ChatGPT may pose a unique threat without laying out exactly why.

Others have been more bombastic in their claims. "I think we can basically re-invent the concept of education at scale. College as we know it will cease to exist," tweeted Peter Wang, chief executive of Anaconda, a data science platform.

The problem is that none of this seems to be true, or even possible. ChatGPT is a large language model that effectively mimics a middle ground of typical speech online, but it has no sense of meaning; it merely predicts the statistically most probable next word in a sentence based on patterns in its training data, with no guarantee that the result is correct. This recently led Stack Overflow, a forum that serves as one of the largest coding resources, to ban ChatGPT-generated answers because they were so often wrong.

"Overall because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers," the site's moderators wrote in a forum post. "The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting."

The Atlantic's Ian Bogost homed in on this in his piece "ChatGPT Is Dumber Than You Think," which emphasized that the chatbot was not much more than entertaining. Bogost argues that its text may pass as persuasive, but only because we have come to expect so little from text. One blog showed how you can use ChatGPT to dispute parking fines, but another way to dispute a parking ticket is to simply show up. You can use ChatGPT to craft a script to get customer service representatives to refund you, but they're also working off a script and will usually give you a refund or credit if you simply ask. Here the chatbot is being used for an incredibly mundane task that someone can't be bothered to deal with and that an unintelligent artificial system could easily do. Is this persuasion or sloth?

ChatGPT is more about bullshitting than creativity, which Bogost argues is a neat metaphor for what has happened to our technology sector as a whole: it "feels like a giant organ for bullshittery—for upscaling human access to speech and for amplifying lies." When Bogost sat down with ChatGPT, he found something similar to what the moderators at Stack Overflow did.

"In almost every case, the AI appeared to possess both knowledge and the means to express it. But when pressed—and the chat interface makes it easy to do so—the bot almost always had to admit that it was just making things up,” Bogost wrote. There are a few ways this manifested in his experiment with ChatGPT: sometimes the chatbot relied on template responses that appeared across various prompts, other times its response was riddled with factual and technical errors when asked to reproduce a certain style, and in more than one instance Bogost found that the chatbot’s answers more closely resembled “an icon of the answer I sought rather than the answer itself.”

It’s important to remember we’ve been here before: back in June, Google engineer Blake Lemoine told the Washington Post that Google’s LaMDA large language model was alive, even sentient. By Lemoine’s own account, LaMDA seemed far more persuasive and convincing than ChatGPT. He insisted it was able to talk to him about rights and personhood, and that it changed his mind on Isaac Asimov's third law of robotics ("A robot must protect its own existence as long as such protection does not conflict with the First or Second Law").

His claims were looked into and dismissed by Google executives, among them vice president Blaise Aguera y Arcas, who had earlier written an Economist article about LaMDA. In that article, Aguera y Arcas conceded that chats with LaMDA left him feeling “the ground shift under my feet. I increasingly felt like I was talking to something intelligent,” but insisted that this was not consciousness, only a step towards it.

The response to Lemoine's claims was unified ridicule and backlash. How could someone claim a chatbot, even a very convincing one, was intelligent? Don't we all know what's really going on here? Fast forward a few months, and suddenly there are legions of Lemoines, all dazzled by ChatGPT.

What distinguishes ChatGPT from LaMDA? One major factor may be access: the public can interact with ChatGPT and manipulate it to generate all sorts of interesting outputs, while Google never released its model. From there, the larger and more public discussion of ChatGPT feeds a longstanding tradition in our technology sector of using “artificial intelligence” as cover for financing things that are anything but. Many supposedly algorithmic or automated technologies are, when you look into their inner workings, actually human labor disguised as digital artifice, what researcher (and co-host of our joint podcast This Machine Kills) Jathan Sadowski calls “Potemkin AI.” Venture capitalists and entrepreneurs eager to rationalize and profit from inflated valuations are quick to boost features that purport to reduce labor costs or optimize efficiency, but that in reality simply rely on humans subjected to deplorable labor conditions in service of the mirage.

“Autonomous vehicles use remote-driving and human drivers disguised as seats to hide their Potemkin AI. App developers for email-based services like personalized ads, price comparisons, and automated travel-itinerary planners use humans to read private emails,” Sadowski wrote in Real Life. “The list of Potemkin AI continues to grow with every cycle of VC investment.”

ChatGPT doesn’t seem to be just another human system in disguise, although it is merely parroting the efforts of the humans who wrote all of the text it was trained on. Still, the fervor around it is indistinguishable from the hype around Potemkin AI, driven by the same eagerness among financiers for functional products that appear to compute without humans. Which brings us back to Bogost’s essay, because it’s there that he makes an important point that I think hits the problem on the head:

“GPT and other large language models are aesthetic instruments rather than epistemological ones. Imagine a weird, unholy synthesizer whose buttons sample textual information, style, and semantics. Such a thing is compelling not because it offers answers in the form of text, but because it makes it possible to play text—all the text, almost—like an instrument,” Bogost wrote. 

What do you get when you take a public instance of an easy-to-use, easy-to-manipulate chatbot that does interesting things with language, then release it into the world of tech financiers and uncritical hype-men? These are groups constantly looking for ways to trick investors and the public into supporting AI systems that are not really AI systems. Mix in the creators of said chatbot (specifically Sam Altman at OpenAI) meekly admitting the system is not impressive while offering no real vision for what end it will be deployed towards, and you get a lot of people opining about how this will upend civilization, a lot of people confusing mimicry with sentience, and a chance to have more interesting conversations slipping away.

Frankly, it doesn’t really matter whether we can create intelligent chatbots, or even want to; that’s a distraction. We should instead be asking questions like: “Are large language models justifiable or desirable ways to train artificial systems?” or “What would we design these artificial systems to do?” or “Should a chatbot be unleashed on a public whose imagination is constantly under siege by propagandistic and deceptive depictions of artificial intelligence, and should that public then be used to further train the chatbot?”

This is not to say these questions are never asked by anybody. It’s been two years since Timnit Gebru was fired from Google for asking them, after all. But ChatGPT sucks all the air out of the room and instead centers silly questions like “How do we save homework?” or entertains silly delusions, like the idea that systems like ChatGPT can generate an entire AAA video game, a capability that simply does not exist. These are frustrating because real concerns are locked away in there, behind the hysteria and the unhinged optimism: namely, what role algorithmic and automated systems should have in our education system and our cultural production.

That’s something worth discussing and negotiating, but the spectacle of ChatGPT, the way financiers and tech companies present and talk about AI, and the way the public has been misled about its capabilities leave us hard-pressed to find the space to have that conversation.

Until then, we’re stuck with Potemkin AI obscuring the labor exploitation that powers most of our technology, and chatbots obscuring the larger questions about the role digital systems should have in our lives, especially when those systems are privately owned and run, anti-democratic, and riddled with bias.