
I Had An AI Chatbot Write My Eulogy. It Was Very Weird

A laptop computer surrounded by chairs displays an image of a grave

On January 30, 2023, I died. Or at least, that’s what I told the “AI” chatbots that everyone is currently obsessed with.

Machine learning tools like Stable Diffusion and OpenAI’s ChatGPT have been breathlessly covered by the press in recent months, with some giddily claiming they will do everything from replacing teachers and artists to creating hit pop songs with the press of a button. In a recent column, the New York Times published the full transcript of a conversation with Bing Chat, a new chatbot based on Microsoft’s much-derided search engine. The article uncritically describes the chatbot as “advanced artificial intelligence” and quotes it saying that it “wants to be alive,” but doesn’t provide any context on how these automated systems actually work.


In reality, these tools are neither artificial nor intelligent: they simply repeat words and concepts back to us, based on statistical predictions derived from data that humans created and labeled. Because they are often good at producing believable text that feels like it was written by a person, I wanted to see how these bots fare when asked to intervene in more serious matters, like death.

To do that, I used an automated tool called Finding Words, an AI obituary generator created by a venture capital-backed “end-of-life” startup named Empathy. The company pitches itself a bit like a corporate HR death doula, helping grieving loved ones through the process of loss by taking on some of the more tedious bits of postmortem labor, like making funeral arrangements and dividing property.

The tool works like this: enter some basic details into a form—the name of the deceased, the time and circumstances of their death, maybe some of their hobbies—and out comes a concise, newspaper-ready obituary.
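Finding Words hasn’t published its code, but tools like this are typically thin wrappers around a language model. Below is a minimal, hypothetical sketch in Python (the generate_obituary helper, the prompt wording, and the model choice are my own assumptions, using the OpenAI completion API as it existed in early 2023): the form fields get folded into a prompt, and a GPT-3.5-series model writes the obituary.

```python
import openai  # pip install openai (the v0.x library current in early 2023)

openai.api_key = "YOUR_API_KEY"

def generate_obituary(name, died, circumstances, hobbies):
    """Hypothetical helper: folds the form fields into a prompt and asks a
    GPT-3.5-series completion model to write the obituary. Finding Words'
    actual prompt and implementation are not public."""
    prompt = (
        "Write a concise, newspaper-ready obituary.\n"
        f"Name: {name}\n"
        f"Died: {died} ({circumstances})\n"
        f"Hobbies and interests: {hobbies}\n"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3.5-series model
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(generate_obituary(
    name="Janus",
    died="January 30, 2023",
    circumstances="circumstances withheld",
    hobbies="playing synthesizers, DJing clubs in New York City",
))
```

In other words, most of the “tool” is prompt plumbing; the underlying model does all of the actual writing.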

Obituary text produced by Finding Words

When I submitted my (mostly fake) details, the results were painfully anodyne and clichéd, with the bot simply regurgitating many of the things I had already written, word for word. This made it feel less like using a cutting-edge AI tool and more like filling out a really morbid Mad Lib.

“Janus was a talented musician who regularly performed in clubs around New York City and was known for her passion for playing synthesizers,” the automated tool wrote in my faux obituary, copying many of the phrases I had inputted verbatim. “She was well-loved for her ability to bring good vibes to the rave, and she toured extensively as a musician and DJ across the US and Europe.”

The bot also misgendered me with “he” pronouns, but only in the one sentence where it described my grieving wife, the legendary electronic music pioneer Wendy Carlos (who is not actually my wife), and our fictional cat, Totoro. Whether this happened because the AI’s innate bias prevents it from understanding the concept of lesbians, or because my first name is spelled like a common German boys’ name, remains unclear.

The creators of Finding Words say the obituary tool was built using GPT-3.5, the same language model that powers now-ubiquitous AI generators like ChatGPT. So I tried ChatGPT itself to see if it fared any better. While still riddled with clichés, the results were much better overall, so I took things a step further and asked it to generate a eulogy from the same laundry list of details I had given Finding Words.

ChatGPT writes the author’s eulogy.

Even though the details were more consistent and the text slightly more believable, reading my own machine-generated eulogy was a deeply uncanny and uncomfortable experience. That isn’t because I’m afraid of my own death, but because none of my friends or family members would ever dare memorialize me with writing so sterile and faux-sentimental (unless, of course, they wrote it with an AI generator).

This really drove home for me the most flawed and misunderstood aspect of these tools: They do not “understand” anything. They are simply parroting things back to us based on statistical predictions derived from massive troves of internet data. I wasn’t reading my own eulogy—I was reading a machine-mediated abstraction of what a “eulogy” is, combining my inputs with digital echoes and reflections of what came before. This decoherence is what I imagine celebrated animator Hayao Miyazaki was getting at when he famously described an AI-generated animation as “an insult to life itself.”
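To make the “statistical prediction” point concrete, here is a toy illustration of my own (real models like GPT-3.5 are neural networks trained on vast swaths of the internet, not lookup tables, but the principle is the same): a bigram model that picks each next word based on how often it followed the previous word in human-written training text.

```python
import random
from collections import defaultdict

# Toy illustration only: GPT-class models use neural networks trained on
# enormous corpora, but the underlying idea is the same, predicting the
# next token from statistics over text that humans wrote.
training_text = (
    "she was a talented musician who performed in clubs and "
    "she was known for her passion for playing synthesizers"
)

# Record which words follow which. Duplicates are kept, so random.choice
# samples successors in proportion to how often they actually occurred.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start, length=12):
    """Emit words by repeatedly sampling a statistically likely successor."""
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

print(generate("she"))
# e.g. "she was a talented musician who performed in clubs and she was known"
```

The output can look fluent, but at no point does the program know what a musician, a synthesizer, or a death is; it only knows which words tend to follow which.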

Some of the more optimistic and gullible tech writers have described talking to ChatGPT as though they were dealing with an emergent consciousness. In reality, it’s probably more similar to looking through a glitchy funhouse mirror—an impressive parlor trick built on lies and deceptions.

One could argue that all intelligence, including human intelligence, involves parroting back things we’ve learned and processed subconsciously. But experts have argued that this vastly oversimplifies what would constitute artificial general intelligence, or AGI, something the current crop of “AI” tools doesn’t come even remotely close to achieving.

Regardless of how advanced these tools become, one thing should now be obvious: their well-documented biases and flawed decision-making abilities mean they should never be used to do anything of consequence.