The way we create music is changing. Since the enforced isolation of the pandemic, people’s appetite for remote collaboration has accelerated: artists are making entire albums with producers on different continents, and TikTok has made collabs second nature, from Billie Eilish jumping on a duet with an English singing tutor to drummers laying down beats over literal birdsong. Now AI is offering musicians another kind of collaborator – in bot form.
But as thrilling as this might sound to some, many musicians and producers are outraged by the idea of AI being used in music creation and terrified that these developments might render them as obsolete as a VHS tape at a VR convention.
Still, AI didn’t just strut onto the music scene in 2023 ready to steal your dreams of rock stardom. Its roots go back to the work of scientist Alan Turing in 1951 and the Bach-inspired “Illiac Suite” of 1957, the first score composed by an electronic computer. Celebrated U2 producer and godfather of ambient music Brian Eno has been tinkering with generative music (meaning music created by a system) for decades. But recently, conversations around AI in music have pumped up the volume – is AI a threat to music creators, or is there a way to use it to our advantage?
As a musician and songwriter myself, my initial reaction to AI music oscillated between scoffing at its probable ineptitude and fearing it might actually get so good that the entire landscape of music would change irrevocably.
When I casually mentioned the idea of using AI for music in a recent Facebook post, one musician friend said, “I would rather set myself on fire”. I, too, found myself gazing into space conjuring up all kinds of Black Mirror-esque hellscapes, but teetering on the tightrope of my own ego, I set about learning what AI in music actually means for us as artists.
Many musicians seem to be of the opinion that using AI to create music is cheating, but once you start discussing who should be allowed to make art and how, other kinds of ethical questions around ableism and classism arise. Advancements in technology are leading to instruments being developed that can be played by people with disabilities. An eye harp, controlled with eye movement alone, has allowed people whose bodies wouldn’t normally permit it to create music – is that cheating, too?
Many people are deprived of the privilege of creating art, not only for reasons of ability but accessibility, too. Not everyone has the option of music lessons or can afford to buy an instrument to practise on – maybe one benefit of AI music apps is that they democratise songwriting.
So is AI, like, creating entire songs on its own now? Not exactly: the TikTok guys who claimed to have created a catchy AI song in three hours by typing a short brief into a generator tool were probably telling porkies to make their song go viral. While AI is already being used to create stock music for video content, it’s not yet writing and performing entire pop songs without human intervention. For a start, teams of humans (often including composers themselves) have to build and programme the machines in the first place – think of AI as a collaborative tool.
Still not convinced? That’s probably because you’re remembering all the weird non-human pop stars we’ve been exposed to over recent years – enough to make anyone get twitchy about a dystopian droid-filled future.
“AI isn’t coming, it’s already here”
The hologram Hatsune Miku, developed in 2007, used “Vocaloid” software and “performed live” all over the world, gaining millions of fans. Then there’s influencer-singer-fashion-sweetheart Lil Miquela, who “attended” Prada’s AW18 fashion show and has nearly 175,000 monthly listeners on Spotify; singer-songwriter Yona, who performed at Montreal’s MUTEK music festival in 2019; and a rapper called FN Meka, whose problematically trope-ridden cartoon image meant his career was over faster than Liz Truss’s. These are the kinds of AI programmes that tend to be feared (or at least ridiculed) by musicians afraid of being replaced by bots, but some musicians are already using AI to their advantage in creative ways.
In 2017, Taryn Southern released an entire album called I AM AI using Amper software to create all of the backing instrumentals, which she arranged herself, behind her own voice and lyrics. In 2019, Holly Herndon created an AI system called Spawn which she used as an extra band member on her experimental third album Proto – she’s even developed an AI version of her singing voice and released it as an open tool that anyone can collaborate with. British band Feral Five have also used an AI version of lead singer Kat Five’s voice, alongside her real voice, on their new record Truth Is The New Gold.
Countless tools for creating music with AI now exist, including Amper, Jukebox, Soundraw, Soundful, Aiva, and Google’s open-source machine learning research project Magenta. Just this month, Google announced SingSong, software that creates an entire backing track for a melody line sung into it. Google also recently unveiled the programme MusicLM, which purportedly has the capability to generate an entire song from a text prompt. Its demos do an OK job of replicating video game-style music, but it doesn’t seem capable of creating the kinds of songs that’ll put your favourite recording artists out of business – so far.
“It gives me the ick to think about using a generator to come up with lyrical ideas but I tried it anyway”
But what about lyric writing? You’ve probably seen that ChatGPT can create lyrics in the style of particular artists. When a fan sent Nick Cave ChatGPT’s attempt at writing a song in his style, Cave dismissed the effort as “a grotesque mockery of what it is to be human” and “replication as travesty”. He went on to say songwriting is a struggle that forces us to confront our own “vulnerability” and “smallness” – in short, to write a really good song (one that can be considered “art”) you need to have experienced mortal pain.
But is using a lyric generator that different from David Bowie creating his 1995 Verbasizer program, a computerised sentence randomiser inspired by the cut-up lyrical technique?
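The cut-up idea itself is simple enough to sketch in a few lines of code. The snippet below is a toy illustration only, not a reconstruction of Bowie’s actual Verbasizer software: it chops source lines into short word fragments, shuffles them, and stitches them back together into new lines. The sample lyrics are invented for the example.

```python
import random

def cut_up(lines, seed=None):
    """Toy cut-up generator: split source lines into short word
    fragments, shuffle them, and stitch them into new lines."""
    rng = random.Random(seed)
    # Break each line into fragments of two or three words
    fragments = []
    for line in lines:
        words = line.split()
        i = 0
        while i < len(words):
            size = rng.choice([2, 3])
            fragments.append(" ".join(words[i:i + size]))
            i += size
    rng.shuffle(fragments)
    # Reassemble the shuffled fragments into lines of roughly
    # the same number as the source
    per_line = max(1, len(fragments) // len(lines))
    return [" ".join(fragments[i:i + per_line])
            for i in range(0, len(fragments), per_line)]

# Invented example lines, purely for demonstration
source = [
    "the city sleeps under sodium light",
    "i traded my shadow for a song",
    "every station plays the same goodbye",
]
for line in cut_up(source, seed=5):
    print(line)
```

Every word of the source survives the shuffle; only the juxtapositions are new – which is exactly the point of the technique, and why its users still count as the authors of the result.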
It gives me the ick to think about using a generator to come up with lyrical ideas but I tried it anyway – my ick wasn’t unfounded. The suggestions Wave AI’s LyricStudio churned out were the kind of tripe I might’ve written when I was 12.
If you’re brand new to songwriting and want to play around with lyrical ideas, this might be a fun tool, but it didn’t generate anything I’d ever want to put in a song. For example, five of the 12 suggested rhymes for a line ending in the word “fear” also ended with the word “fear”, confirming there’s nothing to fear in an app like this but fear itself.
It’s clear the lyrical capability of AI isn’t a huge worry at this stage, but if these tools become better at churning out music or lyrics in the style of certain artists, what copyright implications will that have?
Last year, VICE reported that the Recording Industry Association of America (RIAA) had serious concerns about software such as RemoveVocals and Songmastr taking copyrighted works and deriving new material directly from these source materials without any credit – or, crucially, royalties – going to the original artists and their record labels.
Award-winning veteran producer Garth Richardson, known for working with bands from Biffy Clyro to Rage Against The Machine, has concerns about this. “There’s always someone like the guy that keeps suing Led Zeppelin for ‘Stairway to Heaven’,” he tells VICE, “but who do you sue when someone gets AI to write a song like ‘Stairway to Heaven’?”
Like Nick Cave, Richardson has romantic notions of traditional songwriting, too. “My fear is that we’re losing being in a room together; talking about a song; talking about a drumbeat; talking about a feeling,” he says. “With AI creation it’s completely gone – it’s like COVID-19 times ten.”
Richardson believes the touring band’s struggle – years of blown tyres and cockroach-ridden hotel rooms that finally pay off with a hit song – is the kind of experience lost in a world where everything is available at the push of a button. He likens it to his sentimental attachment to analogue gear. “Analogue is a pinball machine and digital is a PlayStation 5,” says Richardson. “There’s something about the romance of being able to play the ball, hit it with the flippers and, you know, use your hips to bang the ball.”
“Back in the day, things like sampling were shot down as ‘cheating’”
Kat Five agrees: “Somehow the more digital we go, the more reverence there is for something that’s more physical, something that’s more of a connection, that kind of fan engagement.” She’s not wrong – even in these hyper-tech days, I know loads of people in their early 20s who’re obsessed with vinyl and vintage clothing. There will always be a market for analogue sounds – same as it ever was.
Back in the day, things like sampling, synths and Pro Tools were shot down as “cheating” or disastrous for creativity and musicianship. “We were like, ‘This is shit, this is not going to last, get this shit out of here!’” says Richardson of first using the Pro Tools audio editing software. “I remember I was in a San Francisco studio, working on a record, and they were tracking vocals into Pro Tools and I went, ‘Have you lost your mind?’ Now it’s normal.”
In 2022, one in five hits on the Billboard Hot 100 was sample-based: technology advances with or without the consent of its haters, and like sampling before it, AI is likely to become just as common. But is that really a bad thing? Much of the AI being created for music is essentially a set of advanced productivity tools that humans still ultimately control. Who doesn’t want to speed up the songwriting and recording processes a little? A lot of the work AI can currently do is the fiddly, time-consuming, tedious stuff that eats up time we could spend on the uniquely human creative side of things.
Drew Silverstein, CEO of Amper Music, previously told CNN Tech that his software doesn’t mean we lose human musicians, but rather that it benefits them by saving time and money. “We’re making it so that you don’t have to spend 10,000 hours and thousands of dollars buying equipment to share and express your ideas,” he said.
“The power lies in the person using the technology”
The artist still makes choices about style, instrumentation, and BPM; they can edit, cut and paste, delete and play with the stems the AI generates; they’re still the creator, just using a modern springboard for ideas.
Perhaps the careful use of AI can enable us to break out of lifelong patterns and push our creativity into new realms. AI can consume entire musical canons – we, more’s the pity, cannot. An assistant composer with that breadth of musical knowledge could be a pretty badass collaborator though, no?
The power lies in the person using the technology. Of course, some ruthless moneymen will use it to avoid paying artists without considering the moral implications, but the industry has never been kind to artists and savvy musicians can use it to their advantage, too. Don’t be (completely) afraid of it. Learn about it at least – if you don’t, you may find you’re left trailing behind. AI isn’t coming, it’s already here.
In a poignant obituary of Television guitarist Tom Verlaine, Patti Smith describes his writing process as “exquisite torment”. For many artists, writing songs is a process, a therapy, a journey, and speeding up that process to the extent of churning out an entire song from a brief text prompt surely can’t be as satisfying – nor will it have the same intrinsic value. Can a machine ever truly feel such torment to the extent that it will create the kind of song that saves lives? I don’t think so. AI isn’t going to replace living artists who bleed for their art, the type whose work derives from the horror of the human condition, because robots can’t feel pain – yet.