The Psychology of Human vs. AI Music: Authenticity, Emotion and the Pursuit of Imperfection
Australian artist, producer and DJ HAAi was working in the studio on her latest full-length album when she noticed a common function on a vocal harmoniser plug-in. For the first time, the name struck her: “Humanize,” it said. A feature that adds random variance to each note, making it slightly out of time, or different in volume or pitch, from the last. In short — more human. “The idea of something completely synthetic trying to make an actual person sound more human is crazy,” she says.
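For readers curious what a control like this does under the hood, here is a minimal Python sketch of the general idea: small random offsets applied to each note's timing, loudness and pitch. The parameter names and ranges are assumptions for illustration, not the actual plug-in's implementation.

```python
import random

def humanize(notes, timing_ms=10, velocity_range=8, pitch_cents=5):
    """Apply small random offsets to timing, loudness and pitch.

    Illustrative sketch of a generic 'humanize' function; each note is
    a dict with 'start_ms', 'velocity' and 'pitch_cents'. Ranges and
    field names are hypothetical.
    """
    humanized = []
    for note in notes:
        humanized.append({
            # Nudge each note slightly ahead of or behind the grid.
            "start_ms": note["start_ms"] + random.uniform(-timing_ms, timing_ms),
            # Vary loudness a little, clamped to the MIDI velocity range.
            "velocity": max(1, min(127, note["velocity"]
                                   + random.randint(-velocity_range, velocity_range))),
            # Drift pitch by a few cents so notes aren't perfectly in tune.
            "pitch_cents": note["pitch_cents"] + random.uniform(-pitch_cents, pitch_cents),
        })
    return humanized
```

The point of the randomness is exactly what the plug-in's name suggests: no human performance lands every note at the same instant, volume and pitch, so the jitter makes a synthetic part read as played rather than programmed.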
In the most recent episode of The Artists of Sound, HAAi describes how her album, titled HUMANiSE in an act of reclamation, explores the interplay between the human and the synthetic. “HUMANiSE, for me, is sonically meant to sound like a battle between what’s real and what’s not. Human error is really important in music and something that I feel can’t really be manufactured.”
Her album prompts several questions. What does it mean for us as listeners to detect non-human sounds in music? How does AI-generated music make us feel? And why has this tension become so important to HAAi?
Conflicting Reactions
As AI continues to influence, manipulate and produce music, both musicians and music lovers are learning to listen a little closer — with a spectrum of responses.
Some studies suggest that we tend to react negatively to music we believe is created by AI. We don’t see value in it. We appreciate it less. It often feels soulless, inorganic and robotic — missing the imperfections that make human-made music so authentic and relatable. We find it harder to connect emotionally to AI music and we even experience symptoms of physiological stress: we’re more alert and our hearts beat quicker. These effects, this research suggests, are often stronger among those of us who believe creativity to be a uniquely human endeavour.
Other research complicates this.
A recent paper on the emotional impact of AI-generated versus human-composed music found that, while we work harder mentally to process and decode AI-generated music, we can still respond to it emotionally. The music feels less familiar, and therefore requires greater brain resources to understand than human-composed patterns, which we pick up instantly. But emotionally, this blind study indicates, we often find AI-generated music to be “more arousing”, more exciting, more stimulating. Some of it, in other words, enthralls us.
Perhaps our perception of authorship shapes our judgment more than the music itself. And as AI-generated music improves, we might find ourselves increasingly caught up in the human versus AI battle that HUMANiSE unpacks.

Leaps and Evolutions
Throughout her boundary-defying career, HAAi has always embraced and experimented with the latest tech, consistently bending it to her will. This was particularly the case in her earlier work, and her 2022 debut Baby, We’re Ascending was full of the “glitchy, techno-y” sounds she was drawn to at the time. She looks back on it as a real exploration in production, and a demonstration of some of the skills she’d picked up along the way.
Elements of this technical curiosity, and her more recent interest in AI, can be found in HUMANiSE, too. She’s digitized voices, layering them on top of one another, and used AI text-to-speech to generate her voice. Her love for the odd glitch is still apparent. Technology has a presence here; it’s part of the battle, after all.
But the album also demonstrates a significant leap in the evolution of her sound. In developing HUMANiSE, she was compelled to write more. “I felt like maybe I had a little bit more to say, which is why I started singing,” she explains. “And then, once you start writing stuff, you’re thinking about melodies and lyrics rather than how to make something sound as glitchy as possible.” The result is a very human counterpart to the album’s synthetic elements — infusing it with a captivating dose of intimacy and vulnerability.
HAAi’s voice, haunting and aching, isn’t the only one. HUMANiSE also features Alexis Taylor of Hot Chip, Obi Franky, KAM-BU, Kaiden Ford and James Massiah, as well as two choirs, TRANS VOICES with choir leader ILĀ and a gospel choir led by Wendi Rose. In the spirit of what HAAi describes as “true collaboration”, everyone who sang on the album wrote their own lyrics, too.

How it Plays Out Live
In her live performances, HAAi is aware of the additional energy that an audience brings with it. “There’s something about the way that you can [perform live] and feel like you’re really close to people,” she says. “One of the things I really have enjoyed is looking at an audience and seeing how engaged everyone is.”
With the L-Acoustics L-ISA suite of audio tools, HAAi has taken this further. She’s started to experiment with immersive sound in her live shows and, in this way, has brought people even closer, cocooning them in her music. Ironically, this use of technology seems to dismantle the boundaries between the artificial and the human, and instead shows how we can use them together to heighten live performances.

Co-existence
For many of us, AI-generated music inevitably lacks the authenticity and emotion of human-made music. It often feels too perfect, too precise. Every beat, on time. Every volume and pitch setting, just right. And yet, despite ourselves, we still occasionally have an emotional reaction to it. Perhaps it has its place.
For HAAi, we need to learn to co-exist with tech and AI. To observe how they play out when we place them against our very human sounds, imperfections and vulnerabilities, which are more important than ever. For her, there’s magic to be found in that alchemy. It’s how we humanize our lived experiences in a technological world. And how we connect to those around us.