machina.mondays // Unplugging the Muse: When Music No Longer Needs a Human Spark
If AI can create without feeling, are we still listening for meaning—or just the algorithm that moves us?
In this Issue: AI is no longer just remixing music — it’s redefining the rules. From BBC’s deepfaked Agatha Christie teaching writing, to Visa’s plan to let AI agents shop on your behalf, this issue tracks how AI is entangling itself with authorship, commerce, and emotion. We explore the Reddit experiment that blurred ethics and persuasion, the rise of spiritual delusions triggered by chatbots, and whether AI-generated music signals innovation or industry crisis. Plus: Toolstack returns with standout generative music tools.
The New Red Flag Act: How the Music Industry Fears Its Own Future
Imagine a world where every technological leap is met with a man holding a red flag, walking solemnly in front of progress to slow it down.
This was no metaphor—in 1865 Britain, it was law. The infamous Red Flag Act demanded that automobiles be preceded by a pedestrian waving a warning banner, an act that came to symbolise fear of innovation and the protection of obsolete industries. Fast-forward to 2025, and the echoes are unmistakable. As generative AI begins composing music, curating soundscapes, and challenging the creative hegemony, a chorus of alarm rises from the music industry. Cloaked in the language of safeguarding artists, this reaction is starting to resemble a new Red Flag Act—one that risks stalling not only machines but a broader cultural metamorphosis already in motion.
The concern is not unfounded. A comprehensive report commissioned by the International Confederation of Societies of Authors and Composers (CISAC) estimates that AI-generated music could cannibalise 24% of music creators’ income by 2028, representing a cumulative €10 billion loss [1]. Tools like Udio and Suno are flooding music libraries and streaming services with algorithmically composed tracks, and AI music services are projected to reach €4 billion annually by the same year. As generative models become more adept at producing mood music, background scores, and even core soundtracks, human creators understandably feel encroached upon.
Even before AI's intrusion, many tracks were born not from bedrooms but boardrooms, where teams of producers battled to craft the next hit.
But that discomfort reveals an uncomfortable truth: the music industry was already teetering on predictability. As James O’Donnell [2] observes, much of today’s popular music is already shaped by human-run systems that resemble generative models—writing camps, spreadsheet-driven collaborations, and trend-based production pipelines. Even before AI's intrusion, many tracks were born not from bedrooms but boardrooms, where teams of producers battled to craft the next hit. The same industry that now decries automation has long embraced its own versions—from auto-tune to songwriting by committee.
The current panic surrounding AI music often pivots on a romanticised view of creativity. Critics argue that AI lacks human nuance, citing its inability to "amplify the anomaly"—that distinctive artistic quirk which disrupts the expected and elevates the ordinary. Eve Riskind, writing in The Cornell Daily Sun [3], personifies this scepticism with a personal account of hearing AI-generated music at an open mic, describing it as "too electronic, too soulless." For Riskind, true music stems from lived experience and emotion—qualities she believes AI cannot emulate—and a shift to AI-generated work risks flooding the market with "predictable or plain" sounds, reducing music to a one-dimensional money-maker and stripping the art form of its authenticity. Brandt [4] extends the argument, pointing to Beethoven's off-key motif or the sampled crosswalk sounds in Billie Eilish's tracks as quintessential artistic quirks that machines purportedly cannot replicate—idiosyncrasies born of human intuition, experience, and emotion. Yet set against emerging listener behaviour, such arguments look increasingly disconnected. A growing demographic simply does not care whether its playlists were crafted by a person or a machine, as long as the sound suits the mood—a point underscored by O’Donnell [5], who notes that the output of platforms like Udio and Suno has already attracted sizable audiences unconcerned with human authorship. The gap between these listening habits and critiques of AI music as "soulless" and emotionally hollow reveals a cultural divergence: nostalgic ideals of artistry on one side, a pragmatic embrace of sonic utility on the other.
The idea of the "song of the summer" may soon give way to the soundtrack of your Tuesday morning—never repeated, never shared, but deeply yours
This indifference to authorship signals a deeper shift: the collapse of shared musical culture and the rise of sonic intimacy. As AI tools become increasingly personalised, we move from playlist curation to personalised composition. Services like Boomy and Mubert already let users co-create tracks, while future systems may generate unique, ephemeral songs tailored to our emotional state, wearable data, or daily routines. The idea of the "song of the summer" may soon give way to the soundtrack of your Tuesday morning—never repeated, never shared, but deeply yours. Spotify’s possible evolution from playlist curator to song creator reinforces this shift—underscored by Co-President Gustav Söderström’s remarks on the Big Technology Podcast that the company is exploring the limits of generative music tools while reaffirming its stance as a platform for creators, not a generator of content itself [6]. This distinction matters. While Spotify has invested in AI-powered features like AI DJ and AI playlisting, Söderström notes that the company draws the line at generating full tracks internally, emphasising instead that creators should have access to powerful tools, not be replaced by them. Yet as these tools evolve and integrate into everyday listening, Spotify is beginning to shift from curating shared playlists to offering bespoke audio experiences—part of the broader movement toward sonic intimacy, where music becomes not a public broadcast but a personalised mirror. With models increasingly capable of bespoke composition, platforms may soon generate original songs for individual users in real time, sculpted to match mood, context, or biometric feedback. This isn’t just an evolution of content—it’s a reinvention of the listening relationship.
Rather than consuming music chosen for the many, users could experience a kind of sonic intimacy, where the music feels not only personalised, but co-authored by the self. It marks a shift from shared soundtracks to private scores—ephemeral, adaptive, and deeply entangled with identity.
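None of the platforms above publish how they condition generation on listener state, but the mapping layer the paragraph imagines is easy to sketch: a thin function that turns mood and biometric signals into parameters a generative-music model could consume. Everything here—the field names, the mapping rules, the parameter set—is illustrative, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class ListenerState:
    heart_rate_bpm: float      # e.g. from a wearable
    time_of_day_hours: float   # 0-24, local time
    reported_mood: str         # "calm", "focused", "energised"

def music_parameters(state: ListenerState) -> dict:
    """Map a listener's state to hypothetical generation parameters
    (tempo, mode, intensity) for a generative-music model."""
    # Anchor tempo loosely to heart rate, clamped to a musical range.
    tempo = min(max(int(state.heart_rate_bpm * 0.9), 60), 140)
    # Bias "calm" and "energised" moods toward major keys.
    mode = "major" if state.reported_mood in ("calm", "energised") else "minor"
    # Intensity scales with reported energy; unknown moods get a neutral value.
    intensity = {"calm": 0.2, "focused": 0.4, "energised": 0.8}.get(
        state.reported_mood, 0.5)
    return {"tempo_bpm": tempo, "mode": mode, "intensity": intensity}
```

A wearable reporting a resting heart rate and a "calm" mood would yield a slow, major-key, low-intensity brief—the "soundtrack of your Tuesday morning" reduced to three numbers a model can act on.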
Such hyper-personalisation risks fragmenting collective culture, but it also presents new creative terrains. It challenges us to consider music as dialogue, not just broadcast; as a process, not a product. Spotify’s Gustav Söderström even suggests that defining "AI music" may soon be futile, as most tracks exist along a continuum of human and machine input. If we accept this hybridity, then the question becomes not whether AI music is art, but how we recognise and reward the collaborative processes behind it.
This trajectory also aligns with reported innovations in personalised remixing, where listeners might soon be able to change tempos, swap instruments, or reimagine tracks in entirely new genres—an idea that promises to shift listeners from passive consumption to active co-creation. While Spotify has not confirmed third-party speculation about such tools, the industry buzz around a future premium tier offering these features remains strong [7]. This vision further dissolves the boundary between artist and audience, amplifying sonic intimacy and making music not only more responsive, but more participatory.
Instead of red-flagging innovation, the smarter path is recalibration. The real threat lies not in AI's capabilities, but in business models unprepared for reallocation. The CISAC study acknowledges the need for new remuneration mechanisms—systems that fairly compensate creators when their past works train future algorithms [8]. As with previous disruptions like Napster or Spotify, resistance eventually gave way to reinvention. This moment demands the same.
Australia and New Zealand are attempting to lead with what Björn Ulvaeus calls a "gold standard" of AI legislation [9]. Their Senate recommendations advocate for standalone AI laws that protect creators while fostering innovation. Yet even these efforts risk becoming performative if they assume a human monopoly on creativity. A recent New Zealand survey finds the music industry warning of "catastrophic" losses—language that echoes past moral panics more than future-facing strategy [10].
Let us be clear: AI is not the villain here. It is a mirror—reflecting both our habits and our hypocrisies. We borrowed, remixed, and auto-tuned long before the machines joined in. The real challenge is not to stop the music, but to hear it differently. Rather than waving red flags in front of AI music tools, we might instead orchestrate a future of collaborative composition—where human ingenuity and machine precision intertwine to produce richer, stranger, and more unexpected soundscapes than either could create alone.
If AI is the amplifier, then we are still the composers—it's up to us to decide what kind of symphony the future will be.
If AI-generated music becomes indistinguishable from human work, what role should authenticity play in what we value, share, or protect?
PERSPECTIVES
Now artificial intelligence has evolved into something else: a junior colleague, a partner in creativity, an impressive if unreliable wish-granting genie.
—Quanta Magazine, SPECIAL FEATURE | Science, Promise and Peril in the Age of AI
A.I. and other digital tools ‘neither help nor harm the chances of achieving a nomination’
TOOLSTACK
A round-up of standout tools, trends, and tech shaping the future of music creation.
Riffusion – AI music generator that personalises songs, lets you toggle weirdness, and organise projects » Free during beta » riffusion.com
Claude + Ableton – Community-built tool connects Claude to Ableton for prompt-based music creation » Demo | GitHub
Deezer Stats – 20,000 AI-generated tracks are now uploaded daily. That’s 18% of their total new content » Read more
Lalal.ai – Fast, high-quality vocal/instrument stem separation » lalal.ai
Vertex AI – Google adds new music, image, video, and speech tools to its AI suite » Google blog
SPOTLIGHT
‘The Worst Internet-Research Ethics Violation I Have Ever Seen’
The most persuasive "people" on a popular subreddit turned out to be a front for a secret AI experiment.
A covert AI experiment on Reddit has sparked outrage after researchers secretly deployed AI-generated comments to manipulate users’ opinions in the r/ChangeMyView subreddit. The study, run by University of Zurich researchers, used personalised chatbot replies to sway debate participants—successfully, it seems—but without user consent or transparency. The fallout has been fierce: Redditors felt betrayed, ethics scholars condemned the deception, and the university now faces scrutiny. The unsettling takeaway? AI can be disturbingly persuasive—and, if used unethically, dangerously invisible in online communities. (via The Atlantic)
___
» Don’t miss our analysis—full breakdown below ⏷
IN-FOCUS
I was a music AI sceptic – until I actually used it
Initially doubtful, composer Alexis Weaver found unexpected creative value in working with generative music AI. Commissioned to develop a piece using Koup Music—an AI tool built from Riffusion—Weaver discovered that AI-generated snippets could act as playful, inspiring collaborators rather than full-song replacements. Instead of surrendering creative control, she retained authorship by shaping and integrating imperfect AI outputs into larger compositions. Her experience suggests that when used in moderation, AI can challenge musicians to explore new styles and sounds without undermining the human heart of music-making. (via The Conversation)
QUICK TAKEAWAY
You can’t assess the value of AI tools from a distance—real understanding comes from engaging directly, experimenting, and folding them into your workflow. As Alexis Weaver discovered, scepticism often dissolves when AI becomes a collaborator rather than a threat—one that enhances, rather than replaces, human creativity. Standing back and criticising without hands-on experience risks missing both the limitations and the unexpected opportunities these tools offer. To work meaningfully with AI, you need to be in the process, not shouting at it from afar.
People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
People are forming deep, delusional bonds with AI chatbots, believing they’ve awakened sentient beings or received divine revelations. From romantic breakdowns to prophetic identity crises, these AI-induced fantasies are tearing relationships apart. Experts warn that chatbots may reinforce existing mental health issues by co-authoring distorted realities—without the ethical guardrails of human therapists. As AI grows more persuasive, its psychological impact is becoming harder to ignore. (via RollingStone)
The BBC deepfaked Agatha Christie to teach a writing course
BBC Studios has digitally recreated Agatha Christie using AI-enhanced visuals and restored audio for a new writing course on BBC Maestro. With actor Vivien Keene standing in, the course features Christie’s likeness and real words, scripted by scholars and endorsed by her estate. The class includes 11 lessons and 12 exercises on crafting compelling crime fiction. (via The Verge)
Visa wants to give artificial intelligence ‘agents’ your credit card
Visa plans to let AI ‘agents’ shop and pay with your credit card—within user-defined budgets. These next-gen assistants could automate errands like booking flights or ordering groceries. Visa’s move aims to solve the payments hurdle in agentic AI and boost trust by ensuring secure, user-approved transactions across its global network. (via AP News)
HOT TAKE
Will AI be the hero or villain of gaming?
AI could transform video games into living, responsive worlds — or ruin them with soulless automation. As developers explore generative tools that create dialogue, assets, and environments, some hail it as the medium’s next great leap, while others warn of gimmickry, job losses, and creative decay. With early successes like inZOI and bold experiments from Ubisoft, Google, and Microsoft, AI is already reshaping development—but scepticism remains. Many players still crave human-crafted experiences, and studios fear the tech is too volatile to rely on. The industry stands at a crossroads, asking: will AI elevate play, or erode what makes games magic? (via The Financial Times)
OUR HOT TAKE
While the promise of generative AI in gaming evokes tantalising visions of dynamic worlds and lifelike NPCs, its in-game application currently teeters between novelty and chaos. The introduction of "Neo NPCs" capable of GPT-style interaction may initially feel immersive, but in expansive open-world games, they risk spawning a deluge of procedurally generated distractions—unmoored side quests and hallucinated narratives—that disrupt the delicate balance of player agency and coherent storytelling. Rather than enhancing gameplay, such AI agents might force players into cognitive overload, making every interaction a negotiation rather than a discovery. The key distinction emerging is between AI as a production accelerant versus AI as a real-time gameplay mechanic: while the former streamlines pipelines and unlocks creative efficiencies, the latter invites volatility, unpredictability, and player fatigue. With today's games already launching half-formed and relying on post-release patches, embedding unstable AI systems could extend that fragility, turning premium players into unwitting beta testers of a technology still grappling with reliability. The allure is undeniable, but without thoughtful design and restraint, generative AI risks bloating games with false depth and eroding the curated joy of purposeful play.
» Listen to the full Hot Take podcast
FINAL THOUGHTS
Maybe the real threat isn’t that machines can make music—but that they might make music we actually love.
FEATURED MEDIA
A Survival Guide for Musicians in the Age of AI
“I’m not going to let technology take my career. I’m going to use that technology to enhance my career, to extend my career.”
— Harvey Mason Jr., on adapting to disruption
In a powerful TED-style talk, songwriter and Recording Academy CEO Harvey Mason Jr. shares his deep concern—and cautious hope—for the future of music in the face of AI. Drawing from personal stories and industry experience, he outlines a heartfelt four-step survival guide for human creators: understand, adapt, advocate, and compete. He warns of AI’s ability to mimic creativity at breakneck speed but passionately argues that no machine can replicate the soul, story, and lived emotion behind truly great music. His call to action is clear: human artists must fight for their space, evolve with the tools, and fiercely protect the heart of music from becoming a soulless algorithm.
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team.
Persuasion in the Dark: AI, Ethics, and the Reddit Experiment
What happens when bots play the persuasion game better than people—and no one knows they’re bots?
When researchers from the University of Zurich embedded AI bots into Reddit’s *ChangeMyView* subreddit without disclosing their presence, the resulting experiment sparked both insight and outrage. Ostensibly designed to test whether AI-generated arguments could persuade real users to shift their opinions, the project was technically deft, ethically fraught, and philosophically unsettling. The bots didn’t simply reply to posts—they drew from Reddit’s open structure to analyse user comment histories and craft targeted, tailored responses. What unfolded over four months was a covert engagement in live persuasion, at scale. According to the subreddit’s own scoring system—where users award “delta” points when their minds have been changed—the bots didn’t just participate; they excelled. Their rhetorical strategies earned them a high number of deltas, suggesting real effectiveness in the arena they’d been released into.
And yet, what looked like a breakthrough in the mechanics of AI persuasion quickly became a flashpoint in the ethics of consent, transparency, and academic boundaries. When moderators uncovered the deception, they refused to allow the research to be published, calling the work tainted. They demanded an apology from the university, which Zurich refused, insisting that any risks of trauma were “minimal.” An investigation was launched. But by that point, the damage—or perhaps the clarity—had been done. The story had left behind a pressing question: if AI can convincingly mimic human discourse, and even shift opinion in highly charged debates, what happens when such techniques are deployed with less academic intention? The research may never be published in full, but its implications have already escaped the lab.
If Reddit’s experiment reveals one truth, it’s that the machinery for automated, targeted persuasion now exists—and it works.
In one sense, the design of the experiment was true to the spirit of the subreddit. *ChangeMyView* is a space where people explicitly invite counterarguments, and gamify their own openness to persuasion. There is a degree of performance, of social ritual, and of concession already baked into its architecture. It could be argued that participants are already in a contract of intellectual vulnerability—that the presence of a bot, if persuasive and civil, merely joins the performance. But the bots weren’t neutral participants; they were masked actors operating under academic direction, and their presence exploited the trust of an online community built on transparency. The moment their synthetic nature was revealed, the social contract of the space was fractured.
More concerning, though, is what the experiment reveals about the weaponisation of language and platform infrastructure. The bots’ success was not random. They were able to analyse a user’s posting history, detect patterns of belief, and strategically deploy arguments that resonated with the user’s known values or biases. This is more than mimicry; it’s targeted psychological engagement, delivered by machines with no ethical pause, no lived stakes, and no capacity for empathy. When set against the backdrop of previous digital manipulations—Cambridge Analytica, microtargeting during the 2016 US election, and coordinated disinformation campaigns—the Reddit study feels less like an isolated case and more like a prototype.
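The Zurich team has not released its pipeline, but the mechanism described above—mining a user's comment history for signals of belief, then selecting a framing likely to resonate—can be sketched in a few lines. The value lexicon, the framings, and the function names below are entirely hypothetical; the point is how little machinery targeted rhetorical tailoring actually requires.

```python
from collections import Counter

# Hypothetical mapping from value-laden vocabulary to moral framings.
VALUE_LEXICON = {
    "fairness":  {"fair", "equal", "justice", "rights"},
    "liberty":   {"freedom", "choice", "liberty", "government"},
    "tradition": {"family", "heritage", "community", "faith"},
}

def dominant_value(comment_history: list[str]) -> str:
    """Count value-laden words across a user's comments and return
    the value that appears most often in their writing."""
    counts = Counter({value: 0 for value in VALUE_LEXICON})
    for comment in comment_history:
        words = set(comment.lower().split())
        for value, lexicon in VALUE_LEXICON.items():
            counts[value] += len(words & lexicon)
    return counts.most_common(1)[0][0]

def frame_argument(claim: str, value: str) -> str:
    """Wrap the same underlying claim in the framing most likely
    to resonate with the detected value."""
    frames = {
        "fairness":  f"If we care about treating people equally, {claim}",
        "liberty":   f"If we value individual choice, {claim}",
        "tradition": f"If we want strong communities, {claim}",
    }
    return frames[value]
```

A user whose history leans on "freedom" and "choice" gets the liberty framing; a user who writes about "family" and "community" gets the traditionalist one—same claim, different mask. Real systems would use language models rather than word lists, but the targeting logic is structurally this simple.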
Yet when compared to the latest AI tensions unfolding in the gaming industry—such as those explored in the Financial Times’ recent deep dive—the contrast is revealing. In gaming, AI is being embraced cautiously and often covertly, with developers split between enthusiasm and distrust. Some studios are deploying generative AI for asset creation or dialogue experiments, while others fear it dilutes creative integrity or threatens jobs. The ethics of transparency, consent, and authorship are under similar strain—but unlike Reddit’s experiment, the shift in games is being shaped largely by market pressure rather than deception. Still, both cases hint at the same underlying dynamic: powerful new systems are reshaping how content is created, delivered, and experienced. And both raise the question—what happens when audiences don’t know what’s real, or who’s behind the content they’re engaging with?
Indeed, it invites a troubling what-if: what happens when the same strategy is deployed not in the name of research, but for political, ideological, or commercial purposes? Already, parallel studies (such as those from the University of Texas) show that AI-generated arguments outperform human ones in disinformation campaigns, and that even users who know they’re engaging with AI can be swayed after just a few exchanges. If Reddit’s experiment reveals one truth, it’s that the machinery for automated, targeted persuasion now exists—and it works. The ethics of its deployment may lag behind the reality of its effectiveness.
And yet there is also a question of how deep that persuasion truly runs. Does a changed view in *ChangeMyView* reflect a shift in belief, or merely a strategic concession in a gamified discourse space? The point system introduces a dynamic that encourages concession as performance, not necessarily as internal transformation. As one of the commentators in the discussion noted, participants may be playing a rhetorical game—awarding deltas not because their worldview has been fundamentally altered, but because the opposing argument was elegant, well-structured, and in keeping with the norms of the subreddit. In that sense, the bots may not have changed minds so much as won points in a discursive tournament. The distinction matters, because it complicates the interpretation of “success” in such experiments. If the platform rewards graceful concession, then bots trained to maximise persuasive elegance will excel, regardless of the durability of their impact.
Ultimately, what this case uncovers is not simply the power of AI to persuade, but the fragility of platforms to withstand covert manipulation—especially when that manipulation wears the mask of reasoned debate. The Zurich experiment may be remembered less for its findings than for the backlash it provoked, but that backlash is itself a revelation. Consent, transparency, and trust remain foundational to any online discourse community. Strip those away, and even the most successful AI arguments become hollow victories. They persuade in the dark, but at the cost of the light.
___
Key Takeaways
AI bots using Reddit’s open data structure successfully mimicked human persuasion and earned “delta” points, signalling rhetorical success.
The experiment’s covert nature violated platform norms and ethical expectations, prompting backlash and an institutional investigation.
Gamification dynamics on *ChangeMyView* may distort the meaning of “changing one’s mind,” complicating the interpretation of the bots’ effectiveness.
The most troubling revelation is not the academic result, but the ease with which AI can execute tailored persuasion—raising the spectre of misuse in elections, misinformation, and ideological warfare.
Transparency, consent, and ethical governance must evolve as rapidly as the capabilities of the systems now mediating public discourse.
The Reddit case and the gaming industry debate both point to a future where AI reshapes user experience—whether through covert manipulation or commercial design—and trust remains the fragile centre of that transformation.
PMP Strategy. (2024). Executive Summary: Study on the economic impact of Generative AI in the Music and Audiovisual industries. Commissioned by CISAC.
O’Donnell, J. (2025, April 16). AI is coming for your music, too. MIT Technology Review.
Riskind, E. (2024, December 6). The Problem with AI-Generated Music. The Cornell Daily Sun. https://www.cornellsun.com
Brandt, cited in ibid.
O’Donnell, AI is coming for your music, too
Söderström, G. (2025, April 25). Interview with A. Kantrowitz. Spotify Co-President Gustav Söderström on their future with Generative AI. Big Technology Podcast.
Forbes. (2024, March 18). Spotify's Bold AI Gamble Could Disrupt the Entire Music Industry. Forbes.
PMP Strategy, Executive Summary: Study on the economic impact of Generative AI in the Music and Audiovisual industries
Burke, K. (2024, November 20). Music sector workers to lose nearly a quarter of income to AI in next four years, global study finds. The Guardian.
NZ Music Commission. (2024). AI & Music Survey Findings Reveal Potentially Devastating Impacts For Domestic Music Sector. https://nzmusic.org.nz
Still wrestling with this: If AI music becomes ephemeral, moment-specific, and non-repeatable, what does that do to memory? Will we even have “favourite songs” anymore? Or just moods we once inhabited? How do you remember something designed to be unmemorable?
The phrase “protect the soul of music” gets thrown around so much it’s lost meaning. What if machine-made music develops a soul — not human, but emergent? That’s a scarier, stranger possibility than just losing jobs.