Field Notes // July 9 / Smells Like Machine Spirit
A popular new band on Spotify, The Velvet Sundown, isn't just a prank but a warning shot. AI can now pass as a band, rack up plays, and even fool the music media for a time. This is an important test of whether audiences care how their music is made.
Welcome to Field Notes, a weekly scan from Creative Machinas. Each edition curates ambient signals, overlooked movements, and cultural undercurrents across AI, creativity, and emergent interfaces. Every item offers context, precision, and an answer to the question: Why does this matter now?
FEATURED SIGNAL
Should we stop stressing about whether something is AI-created?
A psych-rock band called The Velvet Sundown quickly racked up half a million Spotify streams and a growing fanbase before the truth came out: it wasn’t a band at all. The music was generated entirely with AI tools like Suno, with human guidance but no human performers. Weeks after some fans first grew suspicious, the project’s creators admitted it was a deliberate hoax by a group of artists and technologists aiming to provoke a reaction, and it did.
The music sounded familiar; one track could have passed for Dire Straits. They released two albums, Floating On Echoes and Dust And Silence, along with a bio that read: “The Velvet Sundown aren’t trying to revive the past. They’re rewriting it. They sound like the memory of a time that never actually happened… but somehow they make it feel real.”
Their Instagram features obviously AI-generated photos of the supposed band.
On YouTube, Rick Beato did a good job of dissecting the music to test whether it was AI.
Why it matters
Listeners believed The Velvet Sundown was a real band, and that trust was broken when it turned out to be an AI experiment. As AI-generated content proliferates, this raises ethical questions about disclosure: should platforms like Spotify clearly label AI-made music? Many fans complained they felt duped, not because the music was bad (many people love it) but because the context was false. The band’s success shows how convincingly AI can mimic human creativity. Without the reveal, no one could definitively confirm the music wasn’t made by real musicians. This blurs the line between human artistry and machine mimicry, making it harder for audiences to distinguish genuine expression from synthetic output.
How it affects you
This is a crucial debate about whether we should fret over what is real and what is AI-generated when we experience content. Music is an important test case because, until now, most AI-created music has been either obviously synthetic or simply not very good. Not so much with this group. AI bands like this can flood streaming services with polished music at low cost, posing economic and cultural threats to human musicians. If a fictional AI band can go viral without touring, performing, or even existing, where does that leave struggling real-life artists? If music becomes more about output and less about origin, do we risk losing the emotional and cultural depth that comes from authentic human experience? Does this also risk turning music into just another algorithmic product optimised for engagement, not meaning? Or is this just a good listen, so we should just chill and enjoy?
» Read our earlier discussion on AI music:
SIGNAL SCRAPS
Cloudflare, the US-based cloud infrastructure provider that serves 20% of the web, notes that publishers, content creators, and website owners currently feel they face a binary choice: leave the front door wide open for AI to consume everything they create, or build their own walled garden. On its company blog, it said it found a common desire among sites to allow AI crawlers to access their content, provided they get compensated. So it has created a tool, presently in private beta, that responds to each AI crawler request with a demand for payment.
Pay per crawl grants domain owners full control over their monetisation strategy. They can define a flat, per-request price across their entire site. Publishers will then have three distinct options for a crawler:
Allow: Grant the crawler free access to content.
Charge: Require payment at the configured, domain-wide price.
Block: Deny access entirely, with no option to pay.
Major publishers like The Associated Press and TIME have partnered with Cloudflare to block AI crawlers. This could introduce a new option of “micropayments” for the web.
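The simplest way to picture the mechanic: the site’s edge checks who is asking and, for AI crawlers, answers with HTTP 402 Payment Required instead of the content. Below is a minimal, illustrative sketch of that flow in Python/Flask. The crawler list, the price, and the `crawler-price` header name are our assumptions for illustration, not Cloudflare’s actual implementation, which runs on its own edge network.

```python
# Toy sketch of the "pay per crawl" idea: known AI crawlers get an
# HTTP 402 Payment Required response instead of the page content.
from flask import Flask, Response, request

app = Flask(__name__)

AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot"}  # illustrative AI user agents
BLOCKED = {"BadBot"}                            # "Block": denied, no option to pay
PRICE_PER_REQUEST = "0.01"                      # flat, domain-wide price (USD)

@app.route("/<path:page>")
def serve(page):
    agent = request.headers.get("User-Agent", "")
    if any(bot in agent for bot in BLOCKED):
        # "Block": deny access entirely.
        return Response("Access denied.", status=403)
    if any(bot in agent for bot in AI_CRAWLERS):
        # "Charge": answer with 402 and the configured, domain-wide price.
        resp = Response("Payment required to crawl this content.", status=402)
        resp.headers["crawler-price"] = PRICE_PER_REQUEST  # assumed header name
        return resp
    # "Allow" and ordinary visitors: serve the page as usual.
    return f"<html><body>Content of {page}</body></html>"
```

In Cloudflare’s version this logic runs at its edge before a request ever reaches the origin site, so individual publishers never have to build the billing machinery themselves.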
SIGNAL STACK
This AI ‘thinks’ like a human — after training on 160 psychology studies
Researchers have built an AI model, called Centaur, that accurately predicts human decision-making across a wide range of scenarios, outperforming longstanding psychological theories. The team fine-tuned a large language model (LLM) on data from 160 psychology experiments in which 60,000 people made more than 10 million choices across many tasks, and it thinks the system could one day become a valuable tool in cognitive science. Scientists have long struggled to simulate broad aspects of human behaviour with task-specific models because those tools cannot generalise across a multitude of tasks. (via Nature)
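For a sense of the mechanics, here is a heavily simplified sketch of the general recipe: turn experiment trials into natural-language transcripts that end with the participant’s choice, then fine-tune a pretrained LLM on them with lightweight LoRA adapters. The model (`gpt2`), the two toy trials, and all hyperparameters below are illustrative assumptions; the actual study fine-tuned a far larger model on the full 160-experiment dataset.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Attach small LoRA adapters so only a tiny fraction of weights are trained.
model = get_peft_model(
    model, LoraConfig(r=8, target_modules=["c_attn"], task_type="CAUSAL_LM"))

# Each experiment trial becomes a transcript ending in the human's choice.
trials = [
    "Option A pays 10 points with p=0.5; option B pays 4 for sure. Choice: B",
    "Option A pays 100 points with p=0.1; option B pays 9 for sure. Choice: B",
]

class TrialDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(trials)
    def __getitem__(self, i):
        enc = tokenizer(trials[i], truncation=True, padding="max_length",
                        max_length=64, return_tensors="pt")
        ids = enc["input_ids"].squeeze(0)
        mask = enc["attention_mask"].squeeze(0)
        # Standard causal-LM objective: predict every next token, including
        # the final choice token; padding positions are excluded from the loss.
        labels = ids.clone()
        labels[mask == 0] = -100
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

Trainer(
    model=model,
    args=TrainingArguments(output_dir="centaur-sketch", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=TrialDataset(),
).train()
```

After training, the fine-tuned model is queried with a new trial transcript and its predicted next token is read off as the simulated human choice.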
Other scientists are sceptical about the results of the research. They argue the model doesn’t meaningfully mimic human cognitive processes, and that it can’t be trusted to produce results that would match human behaviour. (Science.org)
Why it matters
Can this new AI really mimic how we think? Does it mark a new, data-driven edge over classic psychology? How could this change fields like public policy, therapy, or consumer engagement, and, as we often discuss, is this why we need more ethical guardrails around AI? This tool offers a data-driven alternative to traditional theories, potentially reshaping how we understand human choices. From policy design to mental health and marketing, anticipating human decisions with this AI could lead to smarter, personalised interventions. As AI increasingly influences decisions, ensuring these models respect human values, privacy, and fairness becomes crucial.
» Read the research paper published in Nature: “A foundation model to predict and capture human cognition.”
AFTER SIGNALS
A quick pulse on stories we’ve already flagged—how the threads we tugged last time are still unspooling.
Chinese students are using AI to beat AI detectors: Chinese universities are using AI-detection tools to screen student theses, sparking panic during graduation season. Some students report having to “dumb down” their writing style just to pass AI checks, reducing expressiveness out of fear of being flagged.
A new and rapidly expanding industry has sprung up in response: software and services explicitly designed to circumvent AI detectors. Both detectors and bypass tools are now profiting from this escalating “cat-and-mouse” dynamic. The rise of “anti-detection” tools highlights how reactive approaches to AI governance can quickly spawn new, potentially more damaging problems. Though focused on China, this scenario mirrors a worldwide dilemma: how to responsibly integrate AI in education, balancing integrity, innovation, and fairness. (via Rest of World)
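Why would “dumbing down” a thesis help it pass? Many detectors lean on statistical predictability: text a language model finds easy to predict (low perplexity) looks machine-written, while surprising, idiosyncratic phrasing looks human. Here is a minimal sketch of that scoring heuristic using GPT-2 as the reference model; the threshold and the choice of GPT-2 are our illustrative assumptions, not how any particular commercial detector works.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Reference language model used to score how "predictable" a text is.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = more predictable = more 'AI-like' to this heuristic."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

THRESHOLD = 40.0  # purely illustrative cut-off

def looks_ai_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```

The perverse incentive follows directly: plain, fluent writing lowers perplexity and risks being flagged, while odd word choices raise it, which is exactly the signal the bypass tools manipulate.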
» Read our earlier discussion on AI and education:
Four individuals who used AI to infringe on the copyrights of online artworks have been sentenced to up to 18 months in prison, along with fines, according to a recent Beijing court ruling.
The case began when one of the affected illustrators discovered her artwork being sold online without permission and alerted police, who later transferred 56 CDs and three external hard drives containing electronic data to prosecutors, saying that if the data were printed as text, it would be equivalent to millions of online novels.
Prosecutors cited Chinese criminal law, which states that reproducing and distributing another person's work without permission for profit, when the gains are substantial or other serious circumstances exist, can result in imprisonment of up to three years and a fine, or a fine alone.
Prosecutors said they plan to issue recommendations to e-commerce platforms to strengthen oversight of AI-generated content and help ensure the healthy development of the technology within the framework of the law. (via China Daily)
FIELD READING
Half of US managers are using AI to determine who gets promoted and who gets fired.
According to a new Resume Builder survey of 1,342 U.S. managers with direct reports, a majority of those using AI at work are relying on it to make high-stakes personnel decisions, including who gets promoted, who gets a raise, and who gets fired.
Key findings include:
6 in 10 managers rely on AI to make decisions about their direct reports
A majority of these managers use AI to determine raises (78%), promotions (77%), layoffs (66%), and even terminations (64%)
More than 1 in 5 frequently let AI make final decisions without human input
Two-thirds of managers using AI to manage employees haven’t received any formal AI training
Nearly half of managers were tasked with assessing if AI can replace their reports.
DRIFT THOUGHT
We’re upset the band wasn’t real—but the feelings were. So what’s the fake part, exactly?
___
YOUR TURN
Would this story bother you if the music had been terrible instead of good?
We often react differently to deception depending on the quality of the outcome.