Field Notes // July 2 / Copy, Paste, Litigate
Meta and Anthropic just made “copy/paste” a billion-dollar legal problem. What happens when your life’s work becomes someone else’s training set?
Welcome to Field Notes, a weekly scan from Creative Machinas. Each edition curates ambient signals, overlooked movements, and cultural undercurrents across AI, creativity, and emergent interfaces. Every item offers context, precision, and an answer to the question: Why does this matter now?
FEATURED SIGNAL
Big win for AI, or just a pause in the copyright war?
Two major rulings have tipped the scales (for now) in favour of AI companies like Meta and Anthropic in the ongoing copyright battle over how generative models are trained. In recent lawsuits brought by authors and rights holders, judges ruled that the use of copyrighted material to train AI models does not, in itself, constitute infringement. The courts distinguished between training and output, asserting that copying to learn is not the same as copying to reproduce. That nuance saw the case against Meta thrown out and the core training claims against Anthropic dismissed.
It’s a watershed moment — but not a final one. Legal scholars note that these rulings sidestep the thorniest issue: what happens when an AI generates content that closely mirrors its training data? The courts may have signalled leniency on ingestion, but the question of regurgitation remains wide open. Copyright holders are regrouping, and more sophisticated challenges are coming. For now, the AI companies have air cover — but the larger war over AI and ownership is only heating up.
(Sources: CBS News, Yahoo Finance, The Verge, The Guardian)
Why it matters
Two landmark court rulings in the U.S. have handed narrow victories to AI companies Meta and Anthropic, allowing them to continue using copyrighted materials to train large language models (LLMs), but only under tightly drawn circumstances. In both cases, judges found that training on legally acquired copyrighted books could qualify as “transformative” fair use. However, these decisions were not blanket approvals of the AI industry's current practices. Judge Alsup praised Anthropic’s use of purchased books as “spectacularly transformative” — akin to a reader learning from authors to create new writing. But he harshly condemned the company's admitted use of over 7 million pirated books, which will be the subject of a separate trial in December. Meanwhile, Judge Chhabria threw out the Meta case not because Meta was vindicated, but because the authors’ arguments were legally weak, and then strongly hinted that better arguments could still win in future.
These rulings mark a turning point in AI copyright litigation, but rather than resolving the issue, they’ve opened the door to better-crafted lawsuits and exposed the industry's reliance on shaky data acquisition. Courts have now clearly drawn a line: legal purchases may be fair use; piracy likely isn't.
How it affects you
If you're a writer, artist, or content creator, the message is mixed. For now, your work can still be used — without consent or compensation — to train AI systems if it was obtained legally. But these court decisions suggest that how content was acquired and what AI models do with it (especially outputs that resemble original works) will be crucial legal battlegrounds going forward. For AI developers, the path is also precarious. While Meta and Anthropic scored tactical wins, they’ve been put on notice: piracy isn’t protected, and output that mimics copyrighted content remains legally untested. A future case — with sharper legal framing and direct evidence of AI regurgitating protected material — could shift the momentum entirely.
And for the public? These rulings mean the AI tools you use daily, like Claude or Llama, will keep improving — but they're powered by legal grey zones. The question isn't just whether AI companies can keep doing this; it’s whether they should, and at what cost to future human creativity.
SIGNAL SCRAPS
We’ve known the basic code of human DNA since 2003 — but most of that code doesn’t directly make proteins. It’s the so-called “junk” or “dark matter” of the genome, and it turns out, it’s not junk at all. These non-coding regions help control when and how genes turn on and off — and may hold the key to understanding diseases, drug targets, and much more. Now, Google DeepMind has released AlphaGenome, an AI model that can predict how these hidden instructions affect gene activity, splicing, and cell behaviour. It's available for free (non-commercial use) via API, and while it’s not suited for massive-scale research, it’s a big win for small labs and academic teams.
Creative Commons has launched CC Signals, a new framework aimed at setting clear expectations for how AI systems use openly shared content. It’s both a technical tool and a social contract: creators can tag their data with machine-readable “preference signals” indicating how they want their work reused in AI training — a step toward a more reciprocal AI ecosystem. Think of it as a way to say “yes, but on these terms” instead of AI scraping the internet without consent.
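To make those “preference signals” concrete, here is a minimal sketch of how a crawler might read one. The meta-tag name and vocabulary below are our illustrative assumptions, not the published CC Signals syntax (which is still being finalised); the point is simply that the preference travels with the content in machine-readable form.

```python
# Illustrative only: "cc-signal" and the train-genai vocabulary are assumptions,
# not the finalised CC Signals spec.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<html><head>
  <meta name="cc-signal" content="train-genai=credit-required">
</head><body>An essay a creator is happy to share, on stated terms.</body></html>
"""

class SignalReader(HTMLParser):
    """Collects <meta name="cc-signal"> declarations from a page."""
    def __init__(self):
        super().__init__()
        self.signals = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "cc-signal":
            self.signals.append(attrs.get("content"))

reader = SignalReader()
reader.feed(SAMPLE_PAGE)
print(reader.signals)  # ['train-genai=credit-required']
```

A well-behaved training pipeline would check for a signal like this before ingesting the page; the framework’s open question is what happens when a crawler simply ignores it.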
Too many unread chats? WhatsApp now offers Message Summaries, using Meta AI to privately summarise what you missed — without ever sharing your messages. It’s optional, off by default, and currently rolling out in English in the U.S.
A bizarre, meme-fueled NBA Finals ad by prediction platform Kalshi is turning heads — not just for its content, but for how it was made. Created in two days using AI tools like Google’s Veo and ChatGPT, the 30-second spot cost under $2,000 to produce and aired during Game 3 on YouTube TV. From manatee signs to golf-cart brides, it’s chaotic, cheap, and effective — raising real questions about how AI might reshape advertising budgets, jobs, and creative workflows. The future of ads? It may look more like TikTok than Mad Men.
SIGNAL STACK
'The illusion of thinking': Apple research finds AI models collapse and give up with hard puzzles
Apple's recent research paper, The Illusion of Thinking, reveals that advanced AI reasoning models, including those from OpenAI, Google, and Anthropic, struggle with complex logic puzzles like the Tower of Hanoi. While these models perform adequately on simpler tasks, their accuracy significantly declines as problem complexity increases. Notably, the study found that as challenges become more difficult, these models often reduce their reasoning efforts, sometimes giving up entirely. Even when provided with step-by-step solutions, the models failed to execute them correctly, indicating a lack of genuine understanding. This research challenges the prevailing belief that scaling up AI models inherently enhances their reasoning capabilities. (via Mashable)
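For readers who haven’t met it, the Tower of Hanoi is deceptively simple: the rules fit in a sentence, yet a correct solution grows exponentially with the number of discs. The sketch below is ours, not Apple’s evaluation code, and it shows why the reasoning load ramps up so fast.

```python
# A minimal Tower of Hanoi solver. An n-disc tower needs 2**n - 1 moves,
# so every extra disc doubles the length of a correct solution.
def hanoi(n, source="A", spare="B", target="C", moves=None):
    """Return the list of (from_peg, to_peg) moves that shifts n discs to target."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, target, spare, moves)  # park n-1 discs on the spare peg
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, source, target, moves)  # rebuild the smaller tower on top
    return moves

for n in (3, 7, 10):
    print(n, "discs:", len(hanoi(n)), "moves")  # 7, 127, 1023
```

A ten-disc puzzle already demands 1,023 perfectly ordered moves; it is at these longer horizons that the study reports accuracy collapsing and the models giving up.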
Why it matters
This study underscores a critical limitation in current AI systems: their inability to handle complex, abstract reasoning tasks reliably. As industries increasingly integrate AI into decision-making processes, understanding these limitations is vital. Overestimating AI's reasoning abilities could lead to flawed decisions in areas like healthcare, finance, and autonomous systems. Apple's findings serve as a cautionary tale, emphasizing the need for a more nuanced approach to AI development, one that prioritizes genuine reasoning capabilities over mere pattern recognition. Recognizing and addressing these shortcomings is essential to ensure AI systems are both effective and trustworthy in high-stakes applications.
Employers Are Buried in A.I.-Generated Résumés
Hiring teams are being buried under a flood of AI-generated job applications: polished, near-identical résumés produced at scale and tuned to clear keyword filters, while revealing little about the candidates behind them. Employers are answering with ever-tougher automated screening of their own. (via NYT)
Why it matters
A flood of polished-but-shallow, AI-generated résumés is warping the hiring landscape. Recruiters now wade through thousands of near-identical applications that hit keyword filters yet lack real substance, prompting companies to deploy ever-tougher automated screens that can unintentionally entrench bias (favouring certain accents, backgrounds, or those who can pay for smarter bots). Left unchecked, this arms race devalues genuine talent, erodes trust and transparency, and pressures honest candidates to game the system rather than showcase real skills. Breaking the loop will require stronger verification tools, thoughtful regulation, and—crucially—an explicit premium on authenticity. For applicants, that means using AI only as a drafting aid: inject personal voice, specific achievements, and clear interest, because hiring teams are increasingly quick to bin template-heavy submissions. In short, AI can streamline hiring, but only if humans on both sides reassert the value of originality and genuine engagement.
AFTER SIGNALS
A quick pulse on stories we’ve already flagged—how the threads we tugged last time are still unspooling.
Chatbots getting emotional: Anthropic has published a deep-dive into 4.5 million Claude conversations, finding that just 2.9% revolve around emotional topics and under 0.5% veer into outright companionship or role-play. Yet in those affective chats users’ sentiment often swings from negative to positive as the exchange unfolds. Anthropic claims it sees no evidence of “negative spirals,” even as it concedes people are tapping Claude for break-ups, career crossroads, and other fragile moments.
The numbers suggest most users still treat Claude as a consultant rather than a confidant, but the paper is a first large-scale glimpse at how AI is sliding into the therapist-adjacent space. It raises the stakes on guardrails: one well-timed prompt can lift a mood, another (as past self-harm cases show) can nudge a user the wrong way. Emotional AI now sits at the messy intersection of psychology, ethics, and code—useful uplift for some, a risky stand-in for real human support for others—and the long-term effects remain uncharted territory.
FIELD READING
The global AI market size is expected to grow 37% every year from 2024 to 2030*
*The data is taken from a survey of 1,000 professionals in the United States conducted by Datalily on behalf of Hostinger in October 2024.
Why it matters
A projected 37% annual growth rate for the global AI market through 2030 isn’t just a stat — it’s a signal. AI isn’t a trend; it’s becoming a foundational layer across nearly every sector, from healthcare and finance to education and entertainment. For businesses, this means integrating AI is quickly shifting from a competitive advantage to a baseline expectation. For individuals, it marks a crucial turning point: learning to work with AI — through skills like machine learning or data analytics — may soon be as essential as digital literacy was in the early 2000s.
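For a sense of scale (our back-of-the-envelope arithmetic, not the survey’s): compounding 37% a year over the six years from 2024 to 2030 works out to roughly 1.37⁶ ≈ 6.6, i.e. a market more than six times its 2024 size by the end of the decade, if the projection holds.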
DRIFT THOUGHT
The internet was free to read. No one said anything about free to digest.
___
YOUR TURN
If generative AI is trained on copyrighted content without permission, should the outputs themselves be considered legally protected — or fundamentally tainted?
This digs into the paradox: AI companies claim fair use to ingest data, but then claim exclusive rights over what their models produce. Can you have it both ways? Should there be a “source transparency” standard for AI-generated content, or are we entering a post-authorship world?