Field Notes // June 25 // Panic Now, Peer Review Later
Dubious neuroscience meets cultural anxiety in this viral hit-piece on AI-assisted writing.
Welcome to Field Notes, a weekly scan from Creative Machinas. Each edition curates ambient signals, overlooked movements, and cultural undercurrents across AI, creativity, and emergent interfaces. Every item offers context, precision, and an answer to the question: Why does this matter now?
FEATURED SIGNAL
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
A new study, “Your Brain on ChatGPT,” claims that relying on AI tools like ChatGPT for essay writing leads to “cognitive debt” — a decline in brain engagement, memory recall, and neural connectivity. Through EEG scans, NLP analysis, and interviews, the authors argue that users who relied on LLMs became less cognitively active and retained less ownership over their writing. The media ran with it — CNN, NYT, Salon. But behind the flashy conclusion is a study with serious flaws: a tiny participant pool, weak experimental controls, and a misunderstanding of how AI changes the cognitive task of writing. The idea is provocative — but the evidence doesn’t hold. The result is less a scientific insight and more a cautionary tale about early AI anxieties disguised as research.
(Sources: ResearchGate, BrainOnLLM, Psychology Today)
Why it matters
This study is a textbook example of a research paper gaining attention not because of its robustness, but because it affirms cultural fears about AI. It uses only 18 participants to make its key claim — far too small a group to draw any meaningful neurological or behavioural conclusions. Despite this, the study confidently links low EEG activity to cognitive laziness, ignoring the fact that EEG is a limited, surface-level measure of brain activity that cannot meaningfully capture complex mental states like attention, critical thinking, or idea ownership.

More importantly, the authors completely overlook the fact that using AI transforms the task of writing — from generating ideas from scratch to evaluating and refining generated content. That's a cognitive shift, not a decline. The framing here is misleading: it pathologises new forms of thinking simply because they look different from traditional ones. Instead of exploring what new kinds of mental labour AI invites — such as synthesis, modulation, or critique — the study leans into a reductive moral panic. It even presents LLM-driven writing homogeneity as inherently negative, without asking whether AI might level the playing field for struggling or marginalised writers.

And though the authors admit the study isn't peer-reviewed and is highly limited in scope, their language suggests broad, worrying conclusions that have already been picked up by major news outlets. Bottom line: this study is being treated as evidence of AI's harms in learning environments, when in fact it's an underpowered, oversimplified look at a complex cognitive shift.
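For a sense of just how thin 18 participants is, here is a minimal sketch of a standard power calculation using Python's statsmodels. The effect size, alpha, and power targets are conventional illustrative assumptions, not figures from the paper, and the simple two-group setup is a simplification of the study's actual design:

```python
# Illustrative power analysis: how many participants does a two-group
# comparison need to detect a moderate effect (Cohen's d = 0.5)?
# All values here (d, alpha, power) are conventions, not the study's numbers.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed for 80% power at alpha = 0.05
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_required:.0f}")  # ~64 per group

# Conversely: with only 9 per group (18 total), what power do we get?
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=9)
print(f"Power with 9 per group: {achieved:.2f}")  # roughly 0.17
```

Even under these generous assumptions, reliably detecting a moderate effect takes roughly 64 people per group; nine per group gives power under 20%, meaning most real effects of that size would be missed and any "significant" finding is likely noise or inflated.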
How it affects you
If you're a student, educator, writer, or professional who uses AI tools to assist with thinking or writing, this study could shape the narrative around your work in misleading ways. Schools and universities may start citing studies like this as justification to ban or limit AI usage, without understanding what’s actually happening when people collaborate with these tools. You might start to feel guilty or intellectually inferior for using ChatGPT, even though your process may involve more critical evaluation, strategic prompting, and editing than ever before. These tools don’t make people think less — they make people think differently. But this study doesn’t acknowledge that. Instead, it reinforces the idea that AI use equals disengagement, potentially pushing educational institutions, funders, and media narratives toward a false binary: human cognition good, AI-assisted cognition bad. That kind of thinking is dangerous because it ignores the reality of how knowledge work is evolving — and risks stifling innovation in how we teach, write, and create.
So what should you do?
Rather than rejecting AI tools out of fear, this is the moment to get intentional about how you use them. The takeaway from studies like this shouldn’t be to abandon AI altogether — but to sharpen your awareness of how these tools change the way we think, write, and learn. The real danger isn’t cognitive decline — it’s passive use. If you’re active, reflective, and in control, AI becomes a powerful amplifier, not a crutch.
Here are three things to keep in mind:
Think critically about AI studies: Don’t accept headlines or claims at face value. Ask: how many participants? What were they actually measuring? Are the conclusions overstated? Treat studies like “Your Brain on ChatGPT” as early-stage, not conclusive.
Use AI deliberately, not passively: Don’t just accept whatever an LLM gives you. Prompt with purpose. Challenge its output. Edit and reframe. That active engagement is what keeps your thinking sharp.
Advocate for better frameworks: Whether you're in education, research, or creative industries, push for richer ways to measure cognitive activity and learning. AI changes how we think — and we need studies, policies, and tools that reflect that shift, not fear it.
SIGNAL SCRAPS
Sesame is a new online service in development for companies that want AI help building their brand with graphics, motion, sound, and campaign creative. The website says: “Campaign creative can be templatised to accelerate and augment ideation, production, and iteration. The result is increased speed-to-market and higher engagement. And it's fun to build with.”
Chinese company MiniMax has rolled out Video Agent, the latest AI-powered tool that generates high-definition videos from simple text prompts, joining an already crowded market. The tool currently supports 720p resolution at 25 frames per second, with a maximum video length of 6 seconds; the company plans to extend this to 10 seconds in future updates. It claims to set itself apart from rivals through ease of use and a facial-consistency feature.
Queensland University robotics researchers have developed a new robot navigation system that mimics the neural processes of the human brain and uses less than 10% of the energy required by traditional systems. Reporting in Science Robotics, neuroscientist Dr. Adam Hines says the team designed specialised algorithms that learn more like humans do, processing information in the form of electrical spikes, similar to the signals used by real neurons. Energy constraints are a major challenge in real-world robotics, especially in fields like search and rescue, space exploration, and underwater navigation. By using neuromorphic computing, the system reduces the energy requirements of visual localisation by up to 99%, allowing robots to operate longer and cover greater distances on limited power supplies.
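To make “processing information in the form of electrical spikes” concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron, the textbook building block behind neuromorphic systems. It's a generic illustration, not the Queensland team's algorithm, and every parameter value is an arbitrary assumption:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: membrane voltage leaks
# toward rest, integrates input current, and fires a spike at threshold.
# All parameters are arbitrary illustrative values, not from the paper.
def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # Leak toward rest plus input drive
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossed: fire and reset
            spikes.append(t)
            v = v_rest
    return spikes

# A constant drive yields a regular spike train: information is carried
# by spike timing and rate rather than by continuous activations.
steady_input = np.full(200, 1.5)
print(simulate_lif(steady_input))   # list of spike times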
SIGNAL STACK
Building Your City’s AI Strategy with the United States Conference of Mayors and Google
While 96% of mayors across the US told Bloomberg they're interested in generative AI, only 2% have actually deployed it. So Google, together with the US Conference of Mayors, is publishing Building Your City's AI Strategy, a free guide that walks municipal leaders through drafting and rolling out a city-wide AI plan. A step-by-step kit lowers the activation energy and could turn curiosity into pilots and policies.
Why it matters
Cities are at risk of falling into an “AI divide,” where well-funded urban centres adopt advanced tools while smaller, resource-strapped towns get left behind. A shared framework helps level the field, giving every city a chance to benefit from AI—not just the digital front-runners. Google also stands to gain. By offering free or low-cost tools, it encourages wide adoption while collecting valuable real-world data to improve its models. Public-sector use becomes a powerful feedback loop that fuels Google’s broader AI ecosystem. The guide promotes more than just efficiency. It frames AI as a way to free up staff for complex, human-facing tasks—improving services like bin collection or permit processing, not just cutting costs. In the end, Google’s play is simple: make AI easier to adopt, grow its user base, and create mutual value—better services for cities, stronger products (and profits) for Google.
AI knows a good beer when it tastes one!
Belgian researchers have brewed up an AI that can predict beer ratings and even help improve recipes, potentially transforming not just brewing but how we develop food and drink altogether. The scientists developed AI models that can predict how consumers will rate a beer and even suggest which aroma compounds to tweak to improve it. After five years of research, the team chemically analysed 250 beers and cross-referenced the data with expert tasting notes across 50 criteria. The result: an AI that can forecast flavour quality and guide brewers in crafting better beers, no human sampling required. (via EurekAlert.org)
» Read the Research Paper “Predicting and improving complex beer flavor through machine learning”
Why it matters
Beer reviews usually rely on personal taste and vague language—terms like “crisp” or “funky” that mean different things to different people. This AI system replaces that subjectivity with chemical-level analysis, offering a consistent, data-backed flavour profile. It brings a scientific foundation to an industry still driven by gut feel.

The real breakthrough isn't just analysis—it's impact. By tweaking a commercial Belgian ale's aroma profile, the AI actually improved the beer's taste in blind trials. That shows it's not just crunching data, it's shaping products people genuinely prefer—an early proof of concept for flavour optimisation. But it doesn't stop with beer. The model has broader potential across food and beverage design, especially for alcohol-free drinks. These often lack the complexity or satisfaction of their alcoholic counterparts, and this kind of AI could close that sensory gap—making alternatives feel more like the real thing.

More broadly, this marks a shift in food R&D. Instead of relying on endless trial and error, producers could use AI to home in on optimal formulations faster and more precisely. It's a glimpse of how machine intelligence might start co-authoring the future of taste itself.
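In machine-learning terms, the setup described above amounts to supervised regression from chemical measurements to tasting scores. Here is a minimal sketch of that idea with scikit-learn on synthetic data; the compound features and scores are fabricated stand-ins, and the model choice is an assumption for illustration rather than the paper's exact pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the real dataset: rows = beers, columns =
# concentrations of aroma compounds (the study analysed 250 beers).
n_beers, n_compounds = 250, 40
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_beers, n_compounds))

# Synthetic "appreciation" score driven by a few compounds plus noise,
# standing in for expert and consumer ratings.
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.3 * X[:, 7] + rng.normal(0, 0.5, n_beers)

model = GradientBoostingRegressor()
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f}")

# Feature importances hint at which compounds to tweak in a recipe.
model.fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("Most influential compounds (indices):", top)
```

Once a model like this predicts ratings well, the same importance rankings point brewers toward the handful of compounds worth adjusting, which is the recipe-improvement step the researchers validated in blind trials.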
UPDATES ON PREVIOUS COVERAGE
Movie industry and AI: Blogger Stephen Follows says that AI tools are already reshaping how films are written, sold, and made in big ways, but the biggest changes are still to come. For example, AI tools could conceivably shift the role of the screenwriter from a purely generative one to something more akin to an editor, curator, or systems designer. The “writer” might be shaping and refining content from machine outputs rather than always starting with a blank page. “And, rather terrifyingly, these tools may just be a brief intermediary step to the point when AI models can ‘write’ entire screenplays tuned to tone, structure, and cast without human insight. Whether we want that or not is a different topic entirely.” (via Stephen Follows blog)
Music copyright update: Music streaming service Deezer will start flagging albums with AI-generated songs, part of its fight against streaming fraudsters. Deezer, based in Paris, is grappling with a surge in music on its platform created using artificial intelligence tools it says are being wielded to earn royalties fraudulently. The app will display an on-screen label warning about “AI-generated content” and notify listeners that some tracks on an album were created with song generators.
Ads in ChatGPT update: OpenAI's Sam Altman says he is open to having adverts in the company's tools but is aware of the potential backlash. He told AdWeek that modifying the model's output based on who pays for the ad would be “a trust-destroying moment” for its users. He preferred pursuing the idea of showing ads outside the large language model's output stream, though he didn't specify what form those ads might take or where they might appear, such as a sidebar or footer.
FIELD READING
US small businesses use AI but don't want to pay for it
A U.S. Bank survey of 1,000 small businesses found 36% are already using generative AI, with another 21% planning to start soon. But nearly all are sticking to free or cheap options: 68% spend under US$50 a month (NZ$83.60).
» Read the Full Survey from US Bank “The Small Business Perspective: Leading Through Change, Shaping a Legacy”
Why it matters
Most small businesses are using AI but sticking to free tools — a common pattern in the early stages of a new tech cycle. That doesn't signal a dead end for monetisation; it highlights a market still figuring out where the real value lies. As the space matures, the difference between free access and paid capability will become sharper. Once businesses start seeing clear returns — faster workflows, better outputs, actual competitive edge — the willingness to pay will grow. What feels like resistance now is really hesitation in an immature market. The real opportunity lies ahead, as AI providers start drawing clearer lines between what's free and what's worth paying for. This isn't a sign that monetisation is failing — it's the early quiet before sustainable revenue models take hold.
DRIFT THOUGHT
If I can’t remember something I wrote with an AI… is that proof my brain’s disengaged — or proof the work didn’t need remembering in the first place?
___
YOUR TURN
Is using ChatGPT making us lazy — or just differently smart?
As AI becomes more embedded in how we write, think, and solve problems, the line between delegation and disengagement is getting harder to define.