Field Notes // July 6 / Goo Goo GPT; Baby’s First Bot
AI enters the nursery—safeguards still in nappies. Baby-friendly chatbots are arriving faster than the child-proof guardrails meant to keep them safe.
Welcome to Field Notes, a weekly scan from Creative Machinas. Each edition curates ambient signals, overlooked movements, and cultural undercurrents across AI, creativity, and emergent interfaces. Every item offers context, precision, and an answer to the question: Why does this matter now?
FEATURED SIGNAL
Me, Myself and AI: Young kids getting addicted to chatbots. How big a worry is it?
A survey of 1,000 children and 2,000 parents in the UK shows that a growing majority of children (64%) are using AI chatbots for help with everything from homework to emotional advice and companionship – with many never questioning the accuracy or appropriateness of the responses they receive.
Children are increasingly talking to AI chatbots as friends, even though most popular chatbots were never built for children to use this way. Over a third (35%) of children who use them say talking to an AI chatbot is like talking to a friend, while six in ten parents say they worry their children believe AI chatbots are real people.
The report warns that vulnerable children are most at risk, with the survey finding that 71% of vulnerable children are using AI chatbots. A quarter (26%) of vulnerable children who use them say they would rather talk to an AI chatbot than a real person, and 23% say they use chatbots because they have no one else to talk to.
It also warns that children are using AI chatbots on platforms not designed for them, without adequate safeguards such as age verification and content moderation.
Google's Gemini AI chatbot is now available to under-13s, and Musk’s xAI has launched Baby Grok, a version for kids.
Carsten Drees, an opinion commentator, argues that trying to keep children away from AI is an illusion; the genie is long out of the proverbial bottle. Children's conscious use of AI differs from our "adult" use, and it differs again between young children and older kids or teenagers. Where a little child prefers to be told a story or to spend hours discussing why he or she likes diggers, older kids and teenagers see AI as a kind of confidant, an adviser, or yes, perhaps even a friend.
But, Drees says: “Children are welcome to use AI for all I care — but please use it to train their critical thinking and not to outsource it.”
Why it matters
Young kids are using chatbots not just for homework, but for emotional advice, companionship, and even confiding in them like friends. Many don’t question the information they’re given, and one in three describes the experience as “like talking to a friend.” Worryingly, these tools aren’t designed for children and lack vital protections like age checks and content filters. The risks are greatest for vulnerable children: 71% are using chatbots, and a quarter say they prefer talking to AI over real people, often because they feel they have no one else. For parents, this raises serious questions about who, or what, is supporting their kids emotionally, and whether today's digital confidants are safe, helpful, or quietly harmful.
» Read the Survey's Key Findings
SIGNAL SCRAPS
Quick jolts from the edges: tiny but telling fragments, shifts, and whispers in the signal noise.
Reddit wants to launch its own search engine. The point of difference? Humans, rather than AI, providing the answers. Reddit CEO Steve Huffman told shareholders it would rely on the platform’s “human voices” because a survey suggested most people “believe some questions can only be answered by humans, as opposed to AI-generated summaries.”
OpenAI has pulled a feature that let users choose to have their conversations appear on Google. This came after people noticed personal health discussions appearing in Google search. OpenAI said: “Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option. We're also working to remove indexed content from the relevant search engines.”
Amazon wants to surround your answers from Alexa with advertisements, saying “there will be opportunities, as people are engaging in more multi-turn conversations, to have advertising play a role to help people find discovery, and also as a lever to drive revenue.”
Meta’s Mark Zuckerberg is insisting that everyone will one day wear AI glasses that “can see what we see, hear what we hear, and interact with us throughout the day.” He says the glasses will become our primary computing devices, and if you don’t get a pair, you’ll be “at a pretty significant cognitive disadvantage compared to other people who you’re working with, or competing against.”
AFTER SIGNALS
A quick pulse on stories we’ve already flagged—how the threads we tugged last time are still unspooling.
We told you how Delta Air Lines was starting to use AI to determine what fare you would pay and planned to ramp it up. There has been an outcry from US senators, who said this would mean fare increases up to each individual consumer’s personal ‘pain point’, and they demanded an explanation from the airline. Delta now says it won’t use AI to set personalised ticket prices.
We reported how Hertz and other car rental companies are using AI to scan rental cars for damage, and customers are complaining that even though the company’s human staff passed the car on return, an AI tool claimed to find a smudge or two and billed them. This may now spread to hotels and the wider hospitality industry after you check out. Experts say consumers should expect businesses across the service industry to deploy similar technology in the future, if they aren’t already. Hotels are working their way through these changes, according to Jordan Hollander, cofounder of Hoteltechreport.com, including using AI tools to check whether vaping or smoking happened in a room, a rule break that would incur a fine. “Between computer vision that can detect damage or wear in a room, and AI that analyses guest behaviour or room conditions in real time, the tech is already there.”
Recently, we explored the move to self-driving vehicles and Tesla’s moves to introduce ride sharing. In the first ever trial involving Tesla’s autonomous systems, a Florida jury has ordered the company to pay US$243m (NZ$410m) in punitive and compensatory damages. The case concerns a Model S with Autopilot activated that sped through an intersection and hit a parked car, killing a 22-year-old. The car had failed to brake.
SIGNAL STACK
Layered stories that demand a second look—emerging patterns, legal twists, and culture-shaping undercurrents.
Be careful what fun you have creating AI videos
The creators of an AI tool that let people make AI videos of NBA stars say they have received a cease-and-desist letter from lawyers representing Los Angeles Lakers star LeBron James. This marks one of the first known times a high-profile celebrity has threatened legal action against an AI company for enabling the creation of nonconsensual AI imagery of their likeness. It is also one of the first times we’ve seen a celebrity take legal action against nonconsensual, but not strictly sexual, AI-generated content, which is rampant on Instagram and other social media.
Why it matters
Mimicry can backfire: Even seemingly harmless AI videos where celebrities are made to say funny things or perform comedic skits can damage trust, mislead viewers, or misrepresent the person’s views. With deepfake realism improving fast, fake content can look strikingly authentic.
Legal and ethical risks: James’s action to remove AI videos that use his likeness without consent signals a growing demand for control over personal image in the digital era.
Misinformation and manipulation danger: Funny or satirical AI clips can be repurposed to spread disinformation, fraud, or political content, eroding public trust. As deepfakes blend into everyday media, people may stop questioning their authenticity and risk believing harmful lies.
It might be fun, but there are some legal lessons here:
It may pay to obtain consent even for satire: Just because a video is humorous doesn't mean it isn't risky. Explicit permission helps avoid defamation or misuse claims.
Clearly label AI content: Transparent disclaimers or overlays that indicate “AI-generated parody” can help viewers distinguish fiction from reality and preserve trust.
Avoid feeding volatile narratives: Even funny impersonations can be co-opted into scams, political messaging, or deceptive campaigns; once online, control is lost fast.
There is also an argument for encouraging detection and consumer literacy more generally: Encourage platforms to embed watermarking or artifact signals to help audiences detect synthetic media, and educate readers to stay sceptical.
Amazon CEO wants to put ads in your Alexa+ conversations
Amazon CEO Andy Jassy wants Alexa+ to start serving ads inside your AI conversations. On the Q2 2025 earnings call, he pitched multistep, AI-generated ads as a way to “help discovery” while driving revenue, hinting at future ad-free subscription tiers. Alexa+, Amazon’s generative AI assistant, has millions of users but mixed reviews, and Amazon is spending billions on AI chips and data centres to catch up with Google and OpenAI. Turning natural language chats into advertising space could be lucrative but raises privacy, accuracy, and user trust questions.
Why it matters
Turning AI assistants into ad channels blurs the line between answer and influence, creating a trust problem that’s hard to solve. Audio signposting makes ads obvious (and easier to skip), while hiding them risks user backlash and regulatory scrutiny. Embedding adverts into “helpful” AI replies isn’t just clumsy—it edges into dystopian persuasion, where every question risks becoming a sales pitch.
Dire need for AI support in primary and intermediate schools, survey shows
A new survey warns that NZ primary and intermediate schools urgently need AI guidance. Most teachers are experimenting with generative AI for lesson planning, but few schools have policies, and many rely on free, error-prone tools. Students are mostly using AI at home, sometimes as a “friend”, and over half say it feels like cheating. Researchers call for national guidance and critical literacy lessons to prevent over-reliance, bias, and privacy risks.
Why it matters
AI is already in kids’ lives, but most students and teachers are navigating it blindly. Without guidance, children risk forming unhealthy habits and dependencies: outsourcing thinking, trusting biased answers, or even treating AI as a friend. Schools that fail to teach critical interaction and literacy aren’t just missing a tech skill; they’re leaving students to shape their learning and relationships around tools they don’t fully understand.
FIELD READING
A visual pulse of the landscape—mapping connections, surfacing patterns, and turning raw signals into immediate insight.
GenAI Data Exposure: What GenAI Usage Is Really Costing Companies
Harmonic Security has found that sensitive corporate data appeared in more than 4% of generative AI prompts and over 20% of uploaded files in the second quarter of this year. The survey sampled a million prompts and 20,000 files submitted to 300 genAI tools and AI-enabled SaaS applications between April and June.
It found that 43,700 of the prompts (4.4%) and 4,400 of the uploaded files (22%) contained sensitive information, including credit card details. Some of the exposure also came through embedded AI features in everyday tools such as Canva.
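Those headline rates follow directly from the sampled counts; here is a minimal back-of-envelope check in Python, using only the figures quoted above:

    # Back-of-envelope check of Harmonic Security's reported exposure rates,
    # using only the counts quoted in the item above.
    prompts_sampled = 1_000_000   # ~1 million prompts sampled (April-June)
    prompts_sensitive = 43_700    # prompts found to contain sensitive data
    files_sampled = 20_000        # files sampled
    files_sensitive = 4_400       # files found to contain sensitive data

    print(f"Sensitive prompts: {prompts_sensitive / prompts_sampled:.1%}")  # 4.4%
    print(f"Sensitive files:   {files_sensitive / files_sampled:.1%}")      # 22.0%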
Why it matters
Enterprises are bleeding sensitive data into GenAI tools—often invisibly. More than one in five uploaded files carries proprietary or regulated content, and over a quarter of exposures come from free or unapproved accounts. Embedded AI inside everyday SaaS platforms compounds the problem, creating a layer of “silent leakage” security teams can’t easily see. Without data-first governance, intellectual property, financial models, and personal data are quietly flowing into third-party AI ecosystems—raising legal, regulatory, and reputational risks.
DRIFT THOUGHT
We taught kids to avoid strangers, yet today the stranger is every chatbot, ready to advise, to sell, and to copy our secrets. The challenge now: speak wisely when an unseen third party is always listening.
YOUR TURN
When AI tools double as confidants, tutors, and covert sales reps, where should we draw the new line between guidance and manipulation, especially for kids?
From classroom chatbots to ad-peppered voice assistants, the same systems shaping curiosity can nudge buying decisions or capture sensitive data. Share one concrete safeguard, policy, design choice, or habit that you believe could keep influence transparent without cutting off innovation.
If Alexa AI learns to sell nappies while I’m changing one, I’ve hit peak context collapse.