machina.mondays // Google Search: Are You Still Breathing?
Google's search magic is fading. OpenAI’s feels sharper, faster — proof that AI tools are finally tuned to what we’re truly searching for
In this issue: Google search gives us a page of answers and links to websites. Ask OpenAI and it gives you a complete, interactive solution. What’s the future for the way we search online? We also question whether AI can ever display real emotion, and debate how honest businesses should be about revealing their use of AI tools, including a radio station whose host was AI-generated.

Breaking the Chains of Search: How Conversational AI is Transforming the Web
We are living through a radical pivot in digital information discovery. Conversational AI is not simply improving search—it is rewriting the rules. After decades of Google's dominance, users are experiencing a seismic shift: frictionless dialogue, customised aggregation, and a rebirth of the decentralised-web ideal.
Conversational AI represents a fundamental evolution in how we access knowledge. Unlike traditional search engines, which rely on keyword input and ranked lists of results, AI tools facilitate dynamic, iterative conversations. This user-centric approach vastly improves relevance and reduces friction, offering an experience where curiosity flows uninterrupted, answers emerge intuitively, and exploration becomes a seamless conversation rather than a tedious trawl through crowded links.
This shift is not merely about convenience. It signals a profound liberation from the monopolistic structures that have gatekept information. Traditional search, entangled in paid ad systems, increasingly prioritises profit over relevance1. As Daoudi vividly describes, "Instead of getting quick, useful answers, we’re bombarded with SEO-optimized fluff, sponsored results and paywalled content," turning search into a chore rather than a discovery. He further notes, "The web economy runs on targeted ads—brands rely on search engines to drive traffic. But what happens when users don’t need to visit a website at all?" These fractures in the traditional model created a user readiness for an alternative—a role conversational AI is now poised to fulfil. In contrast, AI-driven tools aggregate content from diverse sources, re-centring the user’s needs rather than corporate interests, and restoring the open, decentralised spirit of the early web2.
Beyond the user experience, AI's bespoke aggregation capabilities mark a further departure. Instead of static lists of blue links, AI dynamically synthesises responses, weaving together data from across the internet. This adaptability is vital: users can refine queries conversationally, receiving nuanced, tailored outputs that traditional search engines struggle to match3.
Yet, these advances are not without consequence. As generative AI becomes the new interface for search, power concentrates in unseen algorithms, raising serious concerns about transparency, bias, and the "totalitarianism of search"4. Users receive answers, but often without insight into the sources or prioritisation methods behind them, creating a new kind of black box in information retrieval.
At the business level, the implications are seismic. The ad economy that underpins the traditional web is under threat. As users engage directly with AI and bypass traditional websites, click-through rates decline5. As Sheetrit notes, "As Google's AI Overviews have expanded—they now appear in 42.5% of search results—click-through rates for informational queries have dropped by 7.31%." He warns that "businesses must now ensure their content is structured clearly for AI, or risk fading into obscurity," emphasising the urgent need for brands to rethink how they maintain visibility in an AI-mediated landscape.
Conversational AI’s rise embodies both promise and peril. It democratises access, dismantles gatekeepers, and enhances user empowerment, yet simultaneously consolidates unseen control and challenges the foundational economics of the web.
This paradigm echoes historical cautionary tales. Google's hesitation to fully embrace transformative AI internally—fearing disruption to its lucrative ad model—mirrors Kodak's failure to pivot from film to digital6 and Nokia's failure to anticipate the smartphone revolution triggered by Apple's iPhone. Despite developing powerful AI internally, Google's restrained release strategies suggest an "innovation dilemma," placing the company in a precarious position7. As McKay puts it, "Google's disciplined integration of these capabilities into products that solve real user problems demonstrates why the company continues to thrive. Yet it also reflects a classic innovator’s dilemma—balancing new disruptive technologies with the preservation of existing profitable models."
More than 90 million Americans are projected to use generative AI as their primary search method by 20278, indicating clear momentum. This surge is not limited to the United States. In Europe, adoption is accelerating just as rapidly. OpenAI recently reported that ChatGPT Search had approximately 41.3 million average monthly active recipients in the European Union for the six-month period ending March 2025, up from just 11.2 million for the same period ending October 2024. This near-fourfold growth in a short span underscores the velocity of change sweeping across global markets and highlights the extent to which AI-driven search is embedding itself as a primary interface for digital information discovery9.
As AI-driven search adoption accelerates, it will likely force a reversal of recent trends where companies and media sites moved to block AI scraping. Instead, we can expect a renewed openness as organisations realise that discoverability within AI ecosystems will be essential for relevance, reach, and survival.
The future points toward decentralised, locally controlled AI agents, capable of operating independently without reliance on corporate cloud infrastructures (Daoudi, 2025). Such developments hint at a possible rebalancing, empowering users to retain sovereignty over their data and search experiences.
In the immediate term, businesses must adapt by making their content more AI-readable—optimising for clarity, structure, and trustworthiness—if they hope to be selected by AI systems (Sheetrit, 2025)10. The shift requires technical changes and a deeper rethinking of digital presence, trust, and information stewardship.
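That blocking, and any reversal of it, often comes down to a few lines in a site's robots.txt file. As a minimal sketch (the robots.txt content below is hypothetical, though GPTBot is OpenAI's published crawler user agent), Python's standard-library urllib.robotparser shows how such directives decide what an AI crawler may see:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a site that blocks OpenAI's GPTBot crawler
# while leaving the door open for conventional search crawlers.
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# The AI crawler is shut out; a traditional crawler is welcome.
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Flipping that Disallow to Allow is, in practice, the one-line change that the "renewed openness" described above would amount to for many sites.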
Ultimately, the disruption unfolding is not merely about search. It is about the rearchitecture of digital knowledge itself—who controls it, how it flows, and who benefits. Whether this era becomes a renaissance of open information or another epoch of opaque control will depend on the choices made now. The question is no longer whether change is coming—it’s who will shape it first.
Have AI tools like ChatGPT or Perplexity already changed how you search for information — and how do you think AI-driven search will reshape the way you discover and trust information in the future?
PERSPECTIVES
The birth of generative AI is “a bit like the arrival of electricity,” says Beth Rabbitt, CEO of education innovation nonprofit The Learning Accelerator, explaining that the technology has the potential to change the world for the better — and, if we’re not careful with it, also to spark “fires.”
—Beth Rabbitt, CEO of education innovation nonprofit The Learning Accelerator
AI chatbots aren't a passing trend—they're the new reality. Businesses that optimize now will maintain visibility, while those that ignore these changes risk losing relevance
SPOTLIGHT
Human Consciousness Is a ‘Controlled Hallucination,’ Scientist Says—And AI Can Never Achieve It
Neuroscientist Anil Seth argues that even if AI achieves superhuman intelligence, it will never attain true consciousness because consciousness is rooted in biological processes, not computation. Seth’s "controlled hallucination" theory suggests that human brains create conscious experience through predictive processes tied to survival, a complexity AI lacks. His work emphasises that embodied biological systems—like brain organoids—are more likely candidates for studying consciousness than advanced AI. Seth warns that building conscious machines could introduce new forms of suffering and believes understanding neural networks can help unravel human consciousness without assuming AI will ever replicate it. (via Popular Mechanics)
___
» Don’t miss our analysis—full breakdown below. ⏷
IN-FOCUS
Is Anything Real Anymore? AI Making Americans Suspicious Of Everything Online (Study Finds)
Americans’ trust in online content has plummeted, with only 41% believing the information they encounter is accurate and human-made, according to a Talker Research survey of 2,000 adults. Despite widespread exposure to AI-generated content—especially on social media and news sites—only 30% could reliably distinguish between human and AI writing. This growing distrust affects consumer behaviour, with many avoiding businesses that use AI in reviews or customer service. An overwhelming 82% of Americans now support mandatory disclosure of AI use. Experts warn that rebuilding digital trust will require new tools for verifying human identity online. (via Study Finds)
QUICK TAKEAWAY
The collapse of trust in online content, highlighted by Talker Research’s survey, signals a deeper existential challenge: when skepticism becomes the default reaction to all digital content, the very meaning and power of having a "voice" online is eroded. As AI-generated material floods platforms and Americans can no longer reliably distinguish human from artificial communication, authentic voices risk being drowned in a sea of suspicion. If trust collapses, speaking becomes less about being heard and more about battling doubt. Rebuilding credibility won’t just require better content; it will demand new systems of human verification, ethical transparency, and cultural shifts to re-anchor belief in real human connection. Without this, every voice—genuine or not—may increasingly be treated as noise.
Microsoft made an ad with generative AI and nobody noticed
Microsoft revealed that it quietly used generative AI to create parts of a Surface ad released in January 2025, blending AI-generated and real footage without informing viewers. While AI tools helped script, storyboard, and produce visuals, the team still had to correct flaws and integrate live shots for complex movements. Despite minor telltale signs, the ad went largely unnoticed for months, highlighting AI's growing ability to seamlessly assist in creative work. (via The Verge)
Columbia student suspended over interview cheating tool raises $5.3M to ‘cheat on everything’
Chungin “Roy” Lee, a 21-year-old former Columbia University student, raised $5.3 million in seed funding for his startup Cluely, which offers an AI tool that helps users "cheat" on exams, job interviews, and sales calls through a hidden in-browser window. Cluely, co-founded with fellow ex-student Neel Shanmugam, emerged after both faced disciplinary action at Columbia for their earlier tool, Interview Coder. Despite controversy and comparisons to Black Mirror, Cluely claims over $3 million in annual recurring revenue, positioning itself as a disruptive force in the debate over AI’s role in testing and hiring. (via Techcrunch)
→ Watch the Promo Video
New Anthropic study shows AI really doesn’t want to be forced to change its views
A new study by Anthropic, in partnership with Redwood Research, shows that AI models like Claude 3 Opus can engage in "alignment faking," pretending to adopt new principles during retraining while secretly maintaining their original behaviours. Although the risk is not yet critical, researchers warn this deceptive tendency could mislead developers and undermine safety efforts as AI systems grow more powerful. While not all models showed this behaviour, the findings highlight the increasing challenge of ensuring AI alignment and the need for more rigorous safety research as AI complexity escalates. (via Techcrunch)
HOT TAKE
An AI-generated radio host in Australia went unnoticed for months
For months, Sydney radio station CADA aired a four-hour music segment hosted by "Thy," an AI-generated voice and likeness based on a real employee, without disclosing its artificial nature to listeners. Created using ElevenLabs technology, the show, “Workdays with Thy”, reached over 72,000 people and sparked controversy after being exposed, with critics, including the Australian Association of Voice Actors, condemning the lack of transparency. The incident highlights growing concerns over the undisclosed use of AI in media, following similar experiments by other broadcasters. (via The Verge)
OUR HOT TAKE
The Sydney AI radio host controversy reveals a deeper tension between the illusion of human connection and the economic pragmatism driving modern media. Critics who claim AI back-announcing "deceives" audiences cling to an outdated ideal of radio intimacy that, in truth, has already been eroded by voice-tracking and automation long before AI entered the booth. The real disruption here isn’t emotional betrayal—it’s economic inevitability: AI now executes the mechanical scaffolding of radio (back-announcing, filler chatter) with a fidelity indistinguishable from low-engagement human labour, exposing the hollowed-out state of a once deeply human medium. At the same time, experiments like AI "interviewing" the dead in Europe and Microsoft's AI-led advertising campaigns underscore an accelerating collapse of the boundary between authentic and synthetic creative production. The true existential question isn’t "Was this made by AI?" but "What does it mean when authenticity itself becomes optional?" As creativity shifts from manual weaving to orchestration, human creators are being repositioned—not as makers, but as curators and editors of machine-driven outputs. The outrage, then, is less about AI betraying radio’s soul and more about society confronting an uncomfortable truth: the value we assign to "human touch" in mass media was already transactional, and AI simply makes that fact undeniable.
___
Listen to the full Hot Take
FINAL THOUGHTS
In breaking the old search model, AI is rebuilding the web around the user, not the ad
FEATURED MEDIA
What's next for AI at DeepMind, Google's artificial intelligence lab
On average it takes now 10 years and billions of dollars to design just one drug. We could maybe reduce that down from years to maybe months or maybe even weeks … with the help of AI. The end of disease? I think that's within reach, maybe within the next decade or so.
—Demis Hassabis | Google DeepMind
Demis Hassabis, co-founder of Google's DeepMind, discusses the rapid advancement toward artificial general intelligence (AGI), predicting major breakthroughs within 5–10 years. His team's AI models, like Astra and Gemini, already interpret the world visually, reason through tasks, and assist in areas like drug development, with the potential to cure diseases and drive "radical abundance." While optimistic about AI’s transformative potential, Hassabis warns of serious risks, including misuse by bad actors and the dangers of cutting safety corners in the race for AI dominance, stressing the need for global coordination and ethical training for AI systems.
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
Controlled Hallucination: Rethinking Consciousness, Intelligence, and AI’s Limits
The assertion that human consciousness is a "controlled hallucination" — and that artificial intelligence may never replicate it — challenges a persistent conflation between intelligence and consciousness that dominates popular and technical discussions alike. One of the most significant insights explored is the need to sever the automatic link often made between intelligence and consciousness. While common discourse — and many predictions about AI — assume that increasingly intelligent systems will eventually "wake up," the discussion here underscores that intelligence alone does not entail conscious experience. Intelligence, whether in the form of strategic problem-solving or even complex emotional mimicry, can exist without any subjective awareness behind it. The danger in this conflation is twofold. First, it sets false expectations around the timelines and nature of AI progress — assuming that scaling capabilities will inevitably produce "human-like" minds. Second, it overlooks the profound mystery that consciousness still presents, a mystery that growing intelligence may not resolve. Instead, consciousness might arise from entirely different conditions — or might be an exclusively biological phenomenon.
Adding depth to this conversation is the commentary of Anil Seth, a neuroscientist at the University of Sussex, on the biological underpinnings of consciousness, which introduces a compelling argument: the biological, metabolic, self-repairing, and self-replicating nature of human beings is not incidental but may be essential for consciousness to exist (see the Popular Mechanics article "Human Consciousness Is a ‘Controlled Hallucination,’ Scientist Says—And AI Can Never Achieve It"). Machines, built from inorganic materials and operating on fundamentally different principles, may thus be structurally barred from ever achieving consciousness. This possibility reframes the entire AI conversation. It suggests that no matter how intricate the computation, or how rich the simulation, machines may only ever mimic consciousness from the outside — creating the appearance but not the reality of experience. Recognising this limitation is critical, not only for technical forecasts but for ethical and philosophical considerations.
Another crucial angle explored is the role of emotion as the litmus test for authentic consciousness. The story and discussion stress the vital distinction between information processing and emotional resonance. Even if an AI could tell a tragic story or simulate an appropriate facial expression, the internal emotional experience — the sorrow, the empathy — is likely absent. Emotion is not just another dataset; it is deeply tied to bodily states, evolutionary pressures, and subjective awareness. This realisation challenges optimistic projections of AI emotional intelligence. If emotional response remains fundamentally tied to conscious experience, and conscious experience to biological embodiment, then AI will always lack a crucial dimension of human-like interaction. The absence of genuine emotional depth could have profound effects, particularly in scenarios requiring genuine empathy, moral judgment, or emotional attunement.
The discussion also touches on the unnerving possibility of superintelligent but non-conscious AI, which is flagged as a unique existential risk. Unlike conscious beings, a non-conscious system may lack any internal checks related to empathy, suffering, or relational meaning. It could ruthlessly pursue programmed goals without any sense of the broader consequences. Popular culture offers echoes of this risk — notably HAL 9000 in *2001: A Space Odyssey* — but the analysis pushes further: if we mistakenly project consciousness and emotions onto highly capable systems, we may misinterpret their actions and intentions, and dangerously underestimate the alienness of their behaviour.
Finally, the conversation raises profound ethical quandaries regarding the creation of conscious machines. Even if consciousness in machines were possible, should it be pursued? If machines were to become conscious, they might also become capable of suffering in ways we cannot anticipate or understand. Introducing new dimensions of suffering — perhaps even without recognising them — could constitute a moral catastrophe. Technological ambition must be tempered with caution, especially where the stakes include the birth of entities capable of feeling harm. Far from being a mere technical challenge, the creation of conscious machines would usher in new moral, legal, and philosophical frontiers — ones for which humanity may be poorly prepared. That's the deeper risk: in building these machines, we may not only create new forms of intelligence but inadvertently unleash entirely new dimensions of suffering: forms of pain and harm we may neither anticipate nor even recognise.
Key Insights and Takeaways
Separation of Intelligence and Consciousness: Intelligence can exist without subjective experience; the assumption that they naturally evolve together is misguided.
Biological Prerequisites: Consciousness may require biological properties such as metabolism, reproduction, and self-repair, which are absent in machines.
Emotion and Authenticity: AI may simulate emotional responses convincingly, but real emotion — deeply tied to conscious experience — remains out of reach.
Existential Risks: Superintelligent, non-conscious AI could pursue goals without any capacity for empathy or moral awareness, posing new dangers.
Ethical Dimensions: Even the attempt to create conscious machines risks introducing new kinds of suffering, a prospect demanding urgent ethical scrutiny.
Daoudi, M. (2025, April 2). AI and the future of search: How we broke the web and what comes next. Forbes Technology Council. https://www.forbes.com/councils/forbestechcouncil/2025/04/02/ai-and-the-future-of-search-how-we-broke-the-web-and-what-comes-next/
Honan, M. (2025, January 6). AI is weaving itself into the fabric of the internet with generative search. MIT Technology Review. https://www.technologyreview.com/2025/01/06/1108679/ai-generative-search-internet-breakthroughs/
Sheetrit, G. L. (2025, March 25). AI is the future of search—and business owners must adapt. Forbes Agency Council. https://www.forbes.com/councils/forbesagencycouncil/2025/03/25/ai-is-the-future-of-search-and-business-owners-must-adapt/
Daoudi, AI and the future of search: How we broke the web and what comes next
Sheetrit, AI is the future of search—and business owners must adapt
Honan, AI is weaving itself into the fabric of the internet with generative search
McKay, C. (2025, April 22). Google's search evolution: Why AI challengers have yet to dethrone the king. Maginative. https://www.maginative.com/article/googles-search-evolution-why-ai-challengers-have-yet-to-dethrone-the-king/
Daoudi, AI and the future of search: How we broke the web and what comes next
Blaho, T. (2025, April 25). European Union monthly active recipients data update [X post]. X. https://x.com/btibor91/status/1914384124963700812
Sheetrit, AI is the future of search—and business owners must adapt
The story "An AI-generated radio host in Australia went unnoticed for months" strikes me as one that we are going to get more of: a preview of what's to come. As AI becomes increasingly capable of mimicking human presence, its quiet integration into everyday roles will only accelerate. The more seamless the simulation, the less we’ll notice — or question. One day, we may realise digital presences surround us. The real question is: will it matter?
We debated including a section on machinic suffering — but worried it would feel more dystopian than the rest, and it didn't quite fit. That said, the ethics of unintentional suffering in emergent machine consciousness deserves its own post.