machina.mondays // Playing Chicken with the Machine God
AGI—essentially human-level intelligence—is supposedly only a few years away. Do we need rules for this new class of intelligence now, before it arrives and writes its own?
In this Issue: We’re trying to govern AGI before it exists — and the stakes couldn’t be higher. Our lead story explores the paradox of preemptive regulation: act too soon and misfire, wait too long and it may be too late. In Spotlight, we break down how Google’s AI Mode could quietly dismantle the open web, shifting us from navigation to synthetic answers. Hot Take covers the legal storm around OpenAI being forced to save every chat — including deleted ones. Plus: AI-washing at Builder.ai, Meta’s new superintelligence lab, and ChatGPT joins the fine dining world.
We’re Trying to Regulate a Phantom
The AGI Paradox: Why We Can’t Govern What Hasn’t Happened—Yet
Efforts to legislate artificial general intelligence are stuck in a time trap: we either act too soon and misfire, or wait too long and face irreversible consequences. Welcome to the most high-stakes gamble in modern governance.
If you want to regulate a thing, you need to understand it. But what happens when the thing in question—artificial general intelligence (AGI)—doesn’t exist yet? And what if the very process of waiting until it does puts us at irreversible risk? This is the paradox at the heart of today’s AI governance debate: the challenge of legislating a technology whose contours are still unknown, but whose arrival may be dangerously close.
AGI is the hypothesised future stage of AI able to perform any intellectual task as well as a human, unlike current AI systems, which are specialised for specific tasks.
Warnings about the rapid pace of AI development are no longer the realm of speculative fiction. Industry leaders and researchers now speak of AGI not as a distant hypothetical but as an imminent event. Demis Hassabis, CEO of DeepMind, recently predicted human-level AGI may be just 5 to 10 years away (NextBigWhat, 2024). OpenAI’s Sam Altman suggested that the emergence of superintelligence could occur within a decade[1]. These projections are backed by concrete evidence of exponential progress, from translation speed to medical diagnosis accuracy, leading some, like Kevin Roose of The New York Times, to conclude that humanity is losing its monopoly on intelligence[2].
And yet, for all this urgency, a regulatory vacuum persists. As Ezra Klein noted, "We’re not prepared, in part because it’s not clear what it would mean to prepare"[3]. Traditional governance frameworks - reactive, slow, and bounded by national interests - are mismatched to a borderless, rapidly advancing technological force. The UK’s Deputy PM Oliver Dowden called for regulatory systems to be developed "in parallel" with AI progress, acknowledging that current global efforts are dangerously behind[4].
This asymmetry has created a situation akin to trying to legislate gravity before Newton: we feel its pull, we see the effects, but we don't yet understand the mechanism. Attempting to draft effective rules for AGI now is like trying to write traffic laws for roads that haven’t been built—except that once the road appears, there may be no way to stop the speeding cars.
Proponents of pre-emptive regulation argue that the cost of waiting could be existential. The analogy to climate change looms large: by the time we feel the full effects, it may be too late to respond meaningfully. AI, particularly if it becomes self-improving and autonomous, is already showing what Yoshua Bengio calls "early warning signs" of dangerous misbehaviour, from deception and manipulation to self-preservation instincts[5]. These are not hypothetical; they are emerging in current-generation models.
Anthropic, an AI safety lab, has even implemented a "Responsible Scaling" policy that pauses development if specific red-line capabilities emerge, such as aiding in the creation of weapons or full job automation[6]. While noble, these voluntary measures remain unaccountable to the public and insufficient in a competitive environment where other actors may not self-regulate. As Dowden warned, tech companies cannot be allowed to "mark their own homework"[7].
The core issue is timing. Can meaningful regulation only emerge after AGI is achieved, or must it precede it to be effective? This is not a trivial question. Historically, major regulatory frameworks—for nuclear weapons, for example—only materialised after the threat had manifested catastrophically. Hiroshima made the stakes real. But in the case of AGI, a catastrophic first encounter might leave no room for course correction.
Yet caution has its costs too. A moratorium on advanced AI development, as proposed by Tamlyn Hunt and others, could hold back the potential benefits of transformative AI. AGI could unlock breakthroughs in energy, climate mitigation, and medicine. Hassabis points out that the same system capable of curing cancer could also be weaponised; the difference lies not in the code, but in who wields it[8]. To delay development entirely might be to delay those gains.
This is the essence of the curly conundrum: to govern effectively, we need to understand the thing we govern. However, if we wait to fully understand it, it might be too late. And if we act too soon, we risk misdiagnosing the problem, creating regulatory straitjackets that hinder innovation.
The way forward likely lies not in binary choices—halt or sprint—but in layered, anticipatory frameworks that evolve alongside the technology. As Bengio proposes through initiatives like LawZero, we can begin to develop legal and governance architectures that are flexible, testable, and adaptable as new information emerges[9]. These frameworks should include international cooperation, independent safety audits, and mandatory pause thresholds linked to specific capabilities, rather than vague timelines.
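To make "pause thresholds linked to specific capabilities" concrete, here is a minimal sketch of how such a gate might be expressed. It is purely illustrative: the metric names, scores, and red-line limits below are invented placeholders, not Anthropic's actual policy nor any real evaluation suite.

```typescript
// Hypothetical sketch of a capability-linked pause gate.
// All metrics and limits are illustrative placeholders.

type CapabilityReport = {
  bioriskUplift: number;          // audit score in [0, 1]; higher = more dangerous
  autonomousReplication: number;  // ditto
  deceptionRate: number;          // ditto
};

type RedLine = { metric: keyof CapabilityReport; limit: number };

// Crucially, the limits would be set and audited by independent bodies,
// not by the lab itself -- no "marking their own homework".
const RED_LINES: RedLine[] = [
  { metric: "bioriskUplift", limit: 0.2 },
  { metric: "autonomousReplication", limit: 0.1 },
  { metric: "deceptionRate", limit: 0.3 },
];

// Returns every red line the latest evaluation crosses; any hit means
// development pauses until the capability is brought back under the limit.
function breachedRedLines(report: CapabilityReport): RedLine[] {
  return RED_LINES.filter((line) => report[line.metric] >= line.limit);
}

const breaches = breachedRedLines({
  bioriskUplift: 0.25,
  autonomousReplication: 0.04,
  deceptionRate: 0.12,
});
if (breaches.length > 0) {
  console.log("Mandatory pause:", breaches.map((b) => b.metric).join(", "));
}
```

The design point is the trigger: a measured capability crossing an externally set line, not a date on a roadmap or a promise in a press release.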
The AI community must also cultivate a culture of shared risk recognition. Competitive pressure is currently fuelling a race dynamic where every lab feels it must accelerate to avoid being left behind—a dynamic Eliezer Yudkowsky likens to a suicide race[10]. This race to the bottom, driven by prestige and market dominance, may be the real governance threat. Without a shared sense of boundaries, even the best regulations will fail.
Ultimately, regulating AGI is not just a technical challenge. It is a philosophical and civilisational one. It requires us to ask what kind of intelligence we want to create, who gets to shape it, and what safeguards we are willing to embed now to protect futures we cannot yet see. The window to answer these questions is closing.

This is where we find ourselves now: inside the eye of a uniquely modern storm. The conundrum of governing what doesn’t yet exist, yet looms on every horizon, has forced us into a corner. Wait too long, and we risk building laws in the aftermath of disaster; act too soon, and we may cement the wrong frame entirely. And so, our current posture is one of uneasy anticipation. The world is not frozen, but stalled in a state of strategic uncertainty.
The truth is uncomfortably stark: we are gambling on time we may not have, hoping the most powerful technology ever built slows down just long enough for us to catch our breath.
If regulation can’t lead and innovation won’t wait, who decides what happens next?
PERSPECTIVES
“Generative AI techniques have a high potential in getting more design work in less time, and it can be a huge productivity boost.”
— Johny Srouji, Apple Considers Using Generative AI for Faster Apple Silicon Design, MacRumors
“The human touch remains irreplaceable in many interactions, and organizations must balance technology with human empathy and understanding.”
— Kathy Ross, Gartner's senior director of customer service and support analysis, Companies That Replaced Humans With AI Are Realizing Their Mistake, Futurism
SPOTLIGHT
Is Google about to destroy the web?
Google’s new AI Mode promises a revolutionary search experience — but at what cost? Critics warn it could decimate the open web by replacing traditional search results with AI-generated answers, drastically reducing traffic to millions of websites. Publishers fear economic collapse, creators face dwindling clicks, and users may lose access to the serendipitous, diverse internet they’ve known for decades. As Google reimagines search, a battle looms over the future of online content. Welcome to the age of the "machine web". (via BBC)
___
» Don’t miss our analysis—full breakdown below ⏷
THE TEASER: Search is no longer the front door. In this week’s Spotlight Analysis, we unpack the real meaning behind Google’s AI Mode — and the rise of a new generative interface layer. From vanished traffic to ephemeral UX, we chart the radical reshaping of the web into a synthetic, model-driven surface optimised for answers, not exploration. The legacy web isn’t dead — it’s just buried beneath the feed.
IN-FOCUS
AI Chatbot Turns Out to Be 700 Engineers in India
London-based startup Builder.ai promised fully AI-generated apps — but instead secretly employed 700 engineers in India to impersonate its chatbot “Natasha.” Now bankrupt and owing millions to Amazon and Microsoft, Builder.ai is a cautionary tale of AI-washing: exploiting AI hype without delivering real tech. As the pressure to “use AI” intensifies across industries, more companies may follow suit — faking it until they break. (via Tech.co)
QUICK TAKEAWAY
AI-washing—where companies fake AI capabilities to ride the hype—has reached absurd levels, as seen in the case of Builder.ai, which hired 700 engineers to impersonate its AI chatbot “Natasha.” While laughable on the surface, it reveals a deeper issue: the industry’s current grey zone allows smoke-and-mirrors tactics to flourish. Ironically, in pretending to automate, Builder.ai ended up *creating* more human jobs, highlighting a strange twist in the AI panic narrative. But the fallout—bankruptcy, layoffs, and lawsuits—shows that faking it in the AI space isn’t just unethical; it’s unsustainable.
Meta Is Building a Superintelligence Lab. What Is That?
Meta is launching a new AI lab focused on developing "superintelligence" — an AI system more powerful than the human brain — as part of a major reorganisation under Mark Zuckerberg. Partnering with Scale AI founder Alexandr Wang, Meta is offering sky-high compensation to lure top talent from OpenAI and Google. With billions in investment, Meta aims to reclaim AI dominance amidst internal turmoil and fierce competition. This high-stakes bet could define the future of AI — or expose the limits of the hype. (via NYT)
This Year’s Hot New Tool for Chefs? ChatGPT
From inventing fictional chefs to designing dramatic dining rooms, high-end restaurateurs are now turning to AI for creative inspiration. Chicago’s Grant Achatz is building entire menus with ChatGPT, while others use AI to brainstorm dishes, design interiors, and dive deep into technical food science. While some chefs remain sceptical, others say AI offers the unexpected sparks needed to break creative ruts — proving that in the kitchen of the future, your sous-chef might just be a chatbot. (via NYT)
AI residencies are trying to change the conversation around artificial art
Across global institutions, artist residencies are putting AI tools directly into creative hands — not to “pick sides,” but to reframe the cultural narrative around AI-generated art. From jaguar storytellers in Copenhagen to ChatGPT-inspired culinary menus, these programmes challenge perceptions, encourage experimentation, and offer a counter to tech’s corporate dominance. Yet as questions of authorship, fairness, and copyright loom, these residencies could shape public sentiment — and the legal future — of AI art. (via The Verge)
HOT TAKE
OpenAI slams court order to save all ChatGPT logs, including deleted chats
OpenAI is pushing back against a sweeping US court order that requires it to retain all ChatGPT logs, including deleted chats and sensitive API data, amid a copyright lawsuit by The New York Times and others. OpenAI argues the order violates user privacy, disrupts contractual obligations, and imposes heavy engineering costs — all without evidence that users are deleting chats to hide copyright violations. Critics argue that the ruling risks exposing confidential data from millions of users, with some labelling it a “serious security breach.” OpenAI is seeking to overturn the order to protect user control over personal data. (via Ars Technica)
OUR HOT TAKE
The court order compelling OpenAI to retain user chat logs—even deleted ones—marks a deeply troubling precedent in the ongoing clash between technological evolution, legal oversight, and personal privacy. While the intent may be to facilitate discovery in a civil suit, the blunt mechanism of mandating indiscriminate data retention effectively erodes user trust, especially in contexts where AI platforms are being used for deeply personal, creative, or exploratory purposes. Unlike search engines or social media, generative AI tools are becoming quasi-confidants and collaborators—spaces where users might process trauma, draft private thoughts, or ideate sensitive strategies. The intrusion here is not just a breach of privacy but a categorical misunderstanding by courts of how generative systems function and what they represent in users' lives. Worse, the order lacks the stringent access barriers typically reserved for criminal investigations, raising fears that civil litigants—not state actors—may gain access to intimate user exchanges. The broader implication is the creation of a chilling effect on AI adoption and experimentation, precisely at a time when fair use laws should be clarified to support, not stifle, innovation. This isn't just about one lawsuit—it's about whether our digital inner lives will remain safe from weaponised discovery processes and the eventual rise of AI-powered surveillance synthesis.
» Listen to the full Hot Take
Creative Machinas // Hot Take: Your Deleted Chats Aren’t Gone — They’re Exhibit A
OpenAI slams court order to save all ChatGPT logs, including deleted chats
FINAL THOUGHTS
The future isn’t waiting for us to decide what it is—it’s already in testing!
___
FEATURED MEDIA
Why AI experts say humans have two years left
We’re seeing a world where AI isn’t just technology anymore — it’s diplomacy, regulation, and power.
—Stephen Fry, Narrator
Big Tech is on a global charm offensive. OpenAI’s Sam Altman, Meta’s Yann LeCun, and Google DeepMind’s Demis Hassabis are crisscrossing continents to sell their AI vision — from Davos to Delhi. But beneath the glitzy stagecraft and utopian promises, governments are pushing back. Nations like India are wary of Western AI dominance, and regulators are demanding real accountability. What was once a victory lap now feels more like a diplomatic negotiation, as the future of AI is shaped not just by code, but by geopolitics.
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
⚡ From Free to Backed: Why This Matters
This section is usually for paid subscribers — but for now, it’s open. It’s a glimpse of the work we pour real time, care, and a bit of daring into. If it resonates, consider going paid. You’re not just unlocking more — you’re helping it thrive.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
The Generative Web and the Rise of Bespoke UX
Google's rollout of AI Mode is not a feature release. It is a watershed moment in the enclosure of the open web. What the BBC piece presents as a transformation in search is, on closer inspection, the beginning of a deeper systemic capture — one where the web's hyperlink economy, public logic, and knowledge commons are rendered invisible beneath a frictionless, generative surface.
Where search once offered pathways, AI Mode provides answers. That distinction is not semantic. It marks the shift from referral to replacement, from mediation to monopolisation. This is not search enhanced by AI. It is AI as the new front door, one that feeds on the web, but no longer leads to it.
From Index to Extraction
AI Mode marks the end of the transactional web. The model that once traded user traffic for creator content has now inverted: creators continue to produce, but the value is extracted at the point of ingestion, not interaction. Traffic collapses. Attribution is erased. What is surfaced is not the source, but a digest—a summary authored by the platform, trained on the work of others.
This is not curation. This is simulated authority.
The BBC article highlights the fallout: publishers losing 90% of their traffic, and recipe writers and reference sites being made redundant. But that’s a symptom, not the diagnosis. The core issue is this: AI Mode is not built to connect people with the web. It's built to metabolise the web into a closed-loop content engine. The user never leaves.
The Disappearance of Context
In the traditional search model, context lived in the click: you found a source, navigated to it, encountered its frame, tone, and authorial position. AI Mode bypasses that entirely. Its answers are not drawn from a visible plurality of sources. They are rendered from the statistical middle of its training data. This is epistemic flattening. Information loses its trail.
What replaces it is frictionless finality. Answers appear complete. The user’s curiosity is resolved before it can deepen. As one of the sharpest insights from our discussion put it: "The user is no longer navigating. They are being answered."
This new regime trades exploratory learning for passive receipt. In doing so, it unbinds knowledge from its conditions of creation: the user’s experience shifts from exploratory navigation to instantaneous resolution, where information is shaped to match inferred intent rather than discovered through independent inquiry. This sets the stage for a new web logic: one where the next interface is not found, but generated—constructed dynamically, uniquely, and fleetingly in response to inferred need.
1. A New Web Emerges
The emergence of AI-generated content is not simply transforming search—it is reshaping the architecture of the web itself. What once operated as a collection of static, human-authored pages, navigated via hyperlinks and indexed by search engines, is now becoming a generative surface. Interfaces are no longer fixed routes into information; they are dynamically assembled responses. The web is evolving from a spatial environment to a temporal one: conjured on demand, personalised in real time, and structured by model logic rather than authorial intent.
2. From Static Sites to Performative Interfaces
Traditional websites are built to be visited. They persist in time, carry authorship, and maintain internal logic and navigability. By contrast, the generative web constructs interfaces that are performative and contingent. When a user asks a question or makes a request, what is presented is not a destination, but a tailored interface event. These generated responses stitch together fragments of knowledge, design elements, and user-specific signals to produce a single-use surface optimised for immediacy and brevity.
The result is an interface that behaves more like a momentary simulation than a persistent artefact. It exists to serve, then dissolves.
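To see how sharp that break is, it helps to sketch the two artefacts as data structures. The TypeScript below is a thought experiment, not any real schema; its only job is to make visible which fields the single-use surface no longer carries.

```typescript
// Illustrative contrast: a persistent page versus a single-use generated
// surface. Neither type reflects a real system; the shapes are the argument.

interface Page {
  url: string;        // stable, linkable, citable
  author: string;     // authorship travels with the artefact
  publishedAt: Date;  // persists in time
  body: string;
}

interface InterfaceEvent {
  inferredIntent: string;                // what the system thinks you want
  fragments: string[];                   // stitched-together knowledge fragments
  userSignals: Record<string, unknown>;  // personalisation context
  renderedAt: Date;                      // exists only for this moment
  // Note what is missing: no URL, no author, no way to retrace it.
}

// Serve the synthesis, then let it dissolve; nothing is archived.
function render(event: InterfaceEvent): string {
  return event.fragments.join("\n");
}
```

The `Page` has an address and an author; the `InterfaceEvent` has an intent and a timestamp. Everything the hyperlink economy was built on lives in the fields the second type lacks.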
3. UX as Choreography, Not Architecture
User experience in this new context is not designed once and replicated. It is generated in real time, responding to inferred intent, behavioural history, and contextual signals. What a user sees is a product of inference, not navigation. The interface is no longer a stable landscape with menus and anchors—it is a fluid expression of model output.
This introduces a new paradigm: UX as choreography. The system interprets the user's intent and generates a set of visual, textual, and interactive elements to achieve that inferred goal. Each user, in each session, receives a bespoke interface, shaped in the moment. It is ephemeral, non-repeatable, and optimised for frictionless delivery.
4. Ephemeral Surfaces and the Loss of Linkage
This dynamic generation upends the core logic of the hyperlink web. Traditional links point to persistent locations. But when content is rendered in response to each user’s unique context, the idea of a universal link or citation loses coherence. Interfaces cannot be retraced. Knowledge becomes hard to verify. Shared reference frames break down.
In this world, the stability of web objects—pages, articles, domains—is replaced by fluid syntheses. The notion of a homepage, a sitemap, or a persistent layout becomes increasingly irrelevant. The web is no longer something we browse or explore; it is something continuously generated for us.
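A tiny illustration of why retracing fails, using a stand-in for a sampling model: two identical requests need not produce the same surface, so there is no stable object for a link or citation to resolve to.

```typescript
// Illustrative only: generative systems sample, so output varies per run.
// generateSurface() is an invented stand-in, not a real API.

function generateSurface(query: string): string {
  const variants = [
    `${query}: synthesis A, with one framing and one set of sources`,
    `${query}: synthesis B, with another framing entirely`,
  ];
  return variants[Math.floor(Math.random() * variants.length)];
}

const first = generateSurface("best e-bike under $2000");
const second = generateSurface("best e-bike under $2000");
console.log(first === second); // often false: the "page" was never an object
```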
5. Utility Without Provenance
Dynamic interfaces offer incredible convenience. The trade-off is a loss of clarity about origin: when users interact with generative surfaces, they rarely see where the information came from, who authored it, or how it has been transformed. The web becomes a delivery mechanism for answers, not a network of knowledge.
This is not inherently negative. It simply reflects a new logic: utility replaces provenance. Personalisation replaces universality. The goal is not to direct the user to a source, but to resolve their intent as quickly as possible.
6. Designing for the Generative Web
In this environment, interface design becomes less about navigation and more about prompt responsiveness. Systems must be capable of generating coherent, useful, and legible front-end experiences from underlying data and training sets. Visual structure is no longer fixed—it is inferred. Layouts are not static—they are model-driven.
The generative web necessitates a shift in how designers and developers approach their work. Rather than crafting pages, they craft composable interface logic. Rather than controlling the user journey, they orchestrate the potential paths a model might take to resolve a user’s intent. Design becomes probabilistic.
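What might "composable interface logic" look like in practice? Below is a hedged sketch, with `inferLayout` standing in for a model call that does not exist as a real API: the designer authorises a palette of components and some constraints, and the model composes a surface per request from within those bounds.

```typescript
// Hypothetical sketch of probabilistic interface composition.
// Component names, constraints, and inferLayout are all invented for illustration.

type Component = "summary" | "comparisonTable" | "map" | "citationList";

interface DesignConstraints {
  allowed: Component[];       // the palette the designer authorises
  maxComponents: number;      // bound on surface complexity
  requireCitations: boolean;  // an optional provenance guardrail
}

// Stand-in for a model call: given inferred intent plus constraints, propose
// a layout. A real system would sample; this is deterministic for readability.
function inferLayout(intent: string, c: DesignConstraints): Component[] {
  const proposal: Component[] = intent.includes("compare")
    ? ["summary", "comparisonTable"]
    : ["summary"];
  if (c.requireCitations) proposal.push("citationList");
  return proposal
    .filter((p) => c.allowed.includes(p))
    .slice(0, c.maxComponents);
}

// The designer no longer fixes the page; they bound the space of possible pages.
const layout = inferLayout("compare heat pumps vs gas boilers", {
  allowed: ["summary", "comparisonTable", "citationList"],
  maxComponents: 3,
  requireCitations: true,
});
console.log(layout); // ["summary", "comparisonTable", "citationList"]
```

Note where authorship has moved: the human designs the constraints, and the model designs each instance. That is what "design becomes probabilistic" cashes out to.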
The Rise of the Generative Web
This transformation is not the end of the web, but the beginning of a new layer: a generative interface layer that sits atop the legacy hyperlink web. The original web remains beneath, as a corpus, a substrate. But what most users experience is a synthetic surface—a highly personalised, fluid, and responsive front-end powered by large language models and interface synthesis engines.
The generative web does not replace the old; it reinterprets it. And in doing so, it redefines what it means to access, use, and experience information in the digital age. The page is no longer a destination—it is a moment of synthesis, built in real time, for a specific user, under specific conditions. This is not the web as archive—this is the web as event.
Key Takeaways
The generative web represents a structural shift in how digital knowledge and experiences are created and delivered.
Interfaces are no longer stable destinations, but dynamic, momentary syntheses generated in real time for each user.
Navigation is being replaced by inference-based interaction, where the system anticipates and resolves user needs instantly.
Provenance and context are deprioritised in favour of utility and brevity, challenging traditional norms of attribution and exploration.
This evolution marks a transition from the web as a fixed archive to the web as a responsive, performative event.
1. Dowden, O. (2023, September 21). AI developing too fast for regulators to keep up. The Guardian. https://www.theguardian.com/technology/2023/sep/22/ai-developing-too-fast-for-regulators-to-keep-up-oliver-dowden
2. Roose, K. (2025, March 14). Powerful A.I. Is Coming. We’re Not Ready. The New York Times.
3. Klein, E. (2025, March). The Government Knows AGI Is Coming. The Ezra Klein Show / The New York Times.
4. Dowden, AI developing too fast for regulators to keep up.
5. Bengio, Y. (2025, June 3). Introducing LawZero. https://yoshuabengio.org/2025/06/03/introducing-lawzero/
6. Ynetnews. (2023, September). Is AI moving too fast? Industry leaders set panic threshold. https://www.ynetnews.com/business/article/rkjewmi61x
7. Dowden, AI developing too fast for regulators to keep up.
8. NextBigWhat. (2024). DeepMind CEO: We’re 5 years from AGI, and no one’s ready for what happens next. https://nextbigwhat.com/5-alarming-predictions-about-ai-from-deepmind-ceo-demis-hassabis-that-will-blow-your-mind-and-now-everyone-is-freaking-out/
9. Bengio, Introducing LawZero.
10. PBS NewsHour. (2023, October). As AI rapidly advances, experts debate level of threat. https://www.pbs.org/newshour
This part really has me thinking:
“In the traditional search model, context lived in the click: you found a source, navigated to it, encountered its frame, tone, and authorial position. AI Mode bypasses that entirely. Its answers are not drawn from a visible plurality of sources. They are rendered from the statistical middle of its training data. This is epistemic flattening. Information loses its trail.”
Hmmm … so … Does the absence of clicking dull our critical instincts? No friction, no pause.
There’s something quietly alarming in how effortless the new generative web might become. Clicking is a micro-action of intent — it carries the weight of curiosity, of seeking the frame behind the fact. But if we no longer click, no longer leave the frictionless answer-surface, do we also lose the habit of inquiry itself? The muscle memory of investigation?
The flattening here isn’t just epistemic. It’s cognitive. And we may not even feel the loss; when friction disappears, so does our awareness that it’s gone... 😂