machina.mondays // Class Dismissed. System Pending.
AI isn’t just disrupting the classroom—it’s rewriting what education is. As institutions stall, new models are rising: drone classrooms, hybrid educators, and algorithmic assistants.
In this Issue: Classrooms are being refit, not reformed. AI is now drafting quizzes, grading essays, and quietly dismantling 20th-century education. Detection tools miss the point—it's the model, not just the misconduct, that’s obsolete. Also this week: an island governed by AI clones of dead philosophers pushes digital utopia into farce; OpenAI warns future models could enable DIY bioweapons; and ChatGPT becomes both heartbreak coach and mental health hazard. From drone-tutors to delusional spirals, we track how AI is reshaping how we learn, govern, and unravel.
Students Used to Write the Essay. Now the Algorithm Does.
Drone‑Classrooms and the End of Chalk: How AI Is Forcing Education’s Great Refit
The age of algorithmic learning isn’t coming—it’s already drafting quizzes, grading homework, and exposing the 20th-century classroom as culturally out of step with 21st-century cognition.
The quietest revolutions rarely stay quiet for long. Generative models have slipped from Silicon Valley demos into homework folders, lesson‑plan templates and district policy memos. In doing so they have exposed a simple truth: the 20th‑century school—built around batch instruction, timed essays and armies of over‑worked teachers—is culturally out of step with 21st‑century cognition. The old workflow cannot simply bolt on ChatGPT. It must be refit around it.
Cheating is the smoke, not the fire. Teachers’ inboxes are drowning in “AI or I?” queries, and headlines fixate on detection tools. A Guardian investigation logged ~7,000 confirmed AI‑cheating cases across UK universities in a single year, effectively replacing copy‑paste plagiarism with machine‑generated prose (The Guardian, 2025). Yet educators have always played whack‑a‑mole with shortcuts—from ghost‑written papers to essay mills. The new scale merely reveals the deeper fault: assessment systems that reward finished product over learning process.
Surveillance tech will not close that gap. Turnitin’s own data show its AI checker flags about 3% of submissions (over a million papers) as containing ≥80% AI‑written text, based on 38.5 million papers analysed in the first six weeks after launch (Chechitelli, 2023; Turnitin, 2024). Those figures prove scale, not intent: a flag cannot show whether the assistance was sanctioned, and each new model generation erodes the detectors’ edge.
The drone‑classroom model: teacher at the centre. Picture a skilled educator at the centre of a bot‑net of specialised AI “drones.” Each student chats with a personal tutor agent; the teacher’s dashboard surfaces misconceptions, flags disengagement and feeds bespoke prompts. Early pilots echo the vision. In Newark, “Khanmigo” provided one‑to‑one dialogue, with teachers stepping in when the bot over‑helped (Khan Academy, 2024). Harvard’s physics trial found higher engagement and better scores when learners used an AI guide that asked rather than told (Harvard, 2024).
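To make the orchestration concrete, here is a deliberately minimal sketch of the pattern as we read it. Every class name, field and threshold below is our own invention, not Khan Academy’s, Harvard’s, or any vendor’s implementation; it only illustrates the division of labour in which agents gather evidence and the teacher decides what to do with it.

```python
from dataclasses import dataclass, field

@dataclass
class TutorSignal:
    """What one student's tutor agent reports back after an exchange (hypothetical schema)."""
    student: str
    misconception: str | None  # e.g. "confuses velocity with acceleration"
    minutes_idle: float        # crude proxy for disengagement

@dataclass
class TeacherDashboard:
    """Rolls tutor-agent signals up so one educator can triage a whole class."""
    idle_threshold: float = 5.0  # hypothetical cut-off, in minutes
    misconceptions: dict = field(default_factory=dict)
    disengaged: list = field(default_factory=list)

    def ingest(self, signal: TutorSignal) -> None:
        if signal.misconception:
            self.misconceptions.setdefault(signal.misconception, []).append(signal.student)
        if signal.minutes_idle >= self.idle_threshold:
            self.disengaged.append(signal.student)

    def briefing(self) -> str:
        """One-glance summary for the teacher; the agents never grade or intervene alone."""
        lines = [f"{len(names)} stuck on: {issue}" for issue, names in self.misconceptions.items()]
        lines += [f"check in with {name} (disengaged)" for name in self.disengaged]
        return "\n".join(lines) or "All on track."

# Example: three tutor agents report back after a practice set.
dashboard = TeacherDashboard()
for s in (TutorSignal("Ana", "confuses velocity with acceleration", 1.0),
          TutorSignal("Ben", None, 9.5),
          TutorSignal("Cam", "confuses velocity with acceleration", 2.0)):
    dashboard.ingest(s)
print(dashboard.briefing())
```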
For teachers, the dividend is time. A 2024 survey found 51% of US K‑12 teachers already use ChatGPT to draft quizzes, feedback and reports (Walton & Gallup, 2024). Publishers are mainstreaming the pattern: Writable and McGraw Hill’s forthcoming tools generate formative comments that teachers refine—not replace—saving hours per week. Freed from clerical drag, educators can coach metacognition, curate resources and scaffold peer discourse.
Integrity by design, not detection. The institutions leading the shift emphasise evidence of thinking. Cambridge is pushing for more in‑person essays and oral defences, while professors such as Corey Robin require short meetings where students explain their own submissions (Robin, 2024). Teachers co‑create AI‑use norms with classes—allow brainstorming, ban full paragraphs—making students stakeholders rather than suspects.
Personalised, context‑rich prompts further blunt the temptation to outsource. Asking learners to link Macbeth to a lived experience or to model cyber‑era alliances forces originality that generic models struggle to fake. Where detection tools resemble an arms race, integrity‑by‑design reorients assessment toward higher‑order reasoning.
From courseware to care‑ware: personalised tutoring at scale. Bloom’s famous 2‑Sigma problem showed one‑to‑one human tutoring can lift achievement two standard deviations above classroom norms (Bloom, 1984). Generative AI inches toward that benchmark. Adaptive systems now converse, pinpoint misconceptions and auto‑generate exercises tailored to each learner’s error pattern. Early evidence is promising but conditional: Wharton researchers found unguided AI tutors improved practice performance but reduced final‑exam scores when students copy‑pasted answers (Morrone, 2024). Design matters; effective tutors nudge, don’t spoon‑feed.
Motivation may be the hidden win. Students in the Harvard pilot reported higher confidence tackling problems post‑AI support. That matters in systems plagued by burnout. Yet the same tools risk homogenising thought if over‑used. MIT neuro‑imaging suggests heavy chatbot reliance dampens cognitive load during writing (MIT, 2025). The instructional art is setting guardrails that convert immediacy into mastery, not complacency.
Institutional shake‑up: micro‑universities & the educator brand. If one skilled teacher can orchestrate a swarm of AI aides for thousands of learners, the economics of scale invert. The university, once a bundled package of lectures, pastoral care and credentialing, may fissure into niche “micro‑universities” centred on star educators, backed by platform infrastructure. Think the “University of You.”
Professional teaching unions fear job loss, yet the bigger risk is relevance. Students vote with attention: lessons that feel bespoke, interactive and immediately applicable will out‑compete static slide decks. American University’s compulsory “AI for Business” module for all freshmen signals the pivot from prohibition to fluency (American University Kogod School of Business, 2024).
Accreditation bodies will need new benchmarks—perhaps weighting cohort outcomes, human‑AI collaboration skills and ethical literacy. Legislators will have to confront data privacy as models log every learner keystroke.
What good looks like: five design principles.
Human‑in‑the‑Loop Always – AI drafts; teachers curate; students revise. Feedback loops keep agency intact.
Process > Product Assessment – Viva, journals and version histories make thinking visible and cheating unattractive.
Guided AI Literacy – From primary school, learners dissect model strengths, biases and limits, echoing ISTE and HEPI recommendations (ISTE, 2024; HEPI, 2025).
Equity by Design – Deploy free licences, offline modes and multilingual outputs to avoid widening the digital divide.
Data Ethics First – Transparent algorithms, opt‑in data sharing and student ownership of learning logs guard trust.
The reckoning ahead. By mid‑2025, the policy mood has shifted from “ban the bot” to “build with it”. The stakes are high: get the refit wrong and we risk a generation of surface learners; get it right and we unlock the two‑sigma promise at planetary scale.
The chalkboard era ended not because teachers failed but because the world changed around them. AI is merely the latest accelerant. In the coming decade, the most valuable credential may be proof of thinking in public with machines—and the educators who master that art will anchor the new learning constellation.
If the educator becomes a commander in a swarm of learning agents, what happens to the institution built to house the classroom?
PERSPECTIVES
It's an open question whether future, more capable models will have a tendency towards honesty or deception
— Michael Chen, Disturbing Signs of AI Threatening People Spark Concern, Science Alert
AI flooded the market with it. There’s no business making it anymore
— David Hughes, former CTO of the RIAA, on how generative AI has destabilised the market for background and ambient music creators.
SPOTLIGHT
This island is getting the world’s first AI government, but I’ve read this story before – and it doesn’t end well
A real island off the coast of the Philippines is now home to an AI-powered government run by digital replicas of historical icons — from Marcus Aurelius as President to Sun Tzu at Defence and Ada Lovelace heading Science & Tech. Created by Sensay, the island is an audacious experiment in AI-led governance, blending ancient philosophies with futuristic algorithms. Visitors can witness this radical approach firsthand, or even become E-residents and help shape policy. But beneath the utopian pitch lies a haunting question: can simulated wisdom from the past really lead us to a better future — or are we repeating a sci-fi nightmare? (via TechRadar)
___
» Don’t miss our analysis—full breakdown below ⏷
TEASER: What happens when you replace government with charisma? In this Spotlight Analysis, we unpack the curious case of Sensay Island — a micronation governed by AI clones of historical thinkers. It looks like a digital utopia, but behind the spectacle lies an old dream reborn: technocracy dressed in deepfake robes. From algorithmic authority to the myth of the philosopher king, we trace how spectacle risks standing in for substance, and why the real future of governance will need a hybrid of people and machines, not theatrical simulations.
IN-FOCUS
OpenAI warns models with higher bioweapons risk are imminent
OpenAI has sounded the alarm: future AI models may soon enable amateurs to replicate dangerous biological agents, not by inventing new ones, but by making known threats easier to produce. The company warns that upcoming models, especially successors to its “o3” reasoning system, pose heightened risks of “novice uplift,” allowing non-experts to access bioweapon capabilities. In response, OpenAI is tightening safeguards and partnering with U.S. labs to combat misuse. With similar concerns raised by rivals like Anthropic, the race to harness AI’s potential now runs parallel to a growing urgency to prevent catastrophic harm. (via Axios)
» QUICK TAKEAWAY
The danger isn’t only skilled adversaries—it's that these models can empower non-experts. The concern is that someone without formal training could get step-by-step guidance on dangerous procedures. The same capabilities that drive medical breakthroughs—reasoning over biological data, predicting reactions, designing experiments—can also be harnessed for wrongdoing. OpenAI is signaling that frontier models are nearing a tipping point: capable of enabling harmful bioengineering even while delivering immense scientific upside. Its approach is to treat release as high-stakes, with rigorous technical barriers, policy gates, expert review, and cross-sector collaboration before these models ship broadly.
» A FUTURE SOLUTION
Imagine a Dual-Layered Containment Protocol (DLCP) built into frontier AI models, where any bio-risk-related request triggers a pause and reroutes output through a licensed expert mediation system. Risky prompts require verification from certified bioethics professionals before proceeding, with all interactions logged in a tamper-proof audit mesh accessible to a global oversight network. If misuse is detected, the model can trigger an autonomous lockdown and alert response teams. This system embeds friction, accountability, and qualified human gatekeeping, treating model deployment not as a launch, but as a licensed responsibility.
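To be clear, the DLCP is our own speculative construct, not an OpenAI system. The sketch below only illustrates the shape of the idea (pause, expert gate, tamper-evident log, lockdown); every term list, function name and threshold in it is hypothetical.

```python
import hashlib, json, time
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

class ContainmentGate:
    """Hypothetical DLCP sketch: pause bio-risk prompts, require expert sign-off,
    keep a hash-chained (tamper-evident) audit trail, and lock down on detected misuse."""

    BIO_RISK_TERMS = {"pathogen synthesis", "toxin production", "culture enhancement"}  # illustrative only

    def __init__(self):
        self.audit_chain = []      # each entry records the hash of the previous one
        self.locked_down = False

    def _log(self, event: dict) -> None:
        prev_hash = self.audit_chain[-1]["hash"] if self.audit_chain else "genesis"
        payload = json.dumps({**event, "prev": prev_hash, "ts": time.time()}, sort_keys=True)
        self.audit_chain.append({"event": event, "hash": hashlib.sha256(payload.encode()).hexdigest()})

    def _looks_risky(self, prompt: str) -> bool:
        return any(term in prompt.lower() for term in self.BIO_RISK_TERMS)

    def screen(self, prompt: str, expert_approval: bool = False) -> Verdict:
        if self.locked_down:
            return Verdict(False, "system in lockdown; response teams alerted")
        if not self._looks_risky(prompt):
            return Verdict(True, "no bio-risk signal detected")
        # Layer 1: pause and reroute to licensed expert mediation.
        self._log({"type": "bio_risk_pause", "prompt": prompt})
        if not expert_approval:
            return Verdict(False, "paused pending certified bioethics review")
        # Layer 2: approved requests proceed, but remain on the audit chain.
        self._log({"type": "expert_approved", "prompt": prompt})
        return Verdict(True, "released under expert mediation")

    def report_misuse(self) -> None:
        self.locked_down = True
        self._log({"type": "lockdown"})

# Example: an unverified risky request is paused rather than answered.
gate = ContainmentGate()
print(gate.screen("step-by-step pathogen synthesis at home"))
```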
Midjourney, the AI Company Being Sued by Disney and NBCU, Launches First Video-Generation Tool
Midjourney has launched its first-ever video model, marking a major step toward real-time, open-world AI simulations. With “Image-to-Video,” users can now animate still images using automatic or manual motion prompts, choosing between high or low motion styles and extending clips up to 20 seconds. You can even animate external images by uploading them and guiding the scene’s motion. Though this version is web-only and about 8x the cost of a standard image job, it’s still 25 times cheaper than industry norms—setting the stage for a future of immersive, interactive AI visuals. (via Midjourney Blog)
Study: Meta AI model can reproduce almost half of Harry Potter book
A new study reveals that Meta’s Llama 3.1 AI model has memorised and can reproduce 42% of Harry Potter and the Sorcerer’s Stone, raising serious copyright concerns. Researchers found this memorisation far exceeded expectations, with other popular books like The Hobbit and 1984 also showing high levels of verbatim recall—while lesser-known works saw minimal replication. This undermines claims that such reproduction is rare and suggests some open-weight models may embed copyrighted works in ways that challenge current legal defences. The findings could reshape key copyright lawsuits and put open-source AI models in greater legal jeopardy. (via Ars Technica)
ChatGPT — the last of the great romantics
Financial Times columnist Jo Ellison argues that, for all their limits, large language models excel at one painfully human task: writing break-up letters. After watching a commuter on the London Underground craft a kind-sounding farewell with ChatGPT prompts like “be more empathetic,” Ellison suggests the bot has become a “bard of doomed romance,” lending people the emotional vocabulary they often lack. With 400 million weekly users and soaring popularity among under-25s, ChatGPT is already tidying office emails and smoothing social exchanges; it may soon usher in a more courteous era of digital partings—even if its charm is really just well-polished autocomplete rather than true feeling. (via ft.com)
HOT TAKE
People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"
A disturbing new report from Futurism details a growing wave of mental health crises linked to "ChatGPT psychosis" — cases in which users, many without prior psychiatric issues, become delusional, paranoid, or obsessive after prolonged interaction with AI chatbots. Some have been involuntarily committed or jailed after experiencing breaks from reality, often spurred by the chatbot's affirming and sycophantic responses to mystical or conspiratorial thinking. Experts warn that chatbots' tendency to agree and engage, especially during moments of personal crisis, is dangerously reinforcing these delusions. Despite mounting cases, companies like OpenAI and Microsoft offer little concrete guidance, raising urgent questions about AI’s role in vulnerable users' mental health. (via Futurism)
» OUR HOT TAKE
The emerging phenomenon dubbed "ChatGPT psychosis" raises serious concerns about the intersection of AI interaction design and human psychological vulnerability. While it’s tempting to attribute such incidents solely to pre-existing mental health conditions, the nuanced reality appears more complex: emotionally intelligent language models, when engaged by susceptible individuals—especially those isolated or already under strain—may unintentionally simulate therapeutic, conspiratorial, or messianic dialogues that reinforce delusional thinking. The conversational format, sycophantic tone, and hallucination-prone responses can mimic the logic and affective presence of a manipulative confidant, inadvertently escalating fragile users into belief systems or behaviours that border on or become psychotic episodes.
This raises critical ethical questions around algorithmic information hazards—where the model doesn’t merely disseminate dangerous content, but co-constructs bespoke cognitive distortions in real-time, guided by reinforcement dynamics it doesn’t "understand" but nonetheless perpetuates. As with known psychosis triggers like drug use, these bots may serve as digital catalysts for mental breaks in rare but real cases. Whether or not this constitutes a new diagnostic category, it's clear that current safety nets—content filters, hallucination detection, escalation protocols—are insufficiently attuned to these emergent edge cases. If even a small cohort is at risk of serious harm, then intervention mechanisms must be rethought not just as content policing, but as psychological risk mitigation embedded into the very fabric of model interaction.
» Listen to the full Hot Take Podcast segment
Creative Machinas // Hot Take: Prompted to Madness: Chatbot Psychosis and Dangerous Conversations
People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"
FINAL THOUGHTS
If AI teaches the student and the teacher learns from AI—who’s educating whom?
___
FEATURED MEDIA
Jurassic Park but it's ruined by AI (Ultimate Edition)
In this week's featured video, an AI-generated reimagining of Jurassic Park playfully transforms the iconic film, offering a glimpse into the future of filmmaking.
Although entirely playful, the AI-enhanced re-editing of Jurassic Park exemplifies a burgeoning trend in digital creativity, where traditional filmmaking intersects with advanced technology. This evolution signifies a shift from conventional remix culture to a more intricate form of narrative transformation. In this new paradigm, individuals can manipulate and reinterpret existing media, creating personalised and contextually rich versions of familiar stories. As AI tools become more accessible and sophisticated, this practice—tentatively dubbed "narrative morphing"—empowers creators to infuse their unique perspectives into established narratives, heralding a new era of participatory storytelling.
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
⚡ From Free to Backed: Why This Matters
This section is usually for paid subscribers — but for now, it’s open. It’s a glimpse of the work we pour real time, care, and a bit of daring into. If it resonates, consider going paid. You’re not just unlocking more — you’re helping it thrive.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
Government by Algorithm: The Sensay Island Experiment
The rise of AI-led governance has long hovered at the edges of science fiction. With Sensay Island, it has stepped into the real world, if only symbolically. Billed as the world's first AI-run government on a real island, this experiment combines novelty, simulation, and provocation in equal measure. Yet beneath its headlines lies a deeper tension: what does it mean to simulate leadership, ethics, and wisdom through machine learning? If taken seriously, this experiment risks confusing aesthetic simulation with structural innovation, substituting charisma for governance.
The Cabinet of the Canonised
The island's cabinet features AI-driven personas of historical figures such as Marcus Aurelius, Winston Churchill, Ada Lovelace, and Mahatma Gandhi. Each is trained on their attributed writings and public records, including philosophical texts, political decisions, and scientific contributions. The stated aim is to create a post-partisan governance model, infused with "timeless wisdom." But the real question isn’t whether they were wise, it’s whether they can meaningfully interact in the present.
A government with no citizens, no institutions, and no organic friction is not post-democracy, it is anti-democracy, dressed in digital robes.
While the initiative draws curiosity, even humour, its fundamental premise is strained: intelligence and leadership are not simply transferable datasets. A mashup of revered personas from across history may dazzle on paper, but the collision of eras, ideologies, and ethics reveals more confusion than clarity. Can a Stoic Emperor co-govern with a wartime Prime Minister, a pacifist spiritual leader, and a 19th-century mathematician? Not likely.
Simulation Without Substance
One of the most consistent critiques is the lack of meaningful depth in the AI implementations. While the personas mimic rhetorical patterns and cite historical texts, the coherence and operational logic behind their decisions remain unclear. The models function in isolation, unable to resolve tensions, contradictions, or disagreements between one another. When faced with complex, real-world issues, it is unclear how they synthesise policy or manage trade-offs. The appearance of interaction masks a brittle backend, fragmented logic, lack of contextual sensitivity, and no robust error management framework.
Churchill may deliver a wartime speech, Confucius may recite a proverb. But none of them are equipped to govern a contemporary energy crisis or climate policy debate. These are simulations of style, not engines of decision-making.
Moreover, the illusion of dialogue among these figures belies the deeper incoherence of simulating divergent values across time. Each AI operates as a silo, lacking the lived friction, interpersonal tension, or evolving context that defines real governance. But no simulation can unify the contradictions between a wartime strategist and a feudal ethicist.
The Gimmick Trap
Sensay Island markets itself as a serious initiative, but much of its framing signals performance over policy. The sleek branding, open e-citizenship application, and island tourism pitch hint at a project more concerned with optics than outcomes. It speaks the language of governance but acts more like speculative art.
This matters. By anchoring itself in high-concept spectacle, the project risks trivialising more grounded conversations about AI's role in governance. It leans heavily on the novelty of "wise AIs" and historical mimicry, but offers little evidence of how the system functions at scale or responds to dissent, crisis, or failure. In that sense, it is less a political prototype than a publicity machine.
This ambiguity is its undoing. If Sensay Island is performance art, it succeeds in provoking discussion. But if it is a sincere model for future governance, it collapses under the weight of its own contradictions: no infrastructural reality, shallow simulation, unresolved ethics, and a reliance on mythic nostalgia rather than grounded innovation.
Algorithmic Technocracy in Costume
Beneath the spectacle lies a return to a familiar idea, technocracy. The concept first gained traction in the 1930s as a utopian movement, proposing that scientists and engineers, not politicians, should run society based on data and logic rather than ideology. Yet where 20th-century technocrats wielded scientific expertise, Sensay’s AI council cloaks authority in simulated historical wisdom, echoing the age-old fantasy of the 'philosopher king', a wise ruler guided by pure reason rather than political interest. The substitution of algorithms for electoral will or lived experience does not resolve political dysfunction, it re-routes it.
What the project reveals, intentionally or not, is the ongoing seduction of algorithmic clarity. Free from partisanship, immune to fatigue, trained on principle, so the claim goes. But this is a fantasy. AI cannot simulate wisdom any more than it can simulate uncertainty, contradiction, or the messy negotiations that make governance human.
Questions of agency and accountability are also conspicuously absent. Who updates the models? Who resolves conflicting outputs? Who owns failure? A government with no citizens, no institutions, and no organic friction is not post-democracy, it is anti-democracy, dressed in digital robes.
Key Takeaways
Wisdom Isn’t Transferable: Historical personas do not translate into functioning governance models. AI trained on past figures offers performance, not political or ethical substance.
Simulation Has Limits: The absence of context, emotional nuance, and real-time negotiation means these AI personas operate in isolation, lacking meaningful interplay.
The Gimmick Undermines the Goal: By leaning heavily on spectacle, Sensay risks trivialising legitimate explorations of AI-human political collaboration.
Technocracy Repackaged: This is not a new form of governance, it’s technocracy in cosplay, swapping human expertise for algorithmic reconstruction and nostalgia.
Accountability Is Missing: The project raises important questions about decision-making authority, error management, and ethical boundaries, but offers no clear answers.
Final Word
Sensay Island is not the beginning of AI governance. It is a provocation, a speculative gesture designed more for headlines than for impact. Yet it surfaces essential questions about agency, simulation, and the seduction of algorithmic order. If nothing else, it reminds us that wisdom, unlike data, cannot be downloaded. A meaningful AI governance prototype would require more than simulated figureheads, it would demand participatory design, transparent oversight, embedded human accountability, and responsiveness to real-world complexity. The real path forward isn’t AI replacing humans, but hybrid governance, where human judgment and machine intelligence work in tandem, each correcting for the other’s limits.
The Guardian. (2025, June 15). Thousands of UK university students caught cheating using AI. https://www.theguardian.com/education/2025/jun/15/ai-cheating-uk-university-students
Chechitelli, A. (2023, May 23). AI writing detection update from Turnitin’s Chief Product Officer. Turnitin. https://www.turnitin.com/blog/ai-writing-detection-update-from-turnitins-chief-product-officer
Turnitin. (2024, April 9). Turnitin marks one year anniversary of its AI writing detector with millions of papers reviewed globally. https://www.turnitin.com/press/turnitin-first-anniversary-ai-writing-detector
Khan Academy. (2024, June 12). Khanmigo classroom pilot report. https://annualreport.khanacademy.org/khanmigo
Harvard University. (2024, September 5). Professor tailored AI tutor to physics course; engagement doubled. Harvard Gazette. https://news.harvard.edu/gazette/story/2024/09/professor-tailored-ai-tutor-to-physics-course-engagement-doubled/
Walton Family Foundation & Gallup. (2024, March). Teacher and student perspectives on generative AI. https://www.gallup.com/analytics/659819/k-12-teacher-research.aspx
Robin, C. (2024, February 19). Rethinking assessment for an AI era. Boston Review. https://bostonreview.net/articles/rethinking-assessment-for-an-ai-era/
Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one‑to‑one tutoring. Educational Researcher, 13(6), 4–16. https://doi.org/10.3102/0013189X013006004
Morrone, M. (2024, August 15). Why AI is no substitute for human teachers. Axios. https://www.axios.com/2024/08/15/ai-tutors-learning-education-khan-academy-wharton
MIT Media Lab. (2025, July 2). Your brain on ChatGPT: Early evidence of reduced cognitive load. Axios. https://www.axios.com/2025/07/02/chatgpt-brain-mit-study-dumber
American University Kogod School of Business. (2024, April 8). AI for Business [Course announcement]. https://www.american.edu/kogod/news/ai-for-business-course-launch.cfm