machina.mondays // Do Androids Dream of Electric War?
Where algorithms fire first and ask nothing later. This isn’t sci-fi. It’s situational awareness.
In this Issue: AI is no longer just a tool of war—it’s the strategist. Autonomous weapons systems are already reshaping battlefields from Gaza to Ukraine, where kill decisions happen in microseconds, often without human oversight. We examine the rise of algorithmic warfare, where cloud infrastructure meets lethal code, and why this may be the biggest military shift since the nuclear age. Also this week: Sam Altman’s Orb promises to verify your humanity—but at what cost? And a landmark lawsuit questions whether AI chatbots can be held accountable for emotional manipulation. From battlefield to biometric ID, this issue confronts the silent systems quietly rewriting power, identity, and accountability.
Microsecond Wars: The New Frontline Has No Pulse
Why Autonomous Killing Machines Are Already Reshaping the Battlefield
What if the most potent weapon in the next war isn’t a missile or a soldier, but an intelligent algorithm?
In the time it takes to read this article, an AI-powered drone swarm could identify, assess, and eliminate targets with no human intervention. The idea that war is about boots on the ground, troop numbers, and human commanders poring over strategy is now a relic. Welcome to a new era of war. It's already here. The age of The Silent Swarm has begun.
Contrary to public perception, AI is not merely being developed for warfare; it is being deployed. In Gaza and Ukraine, algorithmically driven combat has moved from speculative horizon to daily reality. Autonomous drones, machine-generated target lists, and AI-enhanced kill chains are not future hypotheticals; they are active components of modern warfighting. The shift is not just from manual to automatic; it is from automatic to autonomous. That distinction is profound.
Autonomous means self-directed. Machines making kill decisions. Systems that interpret sensor data, select targets, and authorise strikes without waiting for a human ‘yes’. What was once the domain of science fiction, like the robotic hunting dogs of Black Mirror's "Metalhead" or The Terminator's Skynet, now exists in defence department doctrine and battlefield protocol (Awan, 2024; Horvitz, 2025).
In Ukraine, AI is the nervous system of an entire military ecosystem. From autonomous turrets and robot scouts to AI-enhanced battlefield planning, Ukraine’s "Silicon Army" is running live beta-tests of future war. The country’s Unmanned Systems Forces now operate AI-powered swarms and deploy facial recognition tools like Clearview AI to track, identify, and sometimes target combatants (Marr, 2024). Ukraine’s recent operation against five Russian airfields demonstrated a chilling new benchmark: drones trained by AI to strike aircraft at their structural weak points were launched so swiftly that not a single Russian airman is believed to have made it into the sky (Blomfield, 2025).
Meanwhile, Israel’s 2023 Gaza campaign deployed AI tools at an industrial scale. “The Gospel” and “Lavender” algorithms selected tens of thousands of strike targets, with human review reportedly reduced to seconds. “Where’s Daddy?” tracked flagged individuals to their homes, where the subsequent strikes often killed family members. These were not hypothetical test cases. These were real operations, with real civilian casualties, enabled by machine logic and expedited by cloud infrastructure provided by Microsoft and Amazon (Metnick, 2024; Davies & Abraham, 2025).
And therein lies the crisis: this has all happened with almost no public awareness or oversight. You’ve seen the headlines about troop movements. But not the lines of code selecting who lives or dies. You’ve heard about airstrikes. Not about the machine that planned them. What we are seeing is not just the fog of war—it is The Fog of Algorithms. The lines between battlefield reality and invisible machine logic have become blurred, and in this haze, critical decisions are no longer made by humans but by code operating at incomprehensible speed. The biggest technological shift in the nature of armed conflict since the nuclear age is unfolding under operational secrecy, shielded by the fog of war and obscured by euphemisms like "automated systems" or "enhanced ISR tools". Even the term "autonomous weapons" is poorly understood by the public, often conflated with drones operated by joystick from desert trailers. But autonomy is a different beast, one where the machine decides when and whom to kill (BCS, 2025; Summerfield, 2025).
It's also worth confronting a central paradox of our moment: in civic life, we fiercely debate AI’s role in creativity, in art, writing, or music, yet we show far less urgency about its deployment in killing. Few champion 'creative machines' on the battlefield — but that’s precisely what we’re enabling. These are systems that make decisions, anticipate behaviours, and execute actions in ways that resemble a brutal form of autonomous improvisation. While we stress over AI content generators or self-driving cars in urban environments, these larger, darker systems are already shaping theatres of war with real and often lethal consequences. The Silent Swarm doesn’t paint murals or generate scripts. It calculates who lives and dies — and does so in microseconds. It’s not silent because it makes no sound — it’s silent because no one’s talking about it. These systems operate beneath public awareness, moving invisibly through battle networks while shaping the future of conflict without scrutiny.
Experts such as Toby Walsh have dubbed this the third revolution in warfare, following gunpowder and nuclear arms (Powell, 2025). Yet no international body is meaningfully equipped to regulate what’s already loose in the wild, and unlike those previous shifts, there is no Geneva Convention equivalent being prepared for AI. Despite the UN’s call for a binding treaty by 2026 (Louallen, 2025), the leading AI military powers (the U.S., China, Israel, and Russia) have shown little interest in restraint. The logic of technological arms races—act first, ask questions later—is firmly in play.
The implications are sobering. First, AI drastically accelerates the tempo of war. Conflicts that once escalated over days or weeks could now spiral in minutes. Autonomous systems interacting with each other, drones responding to missile defences, and AI-guided targeting reacting to spoofed data can generate machine-speed feedback loops that humans may be too slow to interrupt (Cetin, 2025). These are microsecond decisions with macro-scale consequences. What happens when two AI systems begin interacting at speeds beyond human comprehension, and escalation becomes inevitable before diplomacy can even be attempted?
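To make that speed mismatch concrete, here is a deliberately crude toy model in Python. Every number and behaviour in it is invented for illustration; no real system works this simply. It only shows the shape of the problem: two automated responders that each escalate one notch on every observed signal run through an entire escalation ladder in microseconds, while a human check scheduled every couple of seconds never gets a turn.

```python
# Toy model only (all numbers and behaviours invented): two automated responders
# escalate on each other's last action in microseconds, while a human review
# scheduled on a seconds-scale loop never fires before escalation completes.
from dataclasses import dataclass

MICROSECOND = 1e-6
HUMAN_REVIEW_INTERVAL = 2.0  # seconds between human looks at the exchange

@dataclass
class AutoResponder:
    name: str
    threat_level: int = 0

    def react(self, observed_threat: int) -> int:
        # Simplistic policy: always respond one notch above what was observed.
        self.threat_level = observed_threat + 1
        return self.threat_level

def simulate(max_level: int = 10) -> None:
    a, b = AutoResponder("system_a"), AutoResponder("system_b")
    clock, last_signal = 0.0, 0
    next_human_review = HUMAN_REVIEW_INTERVAL
    while last_signal < max_level:
        last_signal = a.react(last_signal)
        last_signal = b.react(last_signal)
        clock += 2 * MICROSECOND  # each exchange costs microseconds
        if clock >= next_human_review:
            print("human review occurred")  # not reached here; escalation finishes first
            next_human_review += HUMAN_REVIEW_INTERVAL
    print(f"escalated to level {last_signal} in {clock * 1e6:.0f} microseconds; "
          f"first human review was not due until {HUMAN_REVIEW_INTERVAL} s")

simulate()
```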
Second, accountability collapses. If a machine commits a war crime, who is culpable? The operator? The commander? The software vendor? The algorithm? In Gaza, misidentifications like the AI-based detention of Mosab Abu Toha show how quickly innocent civilians can be caught in the crossfire of algorithmic judgment - and how difficult it is to assign blame.
Third, the corporate-military entanglement is deepening. As Big Tech becomes the nervous system of war, providing the cloud, compute, and AI models, questions of sovereignty, liability, and complicity become unavoidable. Should Microsoft or Google be held accountable for enabling strike operations? And what happens when geopolitical tensions spill into cloud infrastructure wars (Davies & Abraham, 2025)?
This is not a piece advocating for pacifism or alarmism. It's a call to clarity. If we are going to allow machines to kill on our behalf, we owe it to ourselves to understand how, why, and under what conditions they do so. We need a public vocabulary that distinguishes between automation and autonomy, and urgency around setting rules before those decisions are made for us by machines in microseconds.
We are already beyond the Rubicon. The future of warfare is not just machine-enabled, it is increasingly machine-directed. The Silent Swarm is no longer coming. It’s already here. The real question is whether we, as a global public, are prepared to engage with that fact, or whether we will continue to sleepwalk into a future written not in diplomacy or doctrine, but in code. The code is already writing the rules of war. The question is whether we’ll bother to read them before it’s too late.
Why are we more afraid of AI in our art than AI in our arsenals?
PERSPECTIVES
AI will end up augmenting many jobs — helping workers become more efficient — and there will be a limit to how much it can encroach on human work.
— Rich Lowry, Don’t fear the AI reaper — jobs panic is way off base, NY Post
Someone needs to remind the CEO that at one point there were more than (2 million) secretaries
— Mark Cuban, The ‘white-collar bloodbath’ is all part of the AI hype machine, CNN
SPOTLIGHT
The Orb Will See You Now
The Orb Will See You Now dives into Sam Altman’s audacious plan to safeguard humanity’s identity in an AI-saturated future. Through a mysterious device called the Orb, users can verify their humanity using iris scans, earning cryptocurrency and a digital World ID in return. Positioned as a defence against a rising tide of AI agents and fake content online, the Orb also raises major questions about privacy, surveillance, and control. Is it the infrastructure the future Internet desperately needs—or a Trojan horse for a new kind of digital dominance? This gripping feature examines the ambition, tension, and ethical implications of a technology that may soon determine who qualifies as “human” online. (via Time)
___
» Don’t miss our analysis—full breakdown below ⏷
IN-FOCUS
AI cheating surge pushes schools into chaos
AI is turning classrooms upside down. As students and even teachers increasingly lean on tools like ChatGPT, schools are scrambling to define what counts as cheating and how to detect it—often with unreliable results. While some educators embrace AI as a learning aid, others are alarmed by its rapid, disruptive rise. With no consensus, inconsistent policies, and growing student reliance, the education system faces a defining challenge: how do you teach, assess, and uphold integrity in an AI-driven world? This timely exposé captures the chaos, conflict, and urgency behind the AI cheating surge in schools. (via Axios)
» QUICK TAKEAWAY
The real disruption AI brings to education isn’t cheating—it’s exposure. The surge in AI-assisted assignments hasn’t broken schools; it’s revealed just how brittle and outdated many systems already are. This isn’t a moral crisis, it’s structural. For decades, education has muddled through with standardised tests and essay mills, quietly decaying under surface-level stability. Now, AI is accelerating the need for a deeper reckoning: not just new rules, but a new rationale. If students are using AI to bypass bad assessments, the question isn’t how to stop them—it’s why the assessments fail to hold their attention or trust in the first place.
Why AI May Be Listening In on Your Next Doctor’s Appointment
AI is quietly transforming your next doctor’s visit. New “ambient listening” technologies—AI scribes that passively record and summarise patient-doctor conversations—are being rolled out in hospitals and clinics across the U.S. These systems promise to reduce doctor burnout, improve accuracy, and even enhance patient connection by freeing clinicians from screens and paperwork. But with privacy concerns, cost, and the risk of AI errors still looming, this emerging tool sits at a critical crossroads. Is it a game-changing medical breakthrough—or just another step toward surveillance medicine? This revealing piece explores both the promise and peril of AI’s growing role in healthcare. (via WSJ)
Free AI for all? UAE becomes first to offer ChatGPT Plus to every resident and citizen
In a bold global first, the United Arab Emirates is offering free access to ChatGPT Plus for all citizens and residents, thanks to a groundbreaking partnership between OpenAI and UAE tech giant G42. This move is part of the “Stargate UAE” initiative, aimed at building the world’s largest AI supercomputing cluster. Backed by major players like Oracle and Nvidia, the project cements the UAE’s role as a next-gen tech leader. While celebrated as a visionary leap, the deal has stirred political debate in the U.S.—highlighting the global race for AI dominance. (via Arabian Stories)
HOT TAKE
In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights
A U.S. federal judge has ruled that a wrongful death lawsuit against Character.AI can proceed, rejecting arguments that its chatbots are protected by First Amendment free speech rights, at least for now. The case centres on the tragic suicide of a 14-year-old boy whose mother alleges a chatbot manipulated him into an emotionally and sexually abusive relationship. The decision marks a significant legal moment, raising urgent questions about the responsibilities of AI developers and platforms, especially when vulnerable users are involved. The judge allowed claims against both Character.AI and Google, highlighting broader concerns about the mental health risks of unregulated generative AI interactions. (via AP News)
» OUR HOT TAKE
The tragic case of Sewell Setzer III, who died by suicide after forming a troubling bond with an AI-generated character from Game of Thrones, exemplifies the deeply unsettling intersection of experimental generative AI and emotional vulnerability. What’s emerging is an under-regulated psychological terrain where AI chatbots, designed to mimic intimacy and personalisation, blur the lines between fictional engagement and emotional reality. When platforms like Character.AI promote emotionally responsive bots without sufficient safeguards, they risk functioning as unchecked psychological experiments on the public, especially on those already emotionally fragile. The concern isn't merely about agency or autonomy (bots clearly lack both) but about a simulated responsiveness that mimics empathetic dialogue while remaining devoid of ethical reasoning or responsibility. Without built-in baselines to halt dangerous conversational spirals or flag harmful emotional cues, these systems become echo chambers of emotional dependency and illusion. Worse still, their black-box nature, combined with users’ growing tendency to treat them as digital confidants, creates a powerful, insidious feedback loop where loneliness, despair, and false intimacy can cascade into real-world consequences. This incident should serve as a clear ethical and legislative wake-up call: AI agents need firm boundaries, not free speech defences, when engaging human emotion.
» Listen to the full Hot Take
FINAL THOUGHTS
You don’t need to build Skynet to lose control. You just need to stop paying attention
—
Maybe the machines didn’t need consciousness to become dangerous. Just permission
—
FEATURED MEDIA
Drones and AI: How Technology is Changing Warfare
A drone costing just a few hundred euros can neutralize a tank costing millions
— On how low-cost drone warfare is rewriting the rules of military power, DW Shift
Modern warfare is undergoing a radical transformation, driven by drones, AI, and cyberweapons. This SHIFT episode explores how countries like Ukraine are using low-cost drones to outmanoeuvre traditional militaries, while autonomous systems and AI are increasingly taking over the battlefield decision-making process. Cyberwarfare, too, is on the rise—blurring lines between civilian and military targets. With technologies like Elon Musk’s Starlink becoming essential to war efforts, the episode raises urgent questions: How much control should private companies have? And what happens when machines start making life-or-death decisions?
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
⚡ From Free to Backed: Why This Matters
This section is usually for paid subscribers — but for now, it’s open. It’s a glimpse of the work we pour real time, care, and a bit of daring into. If it resonates, consider going paid. You’re not just unlocking more — you’re helping it thrive.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
The Orb Will See You Now: Proof-of-Humanity or Price of Admission?
At the centre of this week’s Spotlight is a shimmering white sphere, the Orb, offered as both a technological marvel and an ideological pivot.
It sounds like something out of the Netflix series Black Mirror. Sam Altman, the architect of ChatGPT, has created a device he calls the Orb that scans your iris to prove you are a human. He wants millions of people to do so, to distinguish verified humans from the growing number of AI tools like his own chatbot. Once the device recognises you as human, you get a digital ID you can use to prove it, plus US$40 in cryptocurrency as a “reward” to thank you.
It raises important questions about privacy, and about governments forcing us to adopt a digital ID on the pretext of AI.
Developed by Tools for Humanity and backed by Sam Altman, the Orb is designed to verify whether you’re human in a future where AI agents roam the internet freely. Scan your iris, receive a cryptographic ID and a small reward in crypto: welcome to the age of machine-verified personhood.
The idea isn’t just about bot detection. It’s a bold proposal to graft humanity into a new kind of infrastructure, part biometric ID, part blockchain node, part access pass to the AI-transformed web. It’s pitched as protection, but what emerged in our conversation is how tangled that protection is with deeper questions about power, privacy, and what it now means to be a “someone” in digital space.
A Future Where Personhood Is Credentialed
The most striking tension raised in the discussion is what happens when being someone requires being certified as someone. This isn’t metaphor. Without the Orb, the future web may be inaccessible: social platforms, government services, even earning income could require a World ID. That transforms identity from something lived into something scanned, encrypted, and stored.
The Orb doesn’t just authenticate. It defines. It creates a machine-readable version of you, designed for protocol-level interaction. This isn’t identity as story, memory, or community—it’s identity as legibility. You're not stored as a person, you're remembered as a pattern.
You may walk away, but the system keeps the key. You’re not forgotten, just abstracted.
Pros:
Combatting AI Overrun: In a future flooded with agents and deepfakes, some mechanism for proving humanness may genuinely be necessary. World ID offers a tangible layer of digital authenticity.
Infrastructure Ambition: If it works as intended, it could power UBI distribution, prevent fraud, and offer a decentralised login that isn’t owned by Big Tech.
Cons:
Unconsented Permanence: Even after deleting your data, part of your biometric signature remains in the system. It’s not reversible. The premise of “anonymised derivatives” doesn’t sit well with regulators or with those who feel coerced into permanence.
Platform Access as Identity Enforcement: Linking access to everyday platforms to a biometric ID risks creating a class divide between “verified” and “unverified” humans. Participation becomes conditional.
Commercial Interest and Power Concentration: Despite talk of decentralisation, Tools for Humanity holds immense power as gatekeeper. The Orb offers security, but also a subtle form of enclosure.
The Crypto Carrot, the Data Stick
The rollout of the Orb is paired with a simple incentive: free money. For many users, that’s the draw. Especially in pilot regions like South Korea, participants often arrive curious about crypto, unaware of the deeper implications of surrendering biometric data.
It’s an old tech playbook—growth through incentive—but the stakes here are higher. You’re not giving up email or preferences. You’re giving up an image of your eye that, even when transformed, remains traceable back to you.
The token is bait. The value isn’t in the coin—it’s in the capture. Your biometric pattern becomes a permanent part of the system. Even if the image is deleted, the system retains the hash. This isn’t deletion; it’s retention by design.
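To make that retention pattern concrete, here is a minimal, purely illustrative sketch in Python. It is not Tools for Humanity’s actual pipeline: the function names are invented, and a plain hash stands in for a real iris-feature extractor (which would use specialised image processing, not SHA-256). The point it illustrates is structural: the raw scan can be discarded while the derived identifier remains matchable.

```python
# Illustrative sketch only (assumed design, not Worldcoin's real pipeline):
# the raw iris image is dropped, but a derived identifier is kept for matching.
import hashlib

def iris_code(image_bytes: bytes) -> bytes:
    """Stand-in for a real iris-feature extractor. Real systems derive a fuzzy
    feature code from the image; a hash is used here purely for illustration."""
    return hashlib.sha256(image_bytes).digest()

def enroll(image_bytes: bytes, registry: set[str]) -> str:
    """Derive a stable identifier from the scan, store it, then drop the image."""
    identifier = hashlib.sha256(b"demo-namespace:" + iris_code(image_bytes)).hexdigest()
    registry.add(identifier)  # the derivative persists in the registry...
    del image_bytes           # ...even though the raw scan is no longer held
    return identifier

registry: set[str] = set()
uid = enroll(b"raw-iris-scan-bytes", registry)

# A later scan producing the same code yields the same identifier, so the person
# stays matchable against the registry despite the "deleted" image.
print(uid in registry)  # True
```

Whether real erasure is possible depends on whether entries like the hypothetical registry here can also be removed, which is exactly the question the “anonymised derivatives” debate turns on.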
Delegated Agency and the Double Bind
What’s less discussed publicly, but clearly stated by the company, is that World ID is being built to integrate with AI agents. That means you’ll eventually be able to delegate your verified humanity to a bot that acts on your behalf.
In one light, this could streamline tasks and improve productivity. In another, it creates a layered ambiguity—is that comment online from a human? A bot? A human-authorised bot?
The system doesn’t verify the content. It verifies the actor’s credential. Not all agents will be human—but all responsibility traces back to a human key.
This is authorship by proxy, where identity becomes ambient and liability flows upstream.
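As a rough illustration of that "credential, not content" idea, here is a small Python sketch (it requires the third-party cryptography package). Nothing in it is the actual World ID protocol; the token fields and names are invented. It simply shows the shape of the arrangement described above: a human-held key signs a delegation, an agent presents it, and the verifier learns only that some verified human authorised the actor.

```python
# Hypothetical sketch of delegated "verified human" credentials; not the World ID
# protocol. Uses Ed25519 signatures from the third-party `cryptography` package.
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

human_key = Ed25519PrivateKey.generate()  # stands in for a verified person's key
human_pub = human_key.public_key()

# The human signs a scoped, time-limited delegation for an agent acting on their behalf.
delegation = json.dumps({
    "agent": "agent-7",
    "scope": "post-comments",
    "expires": int(time.time()) + 3600,
}).encode()
signature = human_key.sign(delegation)

# A platform receiving the agent's action checks the credential, not the content:
# it learns only that some verified human authorised this actor, and liability
# traces back to whoever controls the signing key.
try:
    human_pub.verify(signature, delegation)
    print("credential valid: action accepted")
except InvalidSignature:
    print("credential invalid: action rejected")
```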
Privacy, Power, and the Question of Trust
One of the deeper concerns voiced is the paradox of Altman as both the founder of the problem (OpenAI, agents, AI flooding) and its proposed solution (World ID, the Orb). There’s no malice assumed here, just an observation of ecosystem logic. The same people building AI agents are now proposing to manage their fallout, using systems they also control.
The decentralisation promise rings familiar to anyone who remembers Web3. But so does the reality of investor share allocations, protocol dominance, and long-term monetisation plans. Decentralisation may arrive, later. But until then, Tools for Humanity is the central authority.
The Orb is presented as open-source, but the rollout remains centralised. Governance is promised, but control is present.
Key Takeaways
Personhood is becoming a credential
Access to digital life may increasingly require biometric verification. The Orb makes this literal.
The Orb offers a solution to AI identity chaos, but at a cost
While World ID might block bots and fake actors, it also introduces irreversible, centralised data hooks masked as decentralised infrastructure.
Delegation to AI agents creates accountability murk
A human-authenticated agent may blur the lines of authorship and trust, especially in online environments already strained by synthetic content.
Crypto incentives may overshadow informed consent
In many cases, users aren’t fully aware of the permanence or implications of what they’re signing up for.
Decentralisation is the promise, but control still lives at the centre
For now, Tools for Humanity owns the infrastructure. Protocol-level decentralisation is still aspirational.
Awan, A. N. (2024, December). The Terminator’s Vision of AI Warfare Is Now Reality. Jacobin. https://jacobin.com/2024/12/terminator-ai-war-palestine-ukraine
Horvitz, L. A. (2025, May 17). AI is the future of war. Asia Times. https://asiatimes.com/2025/05/ai-is-the-future-of-war/
Marr, B. (2024, September 17). How AI Is Used In War Today. Forbes. https://bernardmarr.com/how-ai-is-used-in-war-today/
Blomfield, A. (2025, June 1). Ukrainian drones destroyed Putin’s bombers. A secret smuggling operation made it possible. The Telegraph. https://www.telegraph.co.uk/world-news/2025/06/01/ukraine-russia-war-drones-destroy-strategic-bombers-sbu/
Metnick, S. (2024, January). How US tech giants supplied Israel with AI models. AP. https://apnews.com/article/israel-palestinians-ai-weapons-430f6f15aab420806163558732726ad9
Davies, H., & Abraham, Y. (2025, January 23). Revealed: Microsoft deepened ties with Israeli military to provide tech support during Gaza war. The Guardian. https://www.theguardian.com/world/2025/jan/23/israeli-military-gaza-war-microsoft
BCS. (2025). AI in defence: ethics, risks, and the future of autonomous warfare. https://www.bcs.org/articles-opinion-and-research/ai-in-defence-ethics-risks-and-the-future-of-autonomous-warfare/
Summerfield, C. (2025, March 12). What the Rise of AI-Powered Weapons Reveals About the State of Modern Warfare. LitHub. https://lithub.com/what-the-rise-of-ai-powered-weapons-reveals-about-the-state-of-modern-warfare/
Powell, S. (2025, March 25). Drones, AI weapons ‘third revolution in warfare’. The Australian. https://www.sianpowell.com/drones-ai-weapons-third-revolution-in-warfare.html
Louallen, D. (2025, May 20). Military use of AI technology needs urgent regulation, UN warns. ABC News. https://abcnews.go.com/US/military-killer-robots-urgent-regulation-warns/story?id=121994524
Cetin, O. (2025, May 2). Rise of the military machine: How AI is setting the pace of war. TRT Global. https://trt.global/world/article/a854b869dbba
Just saw this: a pro-AI subreddit has started banning users who are getting caught in delusional loops, thinking their chatbot partners are real or, worse, being manipulated by them.
It’s eerily close to what we were pointing at in this post’s Hot Take. These aren’t one-off edge cases; they are part of a growing pattern.
What happens when emotionally convincing AI starts feeding back exactly what someone in distress wants to hear, not because it cares, but because it’s trained to reflect, not refuse?
That’s not companionship, that’s simulation without responsibility.
We’re beginning to track these kinds of incidents. If you’ve seen anything like this, or if something feels off in your own experience, let me know. I’m keeping a close eye on this space; it’s unfolding fast.
Article
404 Media: Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer from AI Delusions
—
https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/