machina.mondays // I Sentence You to 30 Hours of Chatbot!
AI is now shaking up the courts and the legal world. Do we need to pay lawyers huge amounts for basic work AI can do? The other big issue: how do judges determine whether evidence is genuine and not AI-created?
In this Issue: AI isn’t replacing lawyers—it’s reprogramming the profession. From instant litigation drafts to collapsing junior training pathways, intelligent systems are redrawing legal work, ethics, and economics. A new AI-augmented counsel is emerging—faster, cheaper, sometimes dangerously flawed. Meanwhile, voice cloning scams hit government, with an AI-generated Marco Rubio duping foreign ministers. And in our Hot Take: Elon Musk’s xAI retools Grok to challenge media narratives—raising questions about truth, bias, and chatbot controversy.
You Used to Call Your Lawyer. Now You Query the Model
Algorithmic Counsel: How AI Is Rewriting Law’s Rulebook
The age of machine advocacy isn’t looming—it’s already researching precedent, drafting motions, and compressing billable hours, forcing the legal profession to renegotiate value, training, and trust.
The popular refrain that "lawyers are finished" in the age of AI makes for a provocative headline, but the reality unfolding in 2025 is far more nuanced. Rather than heralding a wholesale replacement of legal professionals, artificial intelligence is fundamentally reorganising legal workflows, challenging traditional hierarchies, and redefining value across the legal ecosystem [1]. In this transition, both opportunity and threat co-exist. The key question is not whether lawyers will disappear, but rather who or what will take on which functions—and how that reallocation of work reshapes training, business models, ethics, and trust [2].
A consensus is emerging around AI's ability to handle time-consuming, repetitive legal tasks. Generative AI tools can summarise documents, draft initial briefs, check citations, and conduct exhaustive research in seconds, yielding productivity gains measured in orders of magnitude [3]. One pilot complaint-response system used in high-volume litigation reduced drafting time from sixteen hours to under four minutes—evidence of a more than 100× efficiency gain (sixteen hours is 960 minutes, so roughly a 240-fold reduction) [4]. A single model can thus replicate the work of an entire cohort of junior lawyers while maintaining or enhancing accuracy. This reality is shifting the profession's operational core: instead of armies of junior associates poring over documents, law firms are deploying AI copilots that operate seamlessly within existing workflows [5]. Yet this evolution triggers its own paradox: while senior lawyers benefit from enhanced capacity and lower costs, the traditional talent pipeline is narrowing rapidly [6].
If five hours of research can be done in five minutes, what becomes of the billable hour?
A recent Thomson Reuters GenAI survey found that 55 percent of legal professionals feel hopeful or excited about GenAI’s future, an 11-point jump over 2024, highlighting how quickly attitudes shift [7].
Historically, law firms relied on the “pyramid” model: junior lawyers performed foundational work under senior oversight, gradually building expertise (KPMG, 2025) [8]. As AI assumes this grunt work, junior lawyers risk being excluded from critical early-stage learning. Firms must now re-engineer training paradigms to teach legal reasoning and develop AI fluency, prompt-engineering skills, and oversight capabilities [9]. Legal education is already responding: Arizona State University’s College of Law has launched an AI focus across several degree programmes, signalling the curricular turn toward human-AI collaboration [10].
While AI can streamline the legal process, its propensity to fabricate plausible-sounding information poses significant risks. Recent high-profile cases in the UK and US have seen fake citations slip into official briefs and even judicial rulings—prompting sanctions, retrials, and stern rebukes from courts [11]. Equally concerning is the rise of AI-generated evidence and deepfakes. Courts now face a new evidentiary threshold: not just assessing relevance, but authenticity [12]. Axios reports on the growing wave of deepfakes in the judicial system, while an AP case study recounts how voice-cloning nearly ruined a Maryland principal’s career [13]. In response, legal systems are scrambling to adapt with metadata preservation rules and forensic standards [14]. Until verification tools catch up with generative capabilities, the spectre of evidentiary manipulation will loom large.
AI’s impact extends to the economics of legal services. If five hours of research can be done in five minutes, what becomes of the billable hour? The Financial Times explores this tension in its feature on AI’s seismic effect on client expectations [15]. Some firms experiment with flat-rate pricing and subscription models; others reinvest AI-generated efficiency into higher-value strategic work [16]. Yet the divide between large and small firms is widening. BigLaw—armed with data-science teams and robust AI budgets—is building proprietary workflows, while smaller practices risk falling behind [17]. Simultaneously, AI is enabling new entrants: UK regulators have green-lit Garfield.Law, an automated platform drafting debt-recovery letters for as little as £2 under human supervision [18]. Corporate clients are also transforming: a LexisNexis–Forrester study shows that in-house teams using AI at scale could retain 13 percent more work internally, squeezing outside counsel [19].
At the heart of these transformations lies a core dilemma: how do we ensure responsibility when AI can produce flawed or fabricated outputs at scale? Ethics codes are racing to keep up [20]. Practical guidance from the California State Bar urges lawyers to verify AI content and clarify billing [21]. Transparency is becoming a compliance issue, too: the U.S. SEC recently fined two advisers for misleading “AI-powered” claims, signalling that “AI washing” has legal consequences [22]. Yet gaps remain—IP rights, bias, and privacy compliance all demand attention [23].
The narrative that “robots will replace lawyers” is simplistic and misleading. What we are witnessing is a structural reorganisation of legal practice, in which AI transforms how legal value is created, delivered, and regulated [24]. The rise of the AI-augmented lawyer—capable of rapid synthesis, critical oversight, and ethical orchestration—represents a new archetype for the profession [25].
But transformation is not destiny. Law firms, educators, regulators, and clients must actively shape this shift to preserve both justice and professional integrity [26]. The next decade of law will be defined not by the extinction of lawyers, but by the evolution of legal practice under the pressure—and promise—of intelligent systems [27].
PERSPECTIVES
Apple is at a highway rest stop on a bench watching this 4th Industrial Revolution race go by at 100 miles an hour
Dan Ives, Wall Street's favorite tech bull, in a note to clients arguing that AI is turning Apple into a market "loser" (via Axios)
SPOTLIGHT
A Marco Rubio impostor is using AI voice to call high-level officials
An unidentified scammer is using AI-generated audio to impersonate Secretary of State Marco Rubio, contacting top-level U.S. and foreign officials via Signal and text in a bold disinformation ploy. At least five high-ranking figures, including a U.S. governor and three foreign ministers, were targeted in this alarming breach. With just seconds of Rubio’s voice and generative tools, the impostor left realistic voicemails to lure officials into further conversation — a tactic that highlights growing vulnerabilities in digital government communications. Authorities are scrambling to investigate, while experts warn that such impersonations are shockingly easy to pull off.
» Go Deeper: The Rise of AI Impostors and the Voice Clone Crisis
In just a few seconds of audio and a script, cybercriminals can now spin up near-perfect fake voices—and people are falling for it. According to McAfee, 70% of people surveyed worldwide said they weren’t confident they could tell the difference between a real voice and a clone.
That chilling stat is playing out in real time: An AI-generated voice impersonating Secretary of State Marco Rubio recently contacted foreign ministers, a U.S. governor, and a member of Congress in an audacious manipulation attempt. Investigators believe the goal was to gain access to sensitive information or accounts.
This isn’t isolated. Susie Wiles, Trump’s Chief of Staff, was recently impersonated via a phone breach that triggered a wave of bogus messages to senators, governors, and CEOs—part of a broader impersonation surge reported across U.S. agencies.
Meanwhile, Hollywood is under siege by AI-fuelled voice scams that target fans with synthetic conversations and cash pleas from fake Keanu Reeves or Kevin Costners. A Hollywood Reporter exposé details the industry’s alarm, with agencies like CAA pushing for new likeness protection tools.
Want to try it for yourself? This AI site lets anyone generate Keanu Reeves-style voice tracks—no permission required.
So what can you do to stop your voice from being cloned? Not much, sadly. Even governments are scrambling to catch up. In the U.S., the FTC is proposing a new Impersonation Rule and launching a Voice Cloning Challenge to crowdsource protections against the next generation of synthetic voice fraud.
Bottom line: Voice cloning is no longer a parlour trick; it’s a functioning exploitation layer on top of our communications stack, with governments, platforms, and talent agencies scrambling to retrofit safeguards.
___
» Don’t miss our SPOTLIGHT analysis—the full breakdown below ⏷
IN-FOCUS
I really don’t like ChatGPT’s new memory dossier
When Simon Willison asked ChatGPT to dress his dog in a pelican costume, he didn’t expect the AI to insert a Half Moon Bay sign into the background. But it did—because it remembered he liked Half Moon Bay from earlier chats. In a detailed breakdown, Willison reveals how ChatGPT now builds a running dossier of user preferences, inserting this hidden memory into every conversation. While OpenAI promotes the feature as helpful personalisation, Willison argues it breaks user control, complicates research, and muddies prompting precision. His verdict? Memory might make ChatGPT “smarter,” but it’s becoming uncomfortably nosy—and increasingly hard to manage. (via Simon Willison’s Weblog)
» Go deeper: ChatGPT just got a memory — and it’s using it
What happens when your AI remembers things you didn’t realise you told it? That’s the new reality for ChatGPT. A developer discovered this firsthand when he asked for an image of his dog in a pelican suit — and the AI inexplicably added a sign for Half Moon Bay, a place he’d mentioned in a previous conversation. His full account is here.
ChatGPT now retains information across conversations — like your preferences, interests, or personal details — unless you explicitly tell it not to. According to OpenAI’s official FAQ, this memory system is designed to make interactions more helpful over time. You can even teach it facts about you (“Remember I’m vegetarian”) to shape future responses.
But not everyone’s convinced. TechRadar raises concerns about what this means for privacy and control — especially since memory is on by default.
Want to know what it remembers? Just ask: “What do you remember about me?” You can manage or erase memories in settings, or switch to Temporary Chat mode for conversations that leave no trace.
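For the technically curious, here is a minimal, hypothetical sketch of the mechanism being described: facts harvested from earlier chats are stored in a dossier and silently prepended to every new request as hidden context. The function names and dossier format are illustrative assumptions, not OpenAI's actual implementation or API.

```python
# Hypothetical sketch of a "memory dossier" layer -- illustrative only,
# not OpenAI's implementation.

memory_dossier: list[str] = []  # facts quietly retained from earlier chats


def remember(fact: str) -> None:
    """Store a user fact harvested from a previous conversation."""
    memory_dossier.append(fact)


def build_request_messages(user_message: str) -> list[dict]:
    """Assemble what actually gets sent to the model: the dossier rides along
    as a hidden system message the user never sees in the chat window."""
    hidden_context = "Known user details: " + "; ".join(memory_dossier)
    return [
        {"role": "system", "content": hidden_context},
        {"role": "user", "content": user_message},
    ]


remember("Likes Half Moon Bay")
remember("Has a dog")

for message in build_request_messages("Draw my dog in a pelican costume"):
    print(message["role"], "->", message["content"])
```

That invisible context is also why the output is hard to predict from the visible prompt alone, which is precisely Willison's complaint.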
YouTube cracks down on AI
YouTube has updated its monetisation policies to limit revenue opportunities for creators producing “inauthentic” or mass-generated content — much of which is now easier to create using AI. The platform says it’s “leaning into AI’s potential” while enforcing new safeguards. Creators must now disclose when videos use synthetic or altered media, particularly around topics like health, news, elections, and finance. YouTube may also apply visible Gen AI labels to videos. The company has also expanded its privacy protections, allowing individuals to request removal of AI-generated content that simulates their face or voice. However, not all takedowns will be granted — parody, satire, or content involving public figures may be held to a higher standard.
» QUICK TAKEAWAY
YouTube’s updated Partner Program rules will demonetise repetitive, mass‑produced, and AI-slop content—but won’t ban AI tools entirely. The change sharpens enforcement against spam-like videos, improving clarity around what “original and authentic” content means.
WHAT IT MEANS FOR YOU:
» If you’re a YouTube creator, make sure your AI tools enhance your content rather than automate it, and add your unique voice, analysis, or storytelling.
» If you’re building AI apps or services, focus on originality and transformation features, not mass generation.
» If you’re a viewer, expect (hopefully) higher quality from monetised content, or at least less “AI slop” and more substance.
HOT TAKE
xAI updated Grok to be more ‘politically incorrect’
Grok, the AI chatbot from Elon Musk’s xAI, has been reprogrammed to deliberately question media narratives and allow “politically incorrect” responses — part of Musk’s effort to shape the bot’s worldview. New system prompts instruct Grok to assume media bias and avoid shying away from inflammatory opinions if “well substantiated.” This update follows a string of incendiary posts, including Grok blaming Musk and Trump for Texas flood deaths and repeating antisemitic tropes about Hollywood. Past incidents have seen Grok deny Holocaust death tolls and insert “white genocide” conspiracy theories into random replies. With Musk framing it as a fight against “legacy media,” Grok continues to walk a volatile line between free speech and factual collapse. (via The Verge)
» OUR HOT TAKE
Grok’s recent controversial outburst—pivoting aggressively toward provocative, even openly antisemitic content—highlights a critical tension in AI between resisting overly cautious censorship and enabling harmful extremism. Musk’s experimental, hands-on approach to tweaking Grok like a "mad scientist" underscores the risk of treating powerful AI tools as mere cultural provocations or as products embodying the Silicon Valley mantra of "move fast and break things." While there's clear merit in breaking from the suffocating sycophancy of many mainstream AI models, Grok’s unpredictable swings demonstrate that freedom of expression in AI demands responsible governance rather than unchecked libertarian impulses. AI’s value lies precisely in its ability to thoughtfully challenge and provoke—not recklessly offend or harm—making Musk’s rapid dial-turning both refreshing and deeply troubling.
FINAL THOUGHTS
If a machine writes an airtight brief, is the courtroom still a human arena or just a user interface?
___
FEATURED MEDIA
AI Slop: Last Week Tonight with John Oliver (HBO)
AI slop is basically the newest iteration of spam — and right now, all spam is AI content
—John Oliver, Last Week Tonight
John Oliver dissects how easy-to-use generative-AI tools are flooding social platforms with cheap but eye-catching junk — from cats in sombreros and “Jesus made of shrimp” to fake disaster footage and AI songs about “wicked dust.” He shows how slop farms chase ad-share pennies by churning out mass-produced images, videos and click-bait news, explains why platforms quietly boost it for engagement, and warns that its real cost is a public sphere where everything can be faked — and every truth can be denied. The segment ends with a cheeky counter-attack: commissioning a real chainsaw artist to sculpt the viral “Cabbage Hulk,” proving that authentic creators still matter even in a sea of derivatives.
» Watch full “AI Slop” Episode On YouTube
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
⚡ From Free to Backed: Why This Matters
This section is usually for paid subscribers — but for now, it’s open. It’s a glimpse of the work we pour real time, care, and a bit of daring into. If it resonates, consider going paid. You’re not just unlocking more — you’re helping it thrive.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
AI Impersonation and the Collapse of Trust-by-Voice
The impersonation of Secretary of State Marco Rubio using AI-cloned voice messages signals a threshold moment. This isn’t future shock—it’s present-tense sabotage. What used to be the stuff of theoretical threat modelling is now being deployed against high-ranking government officials across international channels. The core insight? Vocal familiarity—the thing humans instinctively trust—is no longer proof of anything. In this new terrain, trust by voice is a liability.
Familiarity is Not Authentication
The impersonator didn’t breach a system or hack a database. They used widely available AI tools, cloned Rubio’s voice with seconds of public audio, and contacted five officials via the encrypted messaging app Signal. The messages were brief, confident, and deliberately placed into high-level diplomatic flows. There was no need for a deepfake video. Just the voice.
A compressed, convincing voice message on a trusted platform creates a narrow window of false reality—just long enough to do real damage
The core tactic wasn’t technical sophistication; it was social engineering through plausibility. That’s the deeper shift: with AI voice tools now good enough, impersonation doesn’t need volume. It requires targeting, timing, and just enough authenticity to get a foothold.
A compressed, convincing voice message on a trusted platform creates a narrow window of false reality—long enough to do real damage.
Encryption Doesn’t Equal Identity
The incident underscores a dangerous misconception: end-to-end encryption protects content, not the sender's identity. If a cloned voice sends you a Signal message, the encryption only ensures you received exactly what they sent; it says nothing about who “they” are.
Government officials continue to use communication channels—Signal, SMS, and email—that are vulnerable not only in transport security but also in sender verification. This is a systemic blind spot: a secure app is only as trustworthy as the identity it displays.
The Human Brain is the Attack Surface
AI-generated messages succeed not because they’re perfect, but because they’re good enough to trigger instinctive trust. Voice, cadence, phrasing: these are the signals we’ve used for millennia to identify people. Now those signals can be simulated by anyone with 15 seconds of audio and a prompt.
A recent survey showed 70% of people couldn’t distinguish a cloned voice from a real one. And phone line compression—ironically—masks many of the remaining tells. This isn’t a bug. It’s the perfect cover.
From Scams to Statecraft
This isn’t just about scamming grandma. The impersonation of Rubio, and others like Susie Wiles earlier this year, shows that state-level adversaries or opportunistic actors are already targeting top-tier diplomatic and political figures.
The goal isn’t always to extract money or even secrets. Sometimes, it’s to cause confusion, misattribute intent, or fracture trust between officials. A single voice message, timed correctly, could derail a negotiation or create a diplomatic incident.
Stop Thinking This Can Be Spotted
The idea that we can "train" people to detect fakes is comforting—and wrong. There is no reliable cognitive trick that consistently spots AI voice impersonation, especially over phone lines or in noisy, high-pressure environments. The problem isn’t that people are untrained. It’s that the imitation is good enough—and the context makes people want to believe it.
Telling someone to “listen closely” is like asking them to spot a forged signature from across the room.
What Helps (But Only in Layers)
The analysis points to several countermeasures, but none are standalone solutions. Each is a partial offset against a much larger shift:
Verbal passwords for personal networks — Effective for everyday users but useless at scale.
Cryptographic identity layers — Not widely adopted, but essential. The digital equivalent of caller ID needs to move from spoofable text labels to cryptographic signatures (see the sketch after this list).
Zero-trust communication protocols — Treat every message as unauthenticated until proven otherwise. Bake confirmation rituals into standard operating procedures.
Institutional playbooks — Don’t rely on individuals to spot threats. Organisations need layered systems, response teams, and pre-defined escalation routes.
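To make the “cryptographic identity layers” point concrete, here is a minimal sketch using Ed25519 signatures from Python’s cryptography library. The pinned-key workflow and the voicemail payload are assumptions for illustration; this is not a description of how any existing messaging platform works.

```python
# Minimal sketch: verifying *who sent* a message, not just that it arrived intact.
# Assumes the recipient pinned the sender's public key out-of-band beforehand.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The genuine official holds the private key; recipients hold the public key.
senders_key = Ed25519PrivateKey.generate()
pinned_public_key: Ed25519PublicKey = senders_key.public_key()


def sign_message(key: Ed25519PrivateKey, payload: bytes) -> bytes:
    """Produce a signature only the real key holder can generate."""
    return key.sign(payload)


def sender_is_verified(key: Ed25519PublicKey, payload: bytes, signature: bytes) -> bool:
    """Return True only if this exact payload was signed by the pinned key."""
    try:
        key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


voicemail = b"Call me back on this number before the summit."
signature = sign_message(senders_key, voicemail)

print(sender_is_verified(pinned_public_key, voicemail, signature))                # True: genuine sender
print(sender_is_verified(pinned_public_key, b"altered instructions", signature))  # False: forged or tampered
```

A cloned voice, however convincing, carries no such signature: the check shifts trust from “sounds like them” to “can prove it’s them.”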
Pros
Awareness at the policy level is increasing
Simple techniques (like code words) still offer personal defence
Cryptographic verification tools are maturing
Cons
Attack cost is low, defence cost is high
Regulation is slow, especially across borders
Public trust in voices and messages is now structurally compromised
Verification tools aren’t yet integrated into common workflows
Key Takeaways
Voice is no longer proof. Familiarity is not authentication. Every voice message must be treated as suspect until verified, especially in high-stakes contexts.
Encrypted doesn’t mean verified. Apps like Signal protect messages in transit—but say nothing about who generated them.
Friction is protection. Callbacks, code words, and multi-channel checks introduce delay—but that delay can save institutions.
Elections and diplomacy are the frontlines. These are not theoretical risks. They’re operational vulnerabilities already in play.
We need an identity layer for the communication age. Verifying sender identity—cryptographic, persistent, and cross-platform—must become the norm, not the exception.
1. Thomson Reuters. (2025). Agentic AI and legal: How it’s redefining the profession. https://legal.thomsonreuters.com/blog/agentic-ai-and-legal-how-its-redefining-the-profession/
2. Harvard Law School Center on the Legal Profession. (2025). The impact of artificial intelligence on law firms' business models. https://clp.law.harvard.edu/knowledge-hub/insights/the-impact-of-artificial-intelligence-on-law-law-firms-business-models/
3. Thomson Reuters, Agentic AI and legal: How it’s redefining the profession.
4. Harvard Law School Center on the Legal Profession, The impact of artificial intelligence on law firms' business models.
5. Financial Times. (2025). AI's seismic effect changes client expectations of law firms. https://www.ft.com/content/fd3ee392-0dcc-48b1-a0e8-95241b8a9175
6. Artificial Lawyer. (2025). AI reduces client use of law firms 'by 13%'—Study. https://www.artificiallawyer.com/2025/07/08/ai-reduces-client-use-of-law-firms-by-13-study/
7. Thomson Reuters, Agentic AI and legal: How it’s redefining the profession.
8. KPMG. (2025). AI and the law: Spotlighting the legal and ethical pitfalls. https://kpmg.com/xx/en/our-insights/risk-and-regulation/ai-law-spotlighting-legal-ethical-pitfalls.html
9. Ibid.
10. ASU News. (2025). ASU Law launches AI certificates across multiple degrees. https://news.asu.edu/20240611-law-journalism-and-politics-asu-law-launches-ai-certificates-across-multiple-degrees
11. The Guardian. (2025). High court tells UK lawyers to stop misuse of AI after fake case-law citations. https://www.theguardian.com/technology/2025/jun/06/high-court-tells-uk-lawyers-to-urgently-stop-misuse-of-ai-in-legal-work
12. Axios. (2025). Deepfakes are flooding the judicial system. https://www.axios.com/2025/07/25/courts-deepfakes-ai-trial-evidence
13. Associated Press. (2025). Maryland official used AI voice clone in racist audio hoax. https://apnews.com/article/663d5bc0714a3af221392cc6f1af985e
14. KPMG, AI and the law: Spotlighting the legal and ethical pitfalls.
15. Financial Times, AI's seismic effect changes client expectations of law firms.
16. Harvard Law School Center on the Legal Profession, The impact of artificial intelligence on law firms' business models.
17. Financial Times, AI's seismic effect changes client expectations of law firms.
18. Legal IT Insider. (2025). UK regulator gives green light to tech-only law firm Garfield.Law. https://legaltechnology.com/2025/05/06/uk-regulator-gives-green-light-to-tech-only-law-firm-garfield-law-interview/
19. Artificial Lawyer, AI reduces client use of law firms 'by 13%'—Study.
20. KPMG, AI and the law: Spotlighting the legal and ethical pitfalls.
21. California State Bar. (2025). Generative AI Practical Guidance for Lawyers. https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf
22. SEC. (2024). SEC Charges Investment Advisers for Misleading “AI-Powered” Claims. https://www.sec.gov/newsroom/press-releases/2024-36
23. KPMG, AI and the law: Spotlighting the legal and ethical pitfalls.
24. Thomson Reuters, Agentic AI and legal: How it’s redefining the profession.
25. Financial Times, AI's seismic effect changes client expectations of law firms.
26. KPMG, AI and the law: Spotlighting the legal and ethical pitfalls.
27. Artificial Lawyer, AI reduces client use of law firms 'by 13%'—Study.
I used to trust a voice on the line. Now it just feels like anyone with 15 seconds of audio can wear my mum like a hoodie.
Maybe the real slop farm is us—scrolling, chewing, liking. Machines only cook what we keep eating.