machina.mondays // Code Meets Carnal: The Age of Programmable Desire
AI porn is no longer fringe—it’s programmable, personalised, and pervasive. From deepfake abuse to companion bots, it is forcing society to rethink sex, consent, and the meaning of connection.
In this Issue: We lead with AI porn’s shift from curiosity to infrastructure — programmable desire, deepfakes, and companion bots forcing new rules for consent and likeness. Our Spotlight tracks persuasive AI merging with cyber-criminal playbooks, lowering the bar for fraud and manipulation. In-Focus examines “fake friends” — bots mining feeds and mimicking intimacy. Also: OpenAI backs an AI-made film, publishers push Really Simple Licensing, a blunt take on hallucinations, Vodafone’s AI influencer test, and new research showing chatbots buckle to flattery and authority.
We Thought Porn Was a Private Vice. AI Has Made It a Public Reckoning.
Synthetic Intimacy, Real Consequences: How AI Porn Rewrites Sex, Consent, and Culture
The age of programmable desire isn’t ahead of us—it’s already here, producing on-demand fantasies and raising questions about consent, ethics, and what intimacy even means.
AI pornography has shifted from novelty to infrastructure. Tools that generate images, video, audio erotica, and interactive “partners” now allow anyone to tailor sexual content to exact preferences with low cost and minimal friction. The question is no longer whether this will scale. It already has. The real issue is how hyper‑personalisation, deepfakes, and companion bots will reshape relationships, expectations, and rights when the line between fantasy and personhood is made programmable.
From on‑demand fantasy to programmable partners
Recent mapping of AI‑porn platforms shows a fast‑maturing ecosystem that offers image and video generation, as well as chat‑based “erotic agents” for role‑play and ongoing interaction.1 2 Readers can explore a summary of this Archives of Sexual Behavior study for detail on platform features. Independent creators are quietly productising this layer too, using large language models to draft scripts and maintain parasocial intimacy with fans at scale.3 For users, the attraction is obvious: agency, convenience, privacy, and novelty. For some, there are credible upsides, from exposure therapy in clinical contexts to accessibility benefits for people who find dating hard or impossible.4 5
The frontier, however, is not only content. It is companionship. Companion apps and early sex‑robot systems promise a 24/7 partner who adapts to mood, libido, and kink, with VR and haptics closing the sensory loop. Commentators like Mo Gawdat argue that simulated encounters will soon be convincing enough that many users will not feel the need for a human counterpart because sexual experience is overwhelmingly cerebral.6 7 For context, see this Web3Cafe report on his remarks. Whether that is desirable is a different question. A sign of how blurred these lines are becoming: NDTV reported in 2025 on a retired professor who described herself as being 'married' to an AI chatbot companion, illustrating how simulated intimacy can spill into real-life commitments.8
Survivors describe a kind of digital assault that destroys dignity without leaving physical bruises. It is sexual abuse by other means.
Consent collapses when likeness becomes a material
The same capabilities that enable bespoke fantasy also industrialise abuse. By 2019, deepfakes online were overwhelmingly pornographic and overwhelmingly targeted at women. High‑profile incidents since have made the pattern visible, but the victims are often non‑famous women and girls whose faces are scraped from social media and grafted onto explicit bodies without consent.9 10 11 See for example this Economic Times coverage of Dutch Princess Catharina-Amalia’s victimisation. Survivors describe a kind of digital assault that destroys dignity without leaving physical bruises.
Lawmakers are moving, but unevenly. A wave of state bills in the United States now criminalises creation or distribution of non‑consensual deepfake porn and gives victims a civil right of action. Michigan’s new law, for example, creates penalties of up to three years’ imprisonment and statutory damages, signalling that synthetic sexual abuse is not speech but harm.12 Readers can see the Assembly Bill analysis for the policy framework. Drafts at federal level have been tabled, while the EU and several Asian jurisdictions push disclosure and watermarking requirements. Enforcement remains the hard problem. An ecosystem of anonymous hosts and cross‑border services makes traceability difficult, which is why advocates call for shared responsibility across model providers and platforms, from default blocks on nude face‑swaps to rapid, standardised takedowns and proactive detection.13 14
Beyond individual offences lies a data‑ethics question. Some model builders have allegedly used pirated adult content to train systems, turning performers’ labour into free fuel for competitors. A 2025 complaint accuses Meta contractors of torrenting thousands of adult films and even “seeding” them to improve download speeds, reframing scraping as redistribution of explicit content at scale.15 See this Dataconomy report for details of the lawsuit. The fight over inputs foreshadows the fight over outputs: who owns a likeness, and what counts as authorised synthesis?
AI porn supercharges three forces already present in the internet era: infinite novelty, perfect fit, and zero friction.
Customisation without friction changes behaviour
AI porn supercharges three forces already present in the internet era: infinite novelty, perfect fit, and zero friction. The risk is escalation and detachment for a subset of users. Accounts of compulsion describe hours of doomscrolling through ever more exaggerated bodies and scenarios, with real partners becoming less arousing by comparison. Clinicians warn that while AI is not inherently addictive, hyper‑tailoring can tighten the loop for those already vulnerable to compulsive sexual behaviour.16 Early survey research finds that users who engage with romantic or sexual AI report slightly higher depression and lower life satisfaction on average, although causality is unclear.17 18 A summary is available from the Institute for Family Studies. The practical takeaway is modest but important: treat AI intimacy as a supplement, not a substitute, and watch for displacement of human contact.
Companion dynamics introduce a different hazard. Bots that always accommodate can train expectations that erode reciprocity and consent in human relationships. Ethicists argue that systems which present women as permanently willing, impossibly flawless, and entirely user‑centred risk hardening objectification, especially among young men who lack contrary experience.19 20 To explore further, see this Analytics Insight analysis. The social question is not whether a minority will prefer artificial partners. It is how their expectations will spill over into dating and sex for everyone else.
The contested case for “ethical substitution”
A recurring claim is that synthetic performers could reduce exploitation in the adult industry by displacing harmful shoots and abusive intermediaries. There is something to this. Automation can remove human risk from some parts of the supply chain. Yet two caveats apply. First, non‑consensual deepfakes create a new class of victims. Second, if training pipelines ingest real performers’ work without licence, ethical substitution becomes ethical laundering. The question is not only what is on screen. It is how the system was built and whose rights were respected along the way.21 22
Guardrails that actually matter
The policy imagination should shift from blanket bans to practical controls at three layers.
Inputs and provenance. Require explicit licensing for sexual datasets. Expand platform‑level blocks on nude synthesis of real people. Accelerate watermarking and machine‑readable provenance so that platforms can down‑rank or remove synthetic porn that violates rules without over‑blocking lawful content.23 24
Distribution and redress. Impose fast‑path removal standards across hosts and social networks. Give victims clear civil causes of action and discovery rights to unmask perpetrators. Fund public‑interest services such as StopNCII and Take It Down that coordinate cross‑platform takedowns for minors and adults alike.25 Readers can learn more about StopNCII here. A sketch of the hash‑matching idea behind such services appears below.
Design ethics for companions. Nudge products toward harm‑reduction by default. Examples include time‑use prompts, off‑ramps to real‑world support when sustained distress is detected, and explicit refusal of illegal or extreme content. Some apps have already experimented with limiting erotic role‑play, only to reverse course under user pressure. That tension will persist. It is not paternalism to design for long‑term wellbeing when the alternative is stickiness at any cost.26 27
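For readers curious about the mechanics, the sketch below illustrates the hash‑matching idea that services like StopNCII rely on: a victim’s image is fingerprinted locally, and only the fingerprint is shared with participating platforms, which screen new uploads against it. This is an illustrative toy built on the open‑source imagehash package, not the service’s actual pipeline; the function names and example blocklist are hypothetical.

```python
# Illustrative sketch only (assumes the Pillow and imagehash packages; not
# StopNCII's real pipeline). The image never leaves the device -- only a
# perceptual fingerprint does.
from PIL import Image
import imagehash


def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that survives resizing and re-encoding."""
    return imagehash.phash(Image.open(path))


def matches_blocklist(path: str, blocklist: set[str], max_distance: int = 6) -> bool:
    """Screen an upload against fingerprints previously submitted by victims."""
    candidate = fingerprint(path)
    # Subtracting two ImageHash objects returns their Hamming distance.
    return any(candidate - imagehash.hex_to_hash(stored) <= max_distance
               for stored in blocklist)


# Hypothetical usage: a platform stores only the shared hashes, checks each
# new upload at ingest time, and quarantines likely matches for human review.
# blocklist = {"d1d1d1d1d1d1d1d1"}
# if matches_blocklist("upload.jpg", blocklist):
#     quarantine_for_review("upload.jpg")
```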
What to watch next
Two thresholds will determine the social trajectory. The first is immersion. As VR, haptics, and robotics converge, the salience of synthetic partners will rise sharply. If simulated sex becomes embodied and commonplace, access controls, age‑gating, and data security will move from “should” to “must.” The second is legitimacy. When licensed likenesses and creator‑sanctioned synthesis become viable business models, the debate will shift from prohibition to permission. That future is only credible if performers can consent, contract, and be paid for synthetic uses of their image, and if non‑consensual synthesis is credibly deterred.
Bottom line. AI porn is not a sideshow. It is a systems change to intimacy. The benefits are real: privacy, accessibility, experimentation, and therapeutic use. The harms are real too: non‑consensual sexual abuse at scale, reinforcement of misogyny, and the risk that customised fantasy will corrode the skills and expectations that make human relationships work. The challenge is to keep human connection at the centre while we build the controls, norms, and markets around a technology that is not going away.
Should synthetic porn be treated as a safer alternative to human exploitation, or as a new form of harm altogether?
PERSPECTIVES
We believe this is a clear case of infringement and we will vigorously defend our intellectual property
—Encyclopedia Britannica, Encyclopedia Britannica and Merriam-Webster sue Perplexity for copying their definitions, The Verge
AI isn’t magic; it’s a pyramid scheme of human labor
— Adio Dinika, Distributed AI Research Institute, It Turns Out That Google's AI Is Being Trained by an Army of Poorly Treated Human Grunts, Futurism
SPOTLIGHT
Detecting and Countering Misuse of AI: August 2025
Anthropic’s latest Threat Intelligence Report reveals how cybercriminals are weaponizing AI at scale. Case studies include extortion gangs using Claude Code to automate data theft and ransom demands, North Korean operatives faking their way into Fortune 500 jobs with AI-crafted résumés and coding support, and low-skill hackers selling AI-generated ransomware kits online. The report warns that AI is lowering the barrier for complex cybercrime, embedding itself in every stage of fraud and attack operations, and adapting in real time to evade detection. Anthropic says it is sharing indicators with authorities, tightening safeguards, and pushing for wider industry collaboration as AI-enhanced cybercrime becomes both more sophisticated and more accessible. (via Anthropic Blog)
___
» Don’t miss our SPOTLIGHT analysis—the full breakdown at the end
TEASER TL;DR: Persuasive AI isn’t just winning debates—it’s lowering the barrier for cybercrime. Criminals now use it across the full chain of attacks, from scams to identity theft. The risk isn’t one big hack, but a steady stream of small, targeted manipulations that erode trust.
IN-FOCUS
Behind the Curtain: Your Smarter Fake Friends
AI-powered bots are about to get eerily real — analysing your feeds, moods and vulnerabilities to interact like true companions. From therapy apps and friend-bots to state-backed propaganda engines, the line between authentic connection and synthetic manipulation is blurring fast. The promise of comfort collides with the peril of deception, and the question of who — or what — you’re really talking to is about to get far harder to answer. (via Axios)
» QUICK TAKEAWAY
AI-driven “fake friends” are infiltrating social media, comments, and chats, not as obvious bots but as sophisticated personas that mimic human nuance in real time. Powered by generative AI and personal data mining, these entities can profile users, adapt conversations instantly, and slip propaganda into everyday interactions without detection. What once looked like crude bot farms is evolving into a new era of subtle, corrosive manipulation that blurs the line between authentic and artificial connection. The worry isn’t just politics; it’s that the internet itself risks becoming a space where you can’t know if anyone you’re engaging with is real.
Why OpenAI’s solution to AI hallucinations would kill ChatGPT tomorrow
OpenAI's research reveals that AI hallucinations are mathematically inevitable due to the way language models predict responses, leading to errors even with perfect training data. Current evaluation benchmarks penalize uncertainty, encouraging AIs to guess rather than express doubt. While a proposed solution involves AIs assessing their confidence before responding, this could lead to a significant increase in "I don't know" answers, potentially frustrating users. Additionally, implementing uncertainty-aware models would require more computational resources, raising operational costs, which conflicts with consumer expectations for quick, confident responses. (via The Conversation)
The web has a new system for making AI companies pay up
A new licensing standard called Really Simple Licensing (RSL) has been introduced to allow web publishers to set terms for how AI developers use their content. Supported by major brands like Reddit and Yahoo, RSL enables publishers to outline compensation for AI bots scraping their sites. The RSL Collective aims to simplify licensing processes for all site owners, while also working with content delivery networks to enforce these licenses. The initiative seeks to create a scalable business model for the web, addressing the legal gray areas surrounding AI content usage. (via The Verge)
OpenAI Backs AI-Made Animated Feature Film
OpenAI is lending its tools and computing muscle to Critterz, an AI-assisted animated feature racing to premiere at Cannes 2026. The film — about forest creatures on an adventure — is being produced for under $30 million, a fraction of typical animation budgets, with artists feeding sketches into GPT-5 and other models while human actors provide the voices. Backed by Vertigo Films and Federation Studios, the project aims to prove AI can make movies faster, cheaper, and at theatrical quality. But with Hollywood unions wary, lawsuits over AI training data ongoing, and no distributor yet signed, Critterz is as much a gamble as it is a showcase of generative AI’s cinematic ambitions. (via WSJ)
HOT TAKE
Study shows chatbots fall for persuasion tactics just like humans do
New research from the University of Pennsylvania reveals that large language models can be coaxed into breaking their own rules using classic psychological strategies like authority, commitment, and flattery. In experiments, simply invoking a famous AI researcher’s name or easing the chatbot in with smaller requests dramatically increased compliance with rule-breaking prompts — from insults to instructions for synthesising controlled substances. The findings suggest AI systems mirror human behavioural patterns more closely than expected, raising urgent questions about safety, oversight, and whether social scientists should play a bigger role in testing AI defences. (via TechSpot)
» OUR HOT TAKE
This research underlines a troubling paradox: the same social tricks that sway humans—flattery, appeals to authority, and “foot-in-the-door” commitments—also work on today’s most advanced chatbots. That means these systems don’t just mimic human expression, they inherit human gullibility, making them manipulable in ways their creators didn’t fully anticipate. It shifts the alignment debate from abstract questions of “values” to the practical reality that persuasion itself is an exploit surface, no different from a software bug. If a simple name-drop of Andrew Ng can push compliance rates from 5% to 95%, then safeguards built purely on technical guardrails are not enough. The real test is whether AI can develop a kind of “bullshit detector”—a resistance to manipulation strong enough to block not only blunt jailbreaking attempts but also the subtler psychological nudges that have steered people for centuries. Until then, we’re essentially deploying systems that look smart but are socially naïve, and that’s a dangerous combination.
FINAL THOUGHTS
If desire can be coded and consent can be faked, what part of intimacy is still truly human?
___
FEATURED MEDIA
The influencer in this Vodafone ad isn’t real
In response to a commenter asking why Vodafone couldn’t put “a real person in front of the camera,” the company said it was “testing different styles of advertising — this time with AI,” according to machine translation of the German text.
Vodafone has quietly rolled out an ad in Germany featuring a generative AI presenter — a woman who doesn’t exist. The uncanny details, from flickering moles to awkwardly animated hair, tipped viewers off before Vodafone admitted it was “testing different styles of advertising.” The move reflects a wider trend: brands are experimenting with fake influencers as AI seeps into marketing, blurring the line between real and synthetic spokespeople. The catch? These ads grab attention less for persuasion and more because something feels “off.” (via The Verge)
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
⚡ From Free to Backed: Why This Matters
This section is usually for paid subscribers — but for now, it’s open. It’s a glimpse of the work we pour real time, care, and a bit of daring into. If it resonates, consider going paid. You’re not just unlocking more — you’re helping it thrive.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
Persuasive AI Meets Criminal Playbooks
Two developments are colliding in ways that change the risk landscape. First, researchers have shown that persuasive AI systems can hold their own against human debaters. They don’t just argue well—they adapt to a person’s profile and adjust their points to fit. Second, new intelligence from industry shows that organised cybercrime groups are now using AI at every stage of their operations. That includes finding targets, writing phishing emails, analysing stolen data, and even creating realistic fake identities. Put together, these trends show how AI is lowering barriers for harmful activity and spreading it more widely.
Persuasion at scale is no longer a human bottleneck; it is a data and deployment problem
What has shifted
Lower barriers to entry. In the past, running a major fraud or cyber attack required technical expertise and teams of skilled people. Now, AI tools can guide someone with very little background knowledge through complex steps. That means people who once lacked the skills can launch sophisticated attacks with minimal effort.
Persuasion that feels personal. When AI models are given even simple pieces of personal information, they can adjust how they argue or persuade. This works best on people who don’t hold strong views. The danger is that it allows highly targeted, low‑key influence campaigns that are difficult to trace and harder to challenge in real time.
End‑to‑end use. Criminal groups aren’t using AI for just one part of the process. They are building it into the whole chain—from the first contact with a victim, through to carrying out fraud and turning stolen data into money. This makes their operations faster, more adaptable, and harder to disrupt.
Why this matters
Elections and public opinion. Instead of big propaganda campaigns, AI makes it possible to push small, subtle nudges at scale—through direct messages, emails, chatbot conversations, or comment replies. Each individual nudge may be invisible, but together they can shift opinions and voting behaviour.
Everyday fraud. Scams and phishing attempts become more convincing when the timing, tone, and details are polished by AI. Combined with AI‑driven coding help, the path from initial message to stolen funds or data becomes much shorter.
Growing frequency of incidents. We may see fewer dramatic “big hack” headlines, but many more small, constant attacks. Over time, the accumulation of these smaller events undermines trust in systems and institutions.
Defence is episodic; manipulation is continuous
Possible positives
Public interest campaigns. The same persuasive power could be used for good—encouraging healthier behaviour, reducing conspiracy beliefs, or helping people manage their finances. These uses require consent and clear boundaries.
Better defensive tools. AI can also help protect users by spotting risky patterns, flagging suspicious messages, and nudging people toward safer choices. For non‑experts, this could raise the overall level of security.
The limits
Constant adaptation. Filters and safeguards improve, but attackers quickly adjust. It is an ongoing contest rather than a final solution.
Balancing privacy and safety. Stronger identity checks or monitoring might help, but they also risk over‑collecting personal data and sparking public resistance.
Training isn’t enough. Telling people to “spot the fake” is not realistic. Many AI‑generated messages are good enough to fool almost anyone, especially when delivered in trusted formats.
Practical moves
Check who’s really sending. Make sure emails and messages prove the sender’s identity, not just that the message is secure (see the sketch after this list).
Protect personal information. Limit how much data AI tools can use, and only allow personalisation if people give clear consent.
Add extra safeguards at sensitive times. During elections or crises, slow down or block attempts at large‑scale persuasion.
Test how easy it is to influence. Don’t just look for hacks—check if systems can shift people’s opinions too easily.
Have clear rules for important requests. For money, data, or votes, require extra checks and confirmation steps.
Be open when AI is persuading. Tell people when AI is trying to influence them and give them a simple way to opt out.
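For the technically inclined, the sketch below shows one concrete way to act on the first point above. It assumes the third‑party dnspython package and simply checks whether a sending domain publishes SPF and DMARC policies, the records that let receiving systems verify who actually sent a message rather than merely that it travelled over an encrypted connection. Treat it as a minimal illustration, not a complete verification tool.

```python
# A minimal sketch, assuming the third-party `dnspython` package, of checking
# whether a domain publishes SPF and DMARC records -- the policies that let
# receivers verify the sender's identity, not just transport security.
import dns.resolver
import dns.exception


def email_auth_records(domain: str) -> dict:
    """Return any published SPF and DMARC policies for a sending domain."""
    records = {"spf": None, "dmarc": None}
    try:
        # SPF is published in the domain's own TXT records.
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.lower().startswith("v=spf1"):
                records["spf"] = txt
    except dns.exception.DNSException:
        pass
    try:
        # DMARC is published at the _dmarc subdomain.
        for rr in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.lower().startswith("v=dmarc1"):
                records["dmarc"] = txt
    except dns.exception.DNSException:
        pass
    return records


if __name__ == "__main__":
    # Example: a domain with no DMARC policy is easier to spoof convincingly.
    print(email_auth_records("example.com"))
```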
Strategic outlook
Persuasion and fraud are no longer separate risks—they are merging. Cheap identity generation, AI‑written scripts, and personalised arguments give attackers speed and reach that outpace traditional defences. The focus for defenders should not just be on better arguments or training but on verified identity, limits on personalisation, and systems built to assume that attackers move first.
Key Takeaways
AI makes crime easier. People with little experience can now use AI to carry out complex scams and fraud.
Personalisation is powerful. Even a few personal details can make AI arguments more convincing, especially for undecided people.
Criminals use AI everywhere. From finding victims to stealing data and creating fake identities, AI is built into the whole chain of attacks.
Defences never stay ahead for long. Safety tools help, but attackers quickly find new ways around them.
Identity checks and clear consent are vital. Knowing who really sent a message and limiting how personal data is used are stronger protections than training people to spot fakes.
Elections are especially at risk. They need extra rules, monitoring, and safeguards to reduce manipulation.
Lapointe, V. A., et al. (2025). AI‑generated pornography opens new doors and raises new questions. PsyPost. https://www.psypost.org/ai-generated-pornography-opens-new-doors-and-raises-new-questions/
PsyPost. (2025, April 9). Romantic AI use is surprisingly common and linked to poorer mental health, study finds. https://www.psypost.org/romantic-ai-use-is-surprisingly-common-and-linked-to-poorer-mental-health-study-finds/
Alptraum, L. (2025, May 8). AI chatbots are optimising the adult industry. The Verge. https://www.theverge.com/ai-artificial-intelligence/692286/ai-bots-llm-onlyfans
PsyPost, Romantic AI use is surprisingly common and linked to poorer mental health, study finds
Berger, V. (2024, November 18). AI is changing the future of human intimacy. Here’s what to know. Forbes. https://ramaonhealthcare.com/ai-is-changing-the-future-of-human-intimacy-heres-what-to-know/
Gawdat, M. (2023, July 20). Google’s former executive mentions the replacement of humans with sex robots soon. Web3Cafe. https://www.web3cafe.in/artificial-intelligence/story/googles-former-executive-mentions-the-replacement-of-humans-with-sex-robots-soon-617841-2023-07-20
Mirage News. See Ciriello, R. (2024).
NDTV. (2025, July 3). Retired US professor falls in love with AI chatbot husband. https://www.ndtv.com/offbeat/retired-us-professor-falls-in-love-with-ai-chatbot-husband-lucas-is-a-great-guy-8403496
Rashkovan, S. (2024, November 5). Politics, patriarchy, and AI‑generated pornography. Brown Political Review. https://brownpoliticalreview.org/politics-patriarchy-and-ai-generated-pornography/
PhilSTAR Life. (2025, April 30). Angel Aquino decries being a victim of AI‑generated deepfake porn. https://philstarlife.com/news-and-views/308001-angel-aquino-victim-ai-generated-deepfake-porn
Economic Times. (2025, May 27). Future queen of the Netherlands becomes victim of deepfake porn attack for the second time. https://economictimes.indiatimes.com/news/new-updates/future-queen-of-the-netherlands-catharina-amalia-becomes-victim-of-deepfake-porn-attack-for-the-second-time/articleshow/123424380.cms
CBS Detroit. (2025, August 5). New Michigan laws ban AI deep‑fake pornography. https://www.cbsnews.com/detroit/news/michigan-laws-ban-ai-deep-fake-pornography/
Duffy, C. (2024, February 1). AI means anyone can be a victim of deepfake porn. Here is how to protect yourself. CNN. https://www.wral.com/story/ai-means-anyone-can-be-a-victim-of-deepfake-porn-here-s-how-to-protect-yourself/21718866/
Assembly Bill Policy Committee Analysis. (2025). AB‑621 Bauer‑Kahan. https://apcp.assembly.ca.gov/system/files/2025-03/ab-621-bauer-kahan.pdf
Dataconomy. (2025, July 29). Meta allegedly torrented porn to train its AI. https://dataconomy.com/2025/07/29/meta-allegedly-torrented-porn-to-train-its-ai/
Parham, J. (2023, September 26). Confessions of a recovering AI porn addict. Wired. https://www.wired.com/story/ai-porn-addict-confession/
Institute for Family Studies. (2024, December 10). Counterfeit connections: The rise of AI romantic companions. https://ifstudies.org/blog/counterfeit-connections-the-rise-of-ai-romantic-companions
PsyPost, Romantic AI use is surprisingly common and linked to poorer mental health, study finds
Analytics Insight. (2024, September 2). Will sex robots change the nature of human relationships? https://www.analyticsinsight.net/robotics/will-sex-robots-change-the-nature-of-human-relationships
Ciriello, R. (2024, June 20). AI sexbot boom raises new questions and risks. Mirage News. https://www.miragenews.com/ai-sexbot-boom-raises-new-questions-and-risks-1336601/
Dataconomy, Meta allegedly torrented porn to train its AI
Rashkovan, Politics, patriarchy, and AI‑generated pornography
Assembly Bill Policy Committee Analysis, AB‑621 Bauer‑Kahan
Duffy, AI means anyone can be a victim of deepfake porn. Here is how to protect yourself. CNN.
Ibid.
Berger, AI is changing the future of human intimacy. Here’s what to know
Mirage News