monday.machinas // Baby, AI Can Drive My Car ... and Maybe I'll Love You
Tesla’s Robotaxis have just launched in Texas. Some of us would love to have someone else cope with traffic congestion and bad drivers. Are you ready for the first move toward having robots drive your life?
In this Issue: Tesla’s robotaxi launch in Austin is more than a rollout—it’s a test of trust in autonomous machines acting alone. Our lead story asks: are we ready to share control? In Spotlight, Expedia’s AI pivot hints at the end of browsing, as holidays become summoned, not searched. This week’s Hot Take calls out LinkedIn’s slide into AI-generated blandness. Plus: AI stalking from a single photo, Duolingo’s AI shake-up, Getty’s UK lawsuit, and why Anthropic killed its AI-written blog.
You Used to Be the Driver. Now You're Just the Cargo
Machine at the Wheel: Tesla’s Robotaxi and the Threshold of Trust
The age of ambient robotics isn’t coming—it’s parked at the kerb. Tesla’s robotaxi debut isn’t about transport. It’s about trust. These cars think, choose, and react in the real world, introducing a radical shift in how we define accountability, safety, and even agency.
In June 2025, a fleet of Tesla's long-promised robotaxis finally began navigating the streets of Austin, Texas. Years behind Elon Musk's original 2020 projection, the launch represents a technological milestone and a cultural fault line: are we ready to entrust our daily movement to autonomous machines?
The debut arrives amid rising expectations and sharper scrutiny, underpinned by a broader societal tension between enthusiasm for technological convenience and unease about handing over control.
Tesla's rollout is modest in scope—10 to 12 cars operating within a tightly geofenced area—but outsized in symbolism. These vehicles are not just another app-powered mobility option; they are the first autonomous robots given agency in open public environments, tasked with real-time decision-making involving human lives and social infrastructure. Unlike Waymo, which has logged over 10 million paid rides across multiple cities using a sensor-rich, lidar-dependent system, Tesla relies solely on cameras and neural network inference to drive. This vision-only approach is bolder and potentially cheaper, but also more controversial due to its reliance on software without redundant safety systems like radar or lidar (The Verge, 2025; Reuters, 2025)1 2.
Public reaction has been predictably polarised. Enthusiasts laud the smoothness of Tesla's rides and point to the convenience of having a sober, alert, and emotionless machine behind the wheel at all times (Vox, 2025)3. For critics, however, each glitch—from phantom braking to confused traffic behaviour—symbolises premature deployment. These concerns gained momentum after The Dawn Project's viral Austin demo showing a Tesla robotaxi failing to stop for a school bus and striking child-sized mannequins (FOX 7 Austin, 2025)4.
This trust gap is the central tension. Studies show that self-driving cars, like Waymo's, are already dramatically safer than human drivers. A 2025 peer-reviewed analysis of 56.7 million autonomous miles found that Waymo vehicles had 92% fewer pedestrian/cyclist collisions and 85% fewer serious injuries than the human baseline (Vox, 2025)5. Yet, autonomous systems are still perceived as uniquely risky. While humans accept daily traffic fatalities as the cost of mobility, even one robotaxi accident becomes a headline, a lawsuit, a legislative trigger.
Part of this stems from how we narrate technology. The robotaxi is not just a car; it is an active agent in our environment—an intelligent machine that navigates, interprets, and acts in real-time without human oversight. This is a profound shift in the nature of automation. These cars do not just follow rules; they make judgment calls, reacting to unpredictable events on human terms. The psychological shift involves ceding control to something non-human that we nevertheless expect to be flawless.
This is a cultural reckoning with machine autonomy. It marks the first meaningful public test of autonomous robotic systems integrated directly into everyday human environments. For the first time, machines equipped with decision-making capabilities operate independently in public spaces, interacting with people, vehicles, and dynamic conditions without direct human oversight. These vehicles do not simply assist; they act. They navigate, respond, and make moment-to-moment judgments in the real world—on our roads, in our cities. Trust in such systems cannot be secured through technical performance alone. It will require a broader societal shift, where cultural adaptation, clear governance, and shared standards evolve alongside the machines we invite into our lives.
Currently, the regulatory ecosystem is uneven. Texas has served as a permissive testing ground due to laws that once required little oversight. But starting September 1, 2025, new state-level regulations will require robotaxi operators to meet stringent safety standards and offer transparency around accident reporting, black-box data collection, and emergency response protocols (KVUE, 2025; TPR, 2025)6 7. California and New York, conversely, have already moved toward tighter control following high-profile AV mishaps, including the shutdown of GM's Cruise after a pedestrian dragging incident (AP, 2025)8.
Trust, then, is not just about passengers. It is about ecosystems: first responders, traffic officers, regulators, and the broader civic framework that underpins public life. When autonomous vehicles fail to respond to hand gestures or emergency diversions, as seen in Austin and San Francisco, they erode not just safety, but shared expectations of responsibility and coordination (The Verge, 2025)9. Machines may follow code, but cities run on social context—and that mismatch remains a challenge.
Even so, the long-term benefits remain compelling. If AVs are deployed at scale with proven safety, studies suggest up to 34,000 lives could be saved annually in the U.S. alone by eliminating human error in driving (Vox, 2025)10. Beyond safety, robotaxis promise increased mobility for those unable to drive—elderly populations, the disabled—and have the potential to reduce congestion, emissions, and inefficiencies in logistics. The scope of impact positions AVs as not just safer cars, but catalysts for structural transformation in mobility, urban planning, and accessibility (Travel and Tour World, 2025)11.
Yet, reaching that future demands not just better AI but better governance and public dialogue. Advances like "socially aware AI" systems that mimic human ethical judgment (HKUST, 2025)12 or federated learning models that let AVs learn from each other's mistakes (NYU, 2025)13 hint at a more resilient future. But no amount of code can substitute for legitimacy. That must be earned.
Tesla's robotaxis in Austin are more than test vehicles. They are the first real-world manifestation of autonomous robotics operating without a human in the loop—a first case study in how we might live alongside machines that act independently. Each deployment is a cultural signal that robots are no longer confined to factories or labs. Like elevators once were, they start as novelties, then become essential infrastructure. But unlike elevators, these machines operate in unpredictable environments, with no shaft, no rails, and no boundaries. That difference is why public trust will take longer to earn.
The road ahead is neither smooth nor certain. Autonomy will not arrive all at once, nor everywhere at once. But the trajectory is clear: increasingly intelligent systems will take on roles once reserved for humans. Whether they are called robotaxis, domestic robots, or AI caregivers, the underlying shift is the same. We are beginning to cohabitate with decision-making machines. The question is not if we will adopt them, but how—and whether we shape their integration with foresight, accountability, and care. The driverless future isn’t approaching. It’s already arrived. The only question left is whether we are prepared to share the road. And more importantly, are we prepared to share our world with a future shaped by autonomous machines—an ecosystem of robotic AI agents navigating our spaces, making decisions, and quietly redefining what it means to live together?
What would it take for you to trust a machine with your life—and are we already closer to that moment than we realise?
PERSPECTIVES
In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI.
— Jakob Engel-Schmidt, Danish culture minister, Denmark to tackle deepfakes by giving people copyright to their own features, The Guardian
I think it forces both designers and clients to rethink the value of designers. Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?
— Sendi Jia, Graphic artists in China push back on AI and its averaging effect, The Verge
SPOTLIGHT
I'm Expedia's marketing chief. Here's how we're preparing for a future when people use AI to plan their vacations
Expedia CMO Jochen Koedijk says search boxes are dying and he wants the travel giant to be an AI first-mover before voice agents and chatbots take over trip-planning. Expedia is already feeding ChatGPT, Copilot, and OpenAI’s Operator with its inventory, testing an Instagram Reels tool that can identify a dreamy video clip, surface the exact destination, and hand buyers ready-to-book options. As AI overviews in Google Search erode old traffic patterns, Koedijk is shifting more marketing in-house, watching which queries trigger Gemini summaries, and betting that the classic inspiration-to-booking “funnel” will soon collapse into a single, conversational step. (Business Insider)
___
» Don’t miss our analysis—full breakdown below ⏷
TEASER: What if your next holiday wasn’t planned, but summoned? As AI tools like Expedia’s evolve beyond search boxes and blue links, we’re entering a world where the web builds itself around your desires in real time. This Spotlight explores how frictionless travel planning could transform not just vacations, but the very fabric of the internet, and why that might be both brilliant and unsettling.
IN-FOCUS
AI can now stalk you with just a single vacation photo
A startling Vox investigation shows that today’s multimodal AIs can identify the exact beach in your family snapshot, turning a harmless holiday post into a roadmap for stalkers. Journalist Kelsey Piper demonstrates how models like OpenAI’s new o3 collapse the effort once needed to track someone online, stripping away our old “security through obscurity.” From pinpointing locations to autonomously emailing authorities, AI’s cheap, scalable surveillance power is racing ahead of laws designed for a slower, human-limited era. Piper argues that personal caution—fussing over cookies or permissions—won’t cut it; only robust regulation can keep tomorrow’s chatbots from turning every shared photo into a privacy breach. (via Vox)
» QUICK TAKEAWAY
AI has collapsed the cost of surveillance: what once demanded armies of analysts now takes a single prompt and a holiday snapshot. That makes every casual post, your beach photo, your coffee-shop selfie, a potential breadcrumb trail to your doorstep. The bigger issue isn’t just personal caution; it’s that our legal and social frameworks still assume privacy breaches are hard and costly. They aren’t anymore, and regulation hasn’t caught up.
» A FUTURE SOLUTION
Imagine a “privacy blockchain” where each image or post is cryptographically watermarked, with usage rights enforced by smart contracts. Every AI system would have to check the ledger before processing personal content, or trigger an auditable alarm. It wouldn’t stop bad actors, but it could give regulators and courts a clear trail—and maybe deter the biggest violators.
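As a thought experiment, the core check could be sketched in a few lines of Python. Everything here is hypothetical: a plain dictionary stands in for the ledger, and there is no real watermarking or smart-contract layer — the point is only to show the "check before processing, log any denial" pattern.

```python
import hashlib

# Purely illustrative — no such ledger exists. A dictionary stands in
# for the blockchain; all names and structures are hypothetical.
LEDGER = {}      # content hash -> set of permitted purposes
AUDIT_LOG = []   # the "auditable alarm" trail for regulators

def register_image(image_bytes, rights):
    """Record an image's usage rights, keyed by its content hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    LEDGER[digest] = set(rights)
    return digest

def may_process(image_bytes, purpose):
    """Gate an AI system's access: consult the ledger before
    processing personal content, and log any denied attempt."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    rights = LEDGER.get(digest)
    allowed = rights is not None and purpose in rights
    if not allowed:
        AUDIT_LOG.append((digest, purpose))  # auditable trail
    return allowed
```

In this sketch, a holiday photo registered only for "display" would be refused for "geolocation", and the refusal itself would leave a record — the clear trail the paragraph above imagines.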
Duolingo's CEO says workers need a 'mind shift' about AI
In an interview with The Financial Times, Duolingo boss Luis von Ahn clarifies his much-debated “AI-first” email: the plan isn’t mass layoffs, but a radical rethink of how the language-learning giant works. AI will draft lessons in hours instead of months, turning employees into “creative directors” rather than line-by-line content builders. Only a handful of repetitive-task contractors were let go, the company insists, while full-time hiring continues and subscriber growth remains strong. Still, a social-media backlash shows how skittish customers and investors are about automation—as Duolingo races to prove that smarter tools can mean better courses, not fewer teachers. (via Quartz)
Getty argues its landmark UK copyright case does not threaten AI
Getty Images has hauled Stability AI into London’s High Court, accusing the start-up of illegally scraping millions of Getty photos to build the popular Stable Diffusion model. Stability warns the lawsuit endangers the whole generative-AI industry, but Getty insists it’s merely defending creators’ rights and that “AI and copyright can coexist in harmony — just not for free.” With courts and lawmakers watching closely, the outcome could set the first major UK precedent on whether training data counts as fair use or infringement, shaping where investors place their next AI bets and how every future model is built. (via Reuters)
Anthropic’s AI-generated blog dies an early death
Anthropic yanked its month-old “Claude Explains” blog, meant to flaunt Claude’s writing chops, after critics panned the vague line between AI and human edits and likened the posts to automated content-marketing fodder. Despite plans to broaden into everything from creative writing to business strategy, the firm quietly redirected the page to its homepage, signalling the reputational risk of publishing AI prose that may hallucinate or mislead. The swift retreat underscores how even top labs remain cautious about over-claiming AI abilities while the tech’s accuracy and public tolerance lag behind the hype. (via Techcrunch)
HOT TAKE
LinkedIn is becoming an AI wasteland. Let’s reclaim it for humans
AI-generated posts and cookie-cutter “Great insights!” comments are flooding LinkedIn, warns AI trainer Leanne Shelton, turning a once-vibrant networking hub into a sterile bot-to-bot echo chamber. Shelton isn’t anti-AI; she’s anti-outsourcing-your-voice. Tools like ChatGPT can be used to brainstorm, tidy drafts, and spark ideas, but handing them the mic strips out the personal stories, vulnerability, and genuine insight that forge real connections. Her fix: keep AI backstage as a glamorous assistant while you take centre-stage. Read the post you’re replying to, add a lived experience, ask if the words could have come from anyone, and always layer back your perspective before hitting “publish.” The power of LinkedIn, she argues, lies not in volume but in authentic human value—something no algorithm can replicate. (via Mumbrella)
» OUR HOT TAKE
The creeping normalisation of AI-generated content on professional platforms like LinkedIn reflects a deeper shift away from authentic human expression toward a gamified attention economy driven by superficiality. While AI's potential as a collaborative tool is immense—acting as a sophisticated assistant that can refine, summarise, and enhance human input—many users are surrendering authorship entirely, effectively outsourcing their voice to machines. This full handover results not only in trite, hollow posts but also in the eerie emergence of a “digital twin” dynamic, where one's online persona is increasingly fabricated by algorithmic logic rather than genuine thought. The risk isn't just dull content—it’s the erosion of responsibility and presence, where users let AI attend their professional conversations like nametag-wearing robots at a networking event. If this trend continues unchecked, the danger is not that AI becomes too powerful, but that we collectively forget what it means to show up as ourselves in the first place.
» Listen to the full Hot Take
Creative Machinas // Hot Take: Step Aside, Human: Who’s Really Writing Those LinkedIn Posts?
LinkedIn is becoming an AI wasteland. Let’s reclaim it for humans
FINAL THOUGHTS
The future didn’t ask for permission. It just pulled into traffic.
___
FEATURED MEDIA
Riding the Wayve with Sir Richard Branson
Some innovations will always seem like science fiction – and for me, self-driving is one of them
—Richard Branson
Come with Sir Richard Branson on his first autonomous car ride driven by the Wayve AI Driver. The 25-minute autonomous ride, organised as part of the grand opening of Virgin Hotels London in Shoreditch, was hosted by Wayve Co-founder and CEO, Alex Kendall.
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
⚡ From Free to Backed: Why This Matters
This section is usually for paid subscribers — but for now, it’s open. It’s a glimpse of the work we pour real time, care, and a bit of daring into. If it resonates, consider going paid. You’re not just unlocking more — you’re helping it thrive.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
Summoning the Trip: AI, Expedia, and the End of Browsing
When Expedia's CMO declared that his children might never type a search query into a box, it wasn't just a marketing flourish. It was a line in the sand. The future he sketched out, where travel is summoned rather than searched, pushes us into the territory of what some are calling the generative web. It's not just a change in interface. It's a dismantling of the assumptions baked into the structure of the internet itself.
For nearly two decades, the web has been navigated by users entering queries, browsing results, reading reviews, and assembling their own decisions. Web 2.0 offered control, comparison, and personal agency. AI, as imagined by Expedia and others, seeks to replace that friction with fluency: send a video, make a voice request, and receive a complete itinerary tailored to your preferences. This is not evolution. It's a reset.
And therein lies both the promise and the tension.
The Promise: Hyper-personalisation Meets Convenience
The AI-enabled travel experience is pitched as frictionless, intelligent, and deeply personalised. It leverages existing data trails, user preferences, social content (like Instagram Reels), and real-time models to generate contextual recommendations and bookable outcomes. In this world, AI is the new travel agent: faster, more responsive, and better resourced. It packages, sorts, predicts, and optimises — all while claiming to know you better than you know yourself.
This model collapses the traditional marketing funnel. Inspiration, comparison, and transaction are no longer stages but simultaneities. You see a post, send it to the AI, and it offers availability, pricing, seasonal recommendations, and booking options instantly. For many, particularly those fatigued by endless tabs and review rabbit holes, the sheer volume of decision-making — sifting through reviews, comparing prices, checking availability — makes the prospect of an AI handling it all an increasingly attractive one.
The Resistance: Autonomy, Trust, and the Value of the Journey
Yet this friction — the very process of searching, comparing, and researching — is not universally seen as a problem. For some, it is the trip before the trip. It's part of the anticipation and decision-making ritual. The risk, as one speaker put it, is that "we're going full circle back to travel agents, but this time they're machines." And not everyone wants to surrender that agency.
There are also valid concerns about trust and accountability. Who do you turn to when the AI-generated trip disappoints? What happens if the itinerary is subtly wrong, if the hotel doesn’t match the generated preview, or if expectations are misaligned? In the current system, platforms like Booking.com or human agents offer recourse. With AI, the chain of responsibility becomes murky.
Moreover, a reliance on AI-curated packages may stifle exploration. If the machine knows you too well, it may only offer options within your established preferences, limiting the serendipity and spontaneity that travel is often prized for. As the conversation noted, this is especially critical when dealing with more complex, open-ended trips like safaris, multi-city tours, or loosely defined adventures.
The Collapse of Web Grammar: AI as the Interface
At its core, this transformation is about more than travel. It signals a structural shift in how the web is experienced. Traditional websites, with their navigation menus and visual hierarchies, may give way to AI-generated front ends that assemble dynamically in response to user queries. In this vision, AI is not just a tool inside the site — it is the site.
This shift carries profound implications. It challenges the business models of platforms dependent on advertising and affiliate traffic, reshapes the concept of SEO, and alters user expectations around interface grammar. The destination website becomes less relevant. What's valued is the generative layer that pulls content into personalised, task-oriented containers.
What emerges is a hybrid model: AI curates and packages options, but humans still engage, adjust, and verify. The sweet spot likely lies in systems that allow users to toggle between guided experience and self-directed research. AIs that know when to push, when to pause, and when to invite the human back into the loop. The machine-driven holidaymaker is part planner, part predictor, part butler — a tireless assistant who books before you blink, anticipates your hesitations, and cheekily insists it knows what you want before you do.
Key Insights and Takeaways
The Generative Web is Post-Browsing
AI systems like Expedia's prototype anticipate a future where users no longer search and compare, but instead issue prompts that generate contextual responses. This dismantles the blue link model of Google-era search.

Friction Has Value
While AI removes overhead, not all users want that. For some, planning is part of the joy. The risk is in designing systems that prematurely collapse options into a single outcome.

Trust and Recourse Are Unclear
If an AI-generated trip fails, who takes responsibility? Trust systems and safety nets must evolve alongside the tech.

AI as the Interface Layer
Websites may be replaced by real-time, AI-generated interactions that assemble information on demand. The interface itself becomes generative.

Hybridity Will Define the Transition
A middle ground between full automation and full human control will likely define early adoption. Systems that offer AI-generated scaffolding but retain space for user exploration will resonate more widely.

Data Familiarity is Power
AI recommendations will hinge on deep user profiles. The more data points known, the more predictive and persuasive the system becomes, raising new concerns around consent and surveillance.
The Verge. (2025). Here’s a running list of all of Tesla’s robotaxi mishaps so far. https://www.theverge.com/news/692639/tesla-robotaxi-mistake-wrong-lane-phantom-braking
Reuters. (2025). Robotaxis go from hype to maybe, possibly, profit. https://www.reuters.com/breakingviews/robotaxis-go-hype-maybe-possibly-profit-2025-06-04/
Vox. (2025). New study finds self-driving cars safer than human drivers. https://www.vox.com/future-perfect/411522/self-driving-car-artificial-intelligence-autonomous-vehicle-safety-waymo-google
FOX 7 Austin. (2025). The Dawn Project live demonstration of Tesla robotaxis. https://www.fox7austin.com/video/1661189
Vox. (2025). New study finds self-driving cars safer than human drivers.
KVUE. (2025). Texas lawmakers urge Tesla to delay Robotaxi rollout - Austin. https://www.kvue.com/article/money/cars/tesla-robotaxi-rollout-delay-lawmaker-letter/269-e8abda1d-b9ba-4c8c-8f16-f4e589e9c650
TPR. (2025). Lawmakers urge Elon Musk's Tesla to delay Austin robotaxi launch until new safety law takes effect. https://www.tpr.org/economy-and-labor/2025-06-23/lawmakers-urge-elon-musks-tesla-to-delay-austin-robotaxi-launch-until-new-safety-law-takes-effect
AP News. (2025). Musk finally rolls out his driverless Tesla taxis after years of promises. https://apnews.com/article/tesla-selfdriving-robotaxis-musk-waymo-austin-autonomous-d50749e288dc50ceff44b8e6a4b64961
The Verge. (2025). Here’s a running list of all of Tesla’s robotaxi mishaps so far.
Vox. (2025). New study finds self-driving cars safer than human drivers.
Travel and Tour World. (2025). AI Behind the Wheel: How Self-Driving Cars Are Revolutionizing US Road Trips and Reshaping the Future of Travel. https://www.travelandtourworld.com/news/article/ai-behind-the-wheel-how-self-driving-cars-are-revolutionizing-us-road-trips-and-reshaping-the-future-of-travel/
Interesting Engineering. (2025). Autonomous cars that 'think' like humans cut traffic risk by 26%. https://interestingengineering.com/transportation/autonomous-vehicle-safety-upgrade-hkust
NYU Tandon. (2025). Self-driving cars learn to share road knowledge through digital word-of-mouth. https://www.sciencedaily.com/releases/2025/02/250227165756.htm