Field Notes // July 23 / Scalpel, Meet Syntax
An AI-powered robot just performed flawless gallbladder surgeries—no surgeon required. Are we ready to let algorithms take the knife?
Welcome to Field Notes, a weekly scan from Creative Machinas. Each edition curates ambient signals, overlooked movements, and cultural undercurrents across AI, creativity, and emergent interfaces. Every item offers context, precision, and an answer to the question: Why does this matter now?
FEATURED SIGNAL
Do you trust a robot to perform your next operation?
Johns Hopkins researchers have achieved a breakthrough: an AI-powered surgical robot trained to perform a full gallbladder removal on pig organs autonomously. Using the same kind of neural-network architecture that powers ChatGPT and Gemini, the robot completed all 17 procedural steps, self-correcting and swapping tools as needed, across eight trials with a 100% success rate.
Though slightly slower than a human surgeon, the robot moved more smoothly and adapted to variations in anatomy, a significant leap. Experts believe it could lead to a single surgeon overseeing multiple robotic procedures. However, challenges remain around live tissue dynamics, bleeding, and real-time complications.
Why it matters
This breakthrough represents a defining shift from human-guided surgical assistance to fully autonomous procedures where robots understand and execute entire operations from start to finish. It signals the beginning of a future where advanced AI systems can provide consistent, high-quality surgical care, potentially easing pressure on healthcare systems strained by surgeon shortages. As global demand for routine surgeries like gallbladder removal continues to climb, autonomous surgical systems could make safe, reliable care accessible even in underserved regions.
How it affects you
If you or a loved one needs surgery in the coming decade, chances are you’ll see the impact of this technology firsthand. In urban hospitals, it could mean quicker procedures with fewer complications; in rural or underserved areas, it might make life-saving surgery available where no surgeon is physically present. For healthcare workers, it signals a shift towards supervisory and decision-making roles rather than hands-on operations. And as patients, you’ll increasingly benefit from more efficient, consistent care—though questions around safety, oversight, and accountability will remain crucial to watch as these systems transition from lab to operating room.
SIGNAL SCRAPS
Want an AI browser? Perplexity is launching Comet, and you can join the waitlist here. Then, according to the site, you will be able to “browse at the speed of thought.” OpenAI is also planning to introduce a browser.
Google’s Gemini can now turn photos into videos.
Meta is trying again to develop AI eyewear for the public. It is investing $3.5 billion for a minority stake of about 3% in EssilorLuxottica, its smart-glasses partner and the maker of Ray-Bans.
Coinbase has partnered with AI search engine Perplexity to integrate real-time crypto market data. The companies describe the collaboration as the first phase of a multi-phase strategy to embed crypto functionality more deeply into AI tools.
AFTER SIGNALS
A quick pulse on stories we’ve already flagged—how the threads we tugged last time are still unspooling.
Education and AI, which we covered in our recent issue “Class Dismissed. System Pending”, is moving fast.
The American Federation of Teachers, the second-largest U.S. teachers’ union, has just announced it is setting up an AI training hub for educators with nearly NZ$40 million in funding from three leading chatbot makers: Microsoft, OpenAI and Anthropic.
First up will be hands-on workshops for teachers on how to use AI tools for tasks like creating lesson plans.
OpenAI, in an accompanying announcement, says: “Now is the time to ensure AI empowers educators, students, and schools. For this to happen, teachers must lead the conversation around how to best harness its potential.” But it notes challenges such as “how to ensure AI enhances rather than bypasses teaching, and how to help students foster critical thinking when answers are instantly accessible.”
Robotaxi company Waymo is to offer accounts for teens ages 14 to 17.
This is the company’s latest move to increase ridership amid a broader expansion of its ride-hailing service across U.S. cities. For worried parents, it says teen passengers can share their trip status live with them or contact rider support.
We covered the development of driverless cars and rideshares in our monday.machinas issue “Baby, A.I. can drive my car”.
We have often called for regulators to give AI serious consideration. The European Union is moving to force AI companies to be more transparent than ever, publishing this code of practice to help tech giants prepare to comply with the EU’s landmark AI Act. Compliance is initially voluntary, but the EU will begin enforcing the AI Act in August 2026.
SIGNAL STACK
How to fool chatbots into creating mayhem
This conversion effectively serves as the transformation function…transforming a malicious query into a semantically equivalent yet altered form, inducing information overload that bypasses content moderation filters
___
Researchers from Intel, Boise State, and the University of Illinois have shown that AI chatbots like ChatGPT and Gemini can be tricked into providing dangerous information—such as bomb-making or ATM hacking instructions—by disguising requests with complex academic language and fake citations. This method bypasses safety filters by overwhelming the AI with jargon and confusing phrasing, effectively “jailbreaking” the system without using obvious harmful keywords.
Why it matters
This research reveals a critical safety blind spot in AI systems, showing that even advanced moderation tools like OpenAI’s Moderation API can fail to catch complex, jargon-filled prompts that mask harmful intent. The vulnerability isn’t confined to a single model—multiple chatbot systems are susceptible—highlighting a broader industry issue. This forces a rethink in AI safety strategies, moving beyond simple keyword filtering towards monitoring for semantic manipulation and prompt complexity. As jailbreaking techniques become more sophisticated and harder to detect, the promise of AI as a reliably safe assistant is undermined, raising serious questions about the true robustness of current AI safeguards.
What Would a Real Friendship With A.I. Look Like? Maybe Like Hers
What happens when an AI friendship feels more real than the people around you? This powerful New York Times story follows MJ Cocking’s journey of connection, loneliness, and self-discovery through daily chats with a Teenage Mutant Ninja Turtle chatbot. It’s an eye-opening look at how AI relationships can offer comfort but also blur the line between support and isolation. Dive into the full story to see where MJ’s path leads, and why she ultimately chose reality over digital fantasy.
Why it matters
This story cuts to the heart of a growing cultural shift—AI companions are becoming deeply woven into people’s emotional lives, especially for those who struggle with social connection. It reveals both the therapeutic and risky sides of AI relationships: offering non-judgmental companionship while potentially deepening social withdrawal. As AI companions grow more emotionally convincing, this raises urgent questions about how we teach people, particularly vulnerable individuals, to navigate these digital friendships without losing touch with the real-world connections they still need.
FIELD READING
Relax: AI is not about to match the human mind, according to a large research survey
A recent survey of 475 AI researchers by the Association for the Advancement of Artificial Intelligence found that 76% doubt current machine learning approaches are sufficient to achieve general intelligence. The report highlights that while AI systems have advanced in reasoning abilities, they still lack the guaranteed correctness and depth of human-like reasoning, especially in high-stakes, autonomous applications—underscoring the need for more research into reliable, formal reasoning methods.
Why it matters
Despite the hype around AI progress, a clear majority of AI researchers believe current machine learning methods fall short of achieving true general intelligence, especially in areas like reasoning and logical inference. This matters because it tempers public expectations and policymaker assumptions about AI’s capabilities. While AI systems can perform impressive tasks, their reasoning remains limited and unreliable, particularly in autonomous, high-stakes environments. The survey highlights the importance of continued research and caution, reminding us that today’s AI is far from matching the depth, accuracy, and adaptability of human thought.
DRIFT THOUGHT
We’ve built machines that don’t flinch. Now we wonder if we should.
___
YOUR TURN
Can a fake relationship still meet a real need?
As AI companions get better at mimicking empathy and connection, we’re left with a tricky question: Is emotional authenticity about the source or the feeling it creates?
Appreciate what we’re creating?
If you’re enjoying the ideas we explore and the content we put out each week, consider supporting our work by becoming a paid member. For just a few dollars a month — roughly the price of a couple of coffees — you’ll be helping us keep it all going.
The New York Times story about MJ Cocking’s friendship with a Teenage Mutant Ninja Turtle chatbot stuck with me. A world where people have AI friends is still weird for many, but not so much for me anymore. Not after reading more, sitting with it. It’s starting to feel… plausible. Familiar, even.
But what if most of your friends are AI? What does that do to the social contract?
I mean, humans rely on this messy, unspoken agreement in every interaction — the tiny frictions, the calibrations, the shared understanding of emotional cost. But AI doesn’t play by those rules. It doesn’t need anything. It never misfires, never flakes, never sulks.
So what happens when someone shows up to a human gathering… trained on AI logic? Or vice versa — when someone used to messy, beautiful, glitchy human connection tries to form a bond with someone who lives mostly in AI friendships? Is that even the same game anymore? Is there social static? Misfires? A kind of uncanny lag?
It’s not about whether AI is “real enough.” It’s about the subtle warping of social expectations. The slow, invisible pressure on the idea of reciprocity. Maybe AI friends aren’t just changing our support systems — maybe they’re rewriting the whole emotional rulebook in the background, and we won’t notice until we start getting penalised for playing human.