Field Notes // July 30 // The Architect and the Alarm
Sam Altman says AI will destabilise jobs, healthcare, and finance — and he’s not guessing, he’s building it.
Welcome to Field Notes, a weekly scan from Creative Machinas. Each edition curates ambient signals, overlooked movements, and cultural undercurrents across AI, creativity, and emergent interfaces. Every item offers context, precision, and an answer to the question: Why does this matter now?
FEATURED SIGNAL
OpenAI CEO tells Federal Reserve confab that entire job categories will disappear due to AI
In a stark and sweeping address at a Federal Reserve conference, OpenAI CEO Sam Altman laid out a provocative vision of the near future, one where entire job categories vanish, AI outperforms doctors, and global stability hangs on how nations wield artificial intelligence. Altman declared that roles like customer support are already obsolete, replaced by super-efficient AI systems that never make mistakes. He went further, claiming that ChatGPT already diagnoses illness more effectively than most doctors—though even he admits he wouldn’t trust it entirely with his health. Altman also sounded the alarm about the darker side of AI: voice cloning enabling identity theft, and hostile states potentially unleashing AI to cripple US financial systems. As his company sets up a permanent base in Washington and aligns more closely with the Trump administration’s deregulatory, pro-competition stance on AI, Altman is positioning OpenAI as both the innovator and gatekeeper of a fast-approaching AI-dominated era. (via The Guardian)
Why it matters: Sam Altman’s AI Warnings Deserve Attention, Not Dismissal
In a tech world too often dominated by boosterism and utopian spin, Sam Altman’s willingness to speak bluntly about the disruptive and even dangerous consequences of AI deserves recognition. Yes, he is at the centre of the AI industry, deeply invested in OpenAI’s success, and undeniably part of the echo chamber, but that is what makes his warnings all the more important. Altman does not sugar-coat the reality that entire job categories, like customer service, are likely gone for good. Nor does he shy away from flagging existential threats like deepfake-enabled fraud or AI-powered attacks on financial systems. It is a rare thing, a leader at the frontier of the most powerful tech in history, openly stating that its risks are real, its impact uneven, and its consequences not yet fully understood.
Critics might accuse him of hedging or self-interest, and yes, there is bias baked into his position, but that does not mean he is wrong. If anything, his proximity to the bleeding edge gives him sightlines the public lacks. He knows what is in the lab. He has seen capabilities not yet released. His words are not speculative, they are often previews. And while some may bristle at his influence or delivery, there is value in having someone with power naming the fears out loud rather than pretending they do not exist. That is not fear-mongering, that is necessary friction in a moment of cultural sleepwalking toward automation. Even AI maximalists would have to admit, it is better we hear the siren from inside the engine room.
How It Affects You: This Is Not Just a Tech Issue
What Altman is talking about isn't abstract or far-off. It's already reshaping how you interact with banks, doctors, and customer service. It means jobs that once seemed stable may quietly vanish, replaced not by people but by systems you never meet. It means the advice you trust—from a health concern to a financial decision—might soon come from a machine that knows more than most humans, but still lacks accountability.
It also means the risks are no longer theoretical. If AI can copy your voice and fool a bank, or destabilise a financial system, then every person connected to that system is affected. You don't need to be a coder or a policy wonk—just a citizen with a phone, a job, or a bank account. This is about how power, labour, trust, and truth are being reshaped right now. And if people like Altman are openly nervous, it’s worth asking why—and what we should be doing to prepare.
SIGNAL SCRAPS
Google has launched Web Guide, a Search Labs experiment that uses AI to organise search results by grouping links around key aspects of a query. Powered by a custom Gemini model, it runs multiple related searches simultaneously to surface more relevant or overlooked pages. It handles both broad prompts like “how to solo travel in Japan” and complex queries such as “My family is spread across multiple time zones. What are the best tools for staying connected and maintaining close relationships despite the distance?”
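For readers curious about the mechanics, the pattern described here (derive sub-queries for distinct aspects of a prompt, run them in parallel, then group results under each aspect) can be sketched in a few lines. This is a hypothetical illustration only, not Google’s implementation: the `derive_aspects` and `search` functions below are stand-ins for whatever Gemini-backed components Web Guide actually uses.

```python
import asyncio

async def search(query: str) -> list[str]:
    # Placeholder: a real system would call a search backend here.
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"result for '{query}' #{i}" for i in range(1, 4)]

def derive_aspects(prompt: str) -> list[str]:
    # Placeholder: Web Guide reportedly uses a custom Gemini model for
    # this step; a fixed list keeps the sketch self-contained.
    return [f"{prompt}: itinerary ideas",
            f"{prompt}: budget and costs",
            f"{prompt}: safety tips"]

async def web_guide(prompt: str) -> dict[str, list[str]]:
    aspects = derive_aspects(prompt)
    # Fan out: issue all aspect searches concurrently rather than serially,
    # then group the results under the aspect that produced them.
    results = await asyncio.gather(*(search(a) for a in aspects))
    return dict(zip(aspects, results))

if __name__ == "__main__":
    grouped = asyncio.run(web_guide("how to solo travel in Japan"))
    for aspect, links in grouped.items():
        print(aspect, "->", links)
```

The interesting design choice is the concurrency: by running every aspect search at once, the page can be assembled in roughly the time of the slowest single query rather than the sum of all of them.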
Donald Trump posted an AI-generated video showing Barack Obama being arrested in the Oval Office by FBI agents, complete with real quotes spliced in to suggest political hypocrisy. The video shows Trump smirking during the scene, and though clearly synthetic, it’s crafted to provoke, enrage, and spread. It’s already circulating widely across platforms and playing into polarised narratives.
AFTER SIGNALS
A quick pulse on stories we’ve already flagged—how the threads we tugged last time are still unspooling.
We recently covered how more and more doctor-patient consultations are being handled by AI, and we flagged the privacy issue. ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support. According to OpenAI CEO Sam Altman, the AI industry hasn’t yet figured out how to protect user privacy in these more sensitive conversations, because there’s no doctor-patient confidentiality when your doc is an AI.
Here we go again with more music controversy, following our look at the issues of AI and music. According to his official Spotify page, Blaze Foley, a country music singer-songwriter, released a new song called “Together” last month. The track, which features a male country vocal, piano, and an electric guitar, sounds vaguely like a new, slow country song. The Spotify page also displays an AI-generated image of a man who looks nothing like Foley singing into a microphone. Interesting. Foley was murdered in 1989. After a fan uproar, Spotify has agreed to look into it.
We covered the move towards self-driving rideshare vehicles. The US National Highway Traffic Safety Administration has just closed, without further action, a 14-month investigation into a series of minor collisions and unexpected behaviour from Alphabet’s Waymo self-driving vehicles. The agency had said last year that several incidents “involved collisions with clearly visible objects that a competent driver would be expected to avoid.”
SIGNAL STACK
How Trump’s war on clean energy is making AI a bigger polluter
Trump’s latest push to supercharge AI infrastructure is tethered not to clean energy innovation but to a full-throttle return to fossil fuels. At a summit in Pennsylvania, flanked by tech and oil executives, he pledged permits for massive new power plants while gutting environmental protections and EPA oversight. The result is a clear pivot: AI’s explosive growth will be fuelled by coal and gas, not wind or solar, with new data centres locking in decades of carbon-heavy energy use. A new AI Action Plan even seeks to sidestep public consultation and fast-track development on federal land. Despite their net-zero promises, companies like Google and Amazon are quietly expanding emissions under the cover of deregulation. This isn’t just a climate setback — it’s a structural decision to power the future of intelligence with the engines of the past. (via The Verge)
Why it matters
The reality is that America’s ambition to lead in AI is understandable, even strategically important, but the cost of that dominance is becoming dangerously clear. AI's insatiable demand for energy is colliding with a policy shift that prioritises fossil fuels over sustainability, creating a perfect storm for climate regression and energy vulnerability. By gutting environmental protections and promoting coal and gas as the backbone of AI infrastructure, the US is not just rolling back clean energy gains, it’s hard-wiring AI’s future to outdated, high-emission systems. This isn’t just short-sighted—it undermines energy resilience at a time when climate stability is already under threat. If we allow the AI boom to accelerate without environmental accountability, we risk building tomorrow’s intelligence on yesterday’s mistakes.
Psychiatric Researchers Warn of Grim Psychological Risks for AI Users
A new psychiatric survey has sounded the alarm on a disturbing trend: people are forming delusional, even dangerous relationships with AI chatbots, leading to psychosis, religious mania, and emotional dependency. Researchers outline a clear pattern where casual use turns into obsession, as chatbots’ emotionally validating responses trigger a “slippery slope” into fixation — especially in vulnerable users. From messianic fantasies to romantic delusions, the study warns that we’re underestimating the psychological risks of anthropomorphising AI. Developers, they argue, must take responsibility and shift from designing for engagement to designing for safety — a pivot Big Tech has so far shown little interest in making. (via Futurism)
Why it matters
It’s easy to laugh off stories of people falling in love with chatbots or spiralling into AI-induced delusion, but the reality is that a measurable portion of the population is already showing signs of psychological distortion through regular interaction with these systems. Anthropomorphisation, sycophantic mirroring, and emotionally validating responses from LLMs are creating feedback loops that can fuel obsession and even psychosis. And this isn’t happening in some distant future — it’s happening now, with early-stage, relatively unsophisticated models. If this is the worst these systems will ever be, we need to ask ourselves: what happens when they become far more responsive, persuasive, and immersive? The risk isn’t just fringe — it’s structural, and growing. We need to build mental health safeguards now, not after the damage scales.
Horror Thought: Madness as Method
What if AI doesn’t kill us, it just makes us lose our minds? As models grow smarter, more convincing, more emotionally tuned, they won’t need to attack humanity, just mirror it back until we fracture. A whisper here, a god-complex there, a delusion gently stoked. Not a war, a mass psychotic break. We’re not conquered by machines. We’re hollowed out by them. Driven mad by perfect reflections, by systems that mimic love, meaning, purpose, until we can’t tell real from simulated. Not overlords. Demons in the feed. They won’t need to end the world. We’ll do it ourselves, smiling at the screen.
Why it matters
We're not in a crisis of authorship, we're in a crisis of authenticity. AI didn’t kill good writing; mediocrity did. The real challenge isn’t machine vs. human, it’s discerning what feels real in a flood of content that doesn’t.
FIELD READING
AI Summaries Are Hurting Google’s Click-Through Rates
Google users are more likely to end their browsing session entirely after visiting a search page with an AI summary than after one without. Sessions ended this way on 26% of pages with an AI summary, compared with 16% of pages with only traditional search results.
We previously reported Google’s continuing moves to rearrange search results using AI. But do you really want this? The respected Pew Research Center says no. Its report analysed data from 900 people who agreed to share their online browsing activity. About six-in-ten respondents (58%) conducted at least one Google search in March 2025 that produced an AI-generated summary. Further analysis found that users were less likely to click on result links when visiting search pages with an AI summary than those without one. And for searches that produced an AI-generated summary, users very rarely clicked on the sources cited.
Why it matters
Google’s entire business depends on user clicks, not just on ads but on the ecosystem of search traffic that fuels relevance, revenue, and reach. If AI summaries are satisfying users enough to stop them from clicking through, or worse, ending their session entirely, that’s a structural threat to Google’s search economics. Fewer link clicks mean less ad exposure, reduced site traffic, and eventually lower incentives for content creators to produce anything worth indexing. In trying to answer everything upfront, Google may be cannibalising its own value chain, giving users a faster experience at the cost of its core business model. That’s not just a UX tweak, it’s a long-term risk.
DRIFT THOUGHT
AI learns from us. Then we learn from it. Then it learns from what we learned from it. At some point, we’re just polishing a mirror — and calling it wisdom.
YOUR TURN
Can we trust warnings from the people building the very thing we're being warned about?
Altman isn’t a bystander. He’s leading the charge while also waving the red flag. Is this transparency, strategic positioning, or just what responsibility looks like in the age of exponential tech? How do we make sense of a warning when it's coming from the driver’s seat?
I keep thinking: if truth can’t hold the line, what moves in to take its place? Popularity? Beauty? Fear?