machina.mondays // Hey Siri, "What’s the Point?"
You think you're cool using ChatGPT, but has it told you its end game?
In this Issue: We examine the widening gulf between AI ambition and public trust—where consumers are no longer awed, but alienated. From the glossy promises of “smart” devices to subscription-locked assistants no one asked for, this issue explores how the AI future is being built without a compelling “why.” With insights from Pew, Fortune, and OrgVue, we unpack a growing backlash rooted not in fear, but in rational doubt. Is AI becoming another 3D TV moment—impressive tech that misunderstands the user? And if so, who is this future really for?
A Subscription to Nowhere: The AI We Didn’t Order
In the race to define the AI future, someone’s been left behind: the public.
For tech insiders, AI feels like destiny—a high-speed train already in motion. Engineers, executives, and evangelists tout breakthroughs in real-time summarisation, multimodal vision, and semantic retrieval. Devices are “getting smarter,” chatbots are everywhere, and AI models now write poetry, pass legal exams, and conduct customer support calls. But in this world of frictionless automation and glossy launch videos, something is missing: a compelling reason for the everyday person to care.
A new Pew Research Center study reveals just how stark the divide has become. While 56% of AI experts say AI will have a positive impact on the United States over the next 20 years, only 17% of the public agree [1]. Nearly half of experts are “more excited than concerned” about AI. Just 11% of U.S. adults feel the same. It’s not just a matter of differing expertise—it’s a matter of divergent realities.
From a consumer point of view, it feels like the tech giants are solving problems nobody asked them to solve—justifying it with slogans and speculative promises.
The issue isn’t awareness. Over 90% of U.S. adults have at least heard of AI tools like ChatGPT or Gemini. But enthusiasm lags. Only a third of U.S. adults say they’ve ever used an AI chatbot, and even among those who have, only 33% found it “very helpful”—compared to 61% of experts [2]. The prevailing public mood is not one of curiosity or empowerment—it’s concern, confusion, and growing disconnection.
This mismatch isn’t accidental. In fact, it echoes a familiar pattern: tech hype that forgets the user.
From 3D TV to Alexa: When Innovation Skips the "Why"
In our recorded discussions, we kept returning to an uncanny parallel—3D television. Remember it? A marvel of display engineering, trumpeted by TV companies as the next great leap in home entertainment. And yet, it flopped. Why? Because wearing awkward glasses to watch limited content didn’t solve a real problem for most people. It misunderstood both the context and the consumer.
We see similar missteps today. Consider Apple’s announcement that “Apple Intelligence” will only be available on the iPhone 15 Pro and up—devices representing only a small fraction of its global customer base. Or Amazon’s move to place Alexa’s new AI features behind a subscription paywall. These are not decisions rooted in user needs. They are commercial manoeuvres built around legacy hardware pipelines and investor pressure.
And crucially, they fail to answer the one question that matters most to the public: Why do I need this? What problem does this solve for me? That absence of a compelling "why"—a clear, grounded benefit—explains why consumers are sceptical. It's not resistance to technology; it's resistance to irrelevance. Just like with 3D TV, consumers are being told that the tech is extraordinary, but not why it's necessary. And without that connection between innovation and lived experience, the hype collapses under its own weight.
The Consumer Isn't Cynical—They’re Rational
Surveys show the public isn’t naïvely anti-tech. They’re making reasoned judgments. According to a recent OrgVue report, 59% of U.S. workers are unclear how AI will improve their productivity or work experience, even as their employers pour money into it [3]. Another study finds that 71% of companies are investing in AI but admit they have little evidence of actual impact [4].
The real question isn't “Why aren’t people more excited about AI?” It’s “Why haven’t companies done a better job showing what AI is for?”
Too often, the answer is that AI is being shoehorned into existing business models—not to offer genuine breakthroughs, but to justify hardware refresh cycles and subscription services. As Nada Sanders argues in Fortune [5], companies are attempting to apply AI to outdated frameworks rather than rethinking their approach entirely. The result is not transformation, but entrenchment. As she notes, “Adding AI to a business model of the past doesn’t lead to competitiveness—it simply solidifies old processes.” In essence, AI becomes a high-tech patch for legacy systems, rather than a portal to new possibilities.
Spencer Fung, CEO of Li & Fung, captures it vividly: “Companies acquiring AI without a new business model is like a company digitizing a horse and carriage—while the competition has created a digital automobile” [6]. The implication is clear: AI without structural change isn’t progress—it’s stagnation in a futuristic costume.
In this light, AI becomes less of a transformative force and more of a tactical update—an innovation-shaped excuse to keep existing pipelines flowing.
Consumers instinctively sense this. They aren’t rejecting AI itself; they’re rejecting the feeling that it’s being used to extract more from them rather than deliver more to them. And without a clear, compelling "why," these initiatives risk falling into the same trap as 3D TV: technically impressive, but experientially hollow. The result is scepticism not rooted in fear, but in a rational understanding that so far, the future isn’t being designed with them in mind.
Consumers don’t need another language model baked into a $2,000 phone. They need clarity on how these systems improve their daily lives—how they help them shop, navigate, communicate, or make decisions. They want benefits that are tangible, not abstract.
Beyond the Hype Cycle: A Different Approach to AI Futures
In our upcoming newsletters, we’ll unpack what this moment really requires: a turn away from platform-centred hype, toward human-centred design. We'll ask what it would look like to genuinely listen to user needs—rather than treat them as afterthoughts or conversion metrics.
We’ll also argue that the most successful AI futures won’t be built by the current tech incumbents. Their models are often constrained by sunk costs and legacy thinking. Just as Netflix, not Blockbuster, redefined television, we believe a new class of players—more attuned to user behaviour, less beholden to hardware sales—will shape the next great AI leap.
Until then, one thing is clear: if the future is to be truly intelligent, it needs to be intelligible.
PERSPECTIVES
All … streams of government are going to change with [AI] technology
Right now, Big Tech and AI companies are using publishers’ own content against them, taking it without authorization or compensation to power AI products that pull advertising and subscription revenue away from the original creators of that content
—Danielle Coffey, President and CEO of the News/Media Alliance
SPOTLIGHT
Apple Reportedly Developing AI Agent ‘Doctors’ in Latest Health Push
Apple is advancing its health AI efforts with Project Mulberry, an initiative expanding on Project Quartz to deliver personalised health advice through its Health app. Launching as early as iOS 19.4, it incorporates expert medical content and ventures into food tracking, competing with apps like MyFitnessPal. Meanwhile, healthcare firms are rapidly increasing GenAI investments, expecting strong returns.
» Don’t miss our analysis—full breakdown below. ⏷
IN-FOCUS
Some people think AI writing has a tell — the em dash. Writers disagree
Some social media users claim the em dash is a giveaway for AI-generated writing, dubbing it the “ChatGPT hyphen,” but writers and linguists strongly disagree, defending it as a nuanced, expressive punctuation mark. The debate highlights broader tensions around authorship, creativity, and the evolving influence of AI on language.
QUICK TAKEAWAY
As AI becomes a normalised layer in everyday writing processes, the obsession with detecting its fingerprints—through quirks like punctuation or word choice—reveals a deeper anxiety about authorship and authenticity. The idea that using an em dash or certain phrasings could trigger accusations of AI involvement exposes just how brittle our frameworks for originality have become. We’re not confronting the real shift: creativity is already entangled with machines. Tools like Grammarly and GPT aren’t replacing writers—they’re becoming part of the cognitive workflow, refining, iterating, extending what starts as human thought. The problem isn’t AI mimicry; it’s our failure to adapt our ethics, evaluation models, and cultural narratives to hybrid authorship. This isn’t about catching cheaters. It’s about acknowledging that the terrain of writing has fundamentally changed—and trying to police it with 20th-century thinking is not just futile, it’s regressive.
Gen Z uneasy about AI, but still using it
Despite widespread use, many Gen Zers remain anxious about AI, highlighting a strong desire for clear policies in schools and workplaces to guide its responsible and confident use.
From AI Barbie to 'Ghiblification' - how ChatGPT's image generator put 'insane' pressure on OpenAI
OpenAI’s ChatGPT-4o image generator has gone viral with Barbie and Ghibli-style recreations, sparking record user growth—and controversy over GPU strain, artistic exploitation, and copyright concerns.
Lord Of The Rings Turned Into Studio Ghibli AI Slop Looks Like Garbage
OpenAI’s Ghibli-style AI remake of The Lord of the Rings trailer has drawn sharp criticism for producing lifeless, crude visuals, with critics calling it an artistic failure and an example of how generative AI often prioritises viral hype over genuine creativity or respect for original art.
HOT TAKE
A new vacuum can alert you to incoming text messages. Why?
Samsung's new AI-powered appliances, like a vacuum that shows text alerts, highlight a trend of adding smart features to home devices—often more for buzz than real need. While some functions offer convenience or energy savings, many consumers remain unconvinced, citing high costs, privacy concerns, and limited practical value. Adoption is still low, with most buyers preferring simpler, reliable appliances over flashy AI upgrades.
HOT TAKE
The absurdity of “smart” appliances — from vacuum cleaners that notify you of texts to washer-dryers that pretend to take calls — is more than just laughable; it’s symptomatic of a wider problem infecting AI consumer tech. Like the 3D TV fiasco of the past, we’re watching a familiar cycle where companies lead with tech gimmicks, not user value. These feature-stuffed devices, often justified with vague promises of convenience, actually erode meaningful interaction by adding pointless layers between us and our everyday tasks. They don’t solve real problems; they manufacture needs. AI, when inserted in this way, becomes a buzzword plastered onto redundant features — leading consumers to ask the very question that spells death for innovation: “Why?”
And here’s the real concern — these gimmicks are setting up AI for a trust crash. As consumers grow tired of buying hardware that only works with paid subscriptions, or “smart” features that stop functioning after a service shutdown, we risk undermining genuine progress in AI. This is not ambient intelligence; it’s rent-seeking in disguise. The overreach of pseudo-AI use cases will breed scepticism and fatigue, making people laugh at, not lean into, AI. If the industry doesn’t course-correct, it won’t be the technology that fails — it’ll be the consumer’s willingness to engage with it.
Listen to the full Hot Take
FINAL THOUGHTS
If the future speaks in code no one understands, who is it really for?
A truly intelligent future isn’t just powerful — it’s understandable, usable, and human
FEATURED MEDIA
AI Threats and Trustworthiness
Deepfakes are everywhere—in every election... we've seen AI used to manipulate people
—Dr. Latanya Sweeney, a prominent computer scientist and expert in technology and public policy
The video traces long-standing fears about intelligent technologies, from myths like the Golem to modern AI. It distinguishes real risks—like job loss, privacy, or biased systems—from exaggerated sci-fi fears. Key concern: the growing threat of deepfakes, especially in global elections and public trust.
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
Branded Intelligence: The Next Frontier of Corporate AI Entities
In this emerging paradigm of digital services, we are witnessing the birth of a new category of AI-powered entities—software-based constructs developed and branded by major corporations, designed to offer hyper-specialised, personalised services. The discussion anticipates a strategic pivot from traditional hardware-centric models toward “entity products,” such as an Apple Doctor or Nike Coach, that encapsulate both brand identity and domain-specific functionality. These entities go beyond the concept of agents by embodying a persistent branded intelligence—subscribed to, interacted with, and trusted by users. While the current ecosystem is still embryonic, it’s evident that this model could mature into a dominant layer in the product-service architecture of future corporations, especially in wellness, health, and fitness. The most compelling potential lies in creating affordable, always-available, hyper-personalised AI companions that blend data, brand loyalty, and cognitive assistance in new and intimate ways.
However, the path forward is not without critique. The current hesitation of companies like Apple to fully commit to this vision is seen as a missed opportunity. Despite leading the field with foundational tools like Apple Health, the ecosystem remains fragmented—requiring users to supplement it with third-party apps like MyFitnessPal for advanced functionality. This piecemeal strategy reflects a broader corporate ambivalence about whether to double down on AI-led transformation or preserve traditional service roles. A central frustration emerges from this lack of clarity: many corporations appear to flirt with disruption but fail to demonstrate the boldness required to lead it. Instead of decisively owning the wellness or diagnostic spaces, companies are caught in a PR-driven balancing act—offering AI capabilities while trying not to trigger concerns about job displacement or over-automation. This hedging weakens innovation and delays the inevitable evolution of AI-integrated personal services.
Yet the long-term trajectory seems unstoppable. AI’s incursion into medicine is not merely speculative—it’s demonstrably effective. Referencing studies where AI outperforms doctors in diagnostic accuracy, the discussion highlights how rapid, comprehensive data parsing by AI could dramatically reduce waiting times and improve outcomes. But the vision here is not one of replacement—it’s of augmentation. Drawing on Garry Kasparov’s model of human-AI synergy, the analysis affirms that the most powerful future lies in collaborative intelligence. AI may handle the bulk of analysis, but human judgment, lateral thinking, and intuitive insight remain vital—particularly in fields where stakes are high and variables are complex. In this hybrid future, the doctor remains, but as a partner to the machine, not its rival. This balanced model has the potential to redefine trust, expertise, and care in a future where branded AI entities become indispensable allies in everyday life.
[1] Pew Research Center. (2025, April). How the U.S. Public and AI Experts View Artificial Intelligence. https://www.pewresearch.org/
[2] Ibid.
[3] OrgVue. (2024). AI in the workforce: Generational differences and organisational planning. https://www.orgvue.com/resources/articles/ai-in-the-workforce-generational-differences-and-organizational-planning/
[4] HR Dive. (2024). Companies are investing in AI — but aren’t sure about the impact. https://www.hrdive.com/news/companies-investing-in-ai-but-arent-sure-about-impact/710550/
[5] Fortune. (2024, October 1). The real reason 75% of corporate AI initiatives fail: Leadership, not tech. https://fortune.com/2024/10/01/real-reason-75-corporate-ai-initiatives-fail-leadership-tech/
[6] Ibid.