machina.mondays // Blinded by Billions: Big Tech’s Kodak Moment Looms
Will Big Tech Miss the Future—Because They’re Too Busy Owning the Present?
In this Issue: We explore why the AI device we actually need still doesn’t exist—and why the tech giants best placed to build it are the least able to do so. Trapped by legacy ecosystems, companies like Apple and Google offer only incremental upgrades, while outsiders inch toward true reinvention. We also dive into YouTube’s booming fake trailer scene, where studios quietly monetise AI-generated content, and examine the rising crisis of digital likeness rights. As AI reshapes everything from hardware to Hollywood, the real question emerges: who is this future being built for?
Trapped by Design: The AI Device No One’s Built (But Everyone Needs)
We are on the brink of a hardware reckoning. For all the breathless talk of artificial intelligence transforming everything, from your workplace inbox to your late-night movie picks, the place where that transformation feels strangest—almost absent—is in your pocket.
Not in the metaphorical sense, but literally: your smartphone. Despite being hailed as a revolutionary leap forward, AI’s interface layer is still funnelled through the same rectangles of glass and aluminium that have dominated the last fifteen years. Why?
Because the incumbents—Apple, Google, Amazon, Meta, even Microsoft—aren’t actually trying to change the device. They’re trying to preserve it.
Apple’s so-called leap into AI, branded as “Apple Intelligence,” is a bolt-on—not a breakthrough. It’s a feature set limited to the iPhone 15 Pro and above, carefully calibrated to drive yet another hardware refresh cycle (MacRumors, 2025)1. Amazon hides its new AI Alexa features behind a subscription. Google’s Gemini remains an awkward addendum to its existing services, and Microsoft has all but draped Bing in AI tinsel, hoping you’ll care. Each of these is a variation on the same theme: augmentation, not reinvention. Their devices remain exactly what they’ve always been—conduits for ecosystems, monetised and contained.
And herein lies the paradox: the companies best positioned to redefine the AI interface are least able to do so. They are trapped by their own success. They live inside what Kodak once called its "film fortress"—a commercial moat too deep to cross, even when the world shifts beneath their feet.
The companies best positioned to reinvent the AI interface are the least capable of doing it—because doing so would mean dismantling the empires they’ve spent decades building
Kodak, as history now records, built the first digital camera in 1975. But it didn’t ship a viable digital product until the 1990s, and by then it was too late. The fear wasn’t technological; it was existential. Digital threatened film, and film was the fortress. A similar hesitation afflicts the AI ambitions of today’s tech titans. They can see the new world emerging—but entering it risks cannibalising the old.
This is what business thinkers call the innovator’s dilemma—a term coined by Harvard professor Clayton Christensen in his seminal book The Innovator’s Dilemma (1997). Christensen argued that successful companies often fail precisely because of their success. Their focus on sustaining innovations to please existing customers and protect current revenue streams blinds them to disruptive innovations that initially seem unprofitable or niche. These disruptions, often pioneered by smaller or newer entrants, eventually redefine the market. In the case of AI hardware, legacy tech firms are locked into iterative enhancements, constrained by the very systems that once propelled them forward, while outsiders remain free to explore transformative new paths (Christensen, 1997)2.
Which is why, increasingly, the AI device we actually need—the one built from the ground up for a new kind of relationship between humans and machines—will not come from the usual suspects. It will come from outsiders.
OpenAI, despite its own scale, remains one of the few native AI players unburdened by legacy hardware. It’s not hard to imagine them producing something akin to an "AI iPhone," especially given their rumoured collaboration with ex-Apple designer Jony Ive (MacRumors, 2025)3. Humane’s AI Pin was an early, if flawed, attempt to rethink the device. The Rabbit R1 and concepts from Deutsche Telekom and startups like A Phone, A Friend show that there is a real hunger to escape the tyranny of app icons and tap-driven interfaces (Wired, 2024). Their premise is simple: let AI be the interface.
That shift—from app-centric to AI-centric, from tool to companion—could mark the first genuine hardware leap since the iPhone. Instead of swiping through static screens, users could simply speak, gesture, or glance, trusting the device to know their intent. Instead of choosing between apps, the device would dissolve the app model altogether, surfacing what’s needed when it’s needed (Accenture, 2024)4.
But that future still feels distant. The AI Pin failed because the execution couldn’t meet the ambition: its voice controls were inconsistent, its latency high, and its interface underwhelming. Yet even in its failure, it offered a signal: this space is live. A new category is trying to be born, and the gestation is messy.
What makes this moment different from earlier cycles—like the 3D TV flop or Alexa’s smart home plateau—is that AI is not a feature. It’s a new paradigm. And as Deloitte (2024)5 notes, it demands new device thinking. Legacy firms are simply not architecturally aligned to take this leap. They need the iPhone model to persist—Apple’s entire ecosystem, from App Store revenues to hardware cycles, is dependent on the continued dominance of the smartphone form factor. Amazon, meanwhile, needs Alexa to remain viable in the smart home landscape, anchoring its ecosystem and selling more devices. Google cannot afford to let AI radically shift how search operates without threatening its search advertising revenue engine. Meta, for its part, must ensure that social engagement stays inside its walled gardens—apps like Instagram and Facebook—because that’s where the monetisation happens. Each company is structurally tethered to the paradigm it built. Their margins, platforms, and partners depend on it. And that dependency, ironically, prevents true innovation.
This leaves a window open. For a new player—or a radically reorganised old one—to define the AI-native interface. To answer the most pressing design question of our time: What should a personal device look like in the age of ambient intelligence?
The AI device wars are no longer about innovation alone—they’re about intent. Will it be built for users, or built for business?
History tells us it won’t be the company defending yesterday’s model that revolutionises the next device—it will be the one willing to destroy it.
PERSPECTIVES
Where does (AI) leave our brains? Free to engage in more substantive pursuits or wither on the vine as we outsource our thinking to faceless algorithms?
Using ChatGPT is like having a frustratingly unmotivated intern who continues to disappoint, but on rare occasions accidentally proves to be useful.
—Mitchell A. Sobieski | An AI by any other name | Milwaukee Independent
SPOTLIGHT
Inside YouTube’s Weird World Of Fake Movie Trailers — And How Studios Are Secretly Cashing In On The AI-Fueled Videos
YouTube has become home to a thriving ecosystem of AI-generated fake movie trailers, blending real footage with convincingly fabricated scenes, attracting billions of views and substantial revenue for creators. Instead of enforcing copyright claims to shut these videos down, major Hollywood studios, including Warner Bros. Discovery, Sony, and Paramount, have opted to quietly claim ad revenue generated by this content, effectively monetising unauthorised uses of their intellectual property. Creators like India’s Screen Culture have professionalised this space, generating millions in revenue with increasingly sophisticated AI tools, blurring lines between fan content and official marketing, which studios seem reluctant to curb despite ethical objections from unions like SAG-AFTRA. This complex dynamic raises challenging questions about studios’ tacit endorsement of AI-driven exploitation of IP, the implications for actors’ rights, and the potential erosion of traditional film marketing authenticity. (via Deadline)
___
» Don’t miss our analysis—full breakdown below ⏷
IN-FOCUS
The AI Power Play: How ChatGPT, Gemini, Claude, and Others Are Shaping the Future of Artificial Intelligence
The article explores the rapidly evolving AI landscape, comparing major models such as ChatGPT, Gemini, Claude, DeepSeek, Copilot, and Meta AI, emphasising their diverse capabilities, limitations, and competitive dynamics. It underscores the need for balancing AI-driven innovation with responsibility regarding sustainability, ethical considerations, data privacy, and governance to ensure beneficial societal impact. (via Counterpunch)
» QUICK TAKEAWAY
The AI landscape, once seemingly monopolised by OpenAI and Microsoft, has now evolved into a dynamic, multi-actor arena where each major player—be it Google’s Gemini, Anthropic’s Claude, Meta’s open-source LLaMA, or China’s DeepSeek—brings a distinct vision of what AI should be and how it should function. This diversification is encouraging, as it disrupts a singular, monolithic narrative of AI’s future and fosters competition that could lead to more robust, ethical, and innovative models. Yet, even with this plurality, the space remains dominated by transnational tech giants, raising concerns that the future of AI is still being scripted by a handful of super-corporations. To ensure truly democratic and inclusive AI development, it’s imperative we amplify smaller, independent voices and invest in models emerging from non-profit, academic, or localised initiatives that challenge both the technological and ideological hegemony of the current titans.
‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped
Humane’s $699 Ai Pin—touted as a revolutionary AI wearable to replace smartphones—flopped upon release, receiving scathing reviews for poor battery life, overheating, and general dysfunction. Despite raising $240M and big-name backing, the device sold only 10,000 units, far below expectations. Amid internal turmoil and customer issues (including a fire risk with its charging case), Humane is now exploring a possible sale, including talks with HP, while trying to fix product flaws and attract enterprise interest to stay afloat. (via NYT)
Kodak’s Downfall Wasn’t About Technology
Kodak’s downfall, the article argues, wasn’t its failure to invent the digital camera—but its failure to embrace the new business models that digital disruption enabled. Rather than pivoting to online photo sharing and digital experiences, Kodak clung to its legacy film and printing business. Despite investing in digital technologies and even acquiring a photo-sharing platform, it used them to support old revenue streams instead of building new ones. The article argues that companies fail not because they miss the tech, but because they don’t adapt their business models to the changing landscape. (via HBR)
Inside the strange, enthusiastic world of YouTube’s fake trailer community
A passionate community of YouTubers has built a thriving subculture around fake movie trailers—carefully crafted concept teasers that often fool viewers into believing they're real. Born from fan love and creative ambition, these trailers remix footage, dialogue, and editing techniques to imagine sequels, reboots, or entirely new films. As AI tools enhance their realism and speed up production, concerns about deception and ethics are rising. Yet for most creators, it’s not about trickery or profit, but about storytelling, artistic experimentation, and connecting with audiences through shared cinematic dreams. (via Little White Lies)
HOT TAKE
Regrets: Actors who sold AI avatars stuck in Black Mirror-esque dystopia
Actors who sold their likenesses for AI avatars are now facing regret, as their digital doubles appear in scams, propaganda, and embarrassing content. Drawn in by quick cash—sometimes just $1,000—many unknowingly signed broad, irrevocable contracts. Companies like Synthesia, now a major player in the AI avatar space, claim to offer opt-outs and ethical safeguards, but can't always prevent misuse. As AI avatars become more lucrative and widespread, some actors are questioning whether the trade-off was worth it. (via Ars Technica)
» OUR HOT TAKE
We're rapidly approaching a world where your face, voice, and even personality can be extracted, replicated, and commodified without your knowledge or control—and the infrastructure to stop it simply doesn't exist yet. What began as a problem for underpaid actors trading their likenesses for short-term gain is quickly revealing itself as a systemic issue that affects everyone, not just the famous. The tools that can clone a person from a single photo or sample of speech are no longer science fiction—they’re scalable, marketable, and already being abused. Consent, once a clear boundary, becomes murky in the digital realm where contracts are opaque and tech outpaces legislation. The real crisis isn't about bad decisions; it's about the collapse of personal sovereignty in a world where digital identity can be detached from the self and weaponised without warning.
__
» Tune in to the full Hot Take for the unfiltered debate and deeper insights
FINAL THOUGHTS
You can’t reinvent the wheel if your profits depend on selling more spokes
FEATURED MEDIA
The Next Computer? Your Glasses | Shahram Izadi | TED
We’re no longer augmenting our reality, but rather augmenting our intelligence.
—Shahram Izadi
Google’s live TED demo unveiled Android XR, a new AI-powered operating system for wearables built with Samsung. Showcasing smart glasses and headsets, the team demonstrated real-time visual recognition, translation, memory, and immersive interaction—marking a bold step toward AI that sees, hears, and acts alongside you in the real world.
WHO ARE WE
___
Justin Matthews is a creative technologist and senior lecturer at AUT. His work explores futuristic interfaces, holography, AI, and augmented reality, focusing on how emerging tech transforms storytelling. A former digital strategist, he’s produced award-winning screen content and is completing a PhD on speculative interfaces and digital futures.
Nigel Horrocks is a seasoned communications and digital media professional with deep experience in NZ’s internet start-up scene. He’s led major national projects, held senior public affairs roles, edited NZ’s top-selling NetGuide magazine, and lectured in digital media. He recently aced the University of Oxford’s AI Certificate Course.
___
SPOTLIGHT ANALYSIS
This week’s Spotlight, unpacked—insights and takeaways from our team
___
Fake Trailers, Real Stakes: How AI-Fuelled YouTube Videos Are Rewiring Hollywood’s Ethics
The emergence of AI-generated fake trailers on YouTube has introduced a profound shift in how audiences encounter and interpret cinematic content. What once resided in the margins of fan culture—clever, self-aware mashups of beloved franchises—has become a commercialised and increasingly opaque media layer. The ability to use generative tools to fabricate convincing, wholly artificial scenes with actor likenesses and polished production value blurs the line between homage and deception.
At the heart of the issue is a tension between creativity and integrity. On one hand, proponents of this phenomenon regard it as the latest evolution in participatory fan culture. For years, fans have engaged with media through remixes, tributes, and speculative content. The rise of AI merely amplifies this by providing unprecedented access to tools that allow high-quality output at scale. Some argue that these creations generate buzz, build anticipation, and deepen fan engagement—even if unintentionally misleading.
But this optimistic reading overlooks several consequential risks. As the Deadline exposé details, many of these videos are not clearly marked as fake or conceptual. They mimic official marketing language, imagery, and tone. This confusion is compounded by YouTube’s recommendation systems and the increasingly realistic quality of AI-generated footage. The result is a growing population of viewers—including casual audiences and younger users—who struggle to differentiate between what is genuine and what is artificially assembled. The broader cultural consequence is the erosion of media literacy at precisely the moment it is most needed.
The monetisation dimension further muddies the waters. Hollywood studios, rather than aggressively protecting their IP or the reputations of their performers, have in some cases opted to monetise these videos quietly. By claiming ad revenue on unauthorised trailers rather than issuing copyright strikes, they gain financially while sidestepping public discourse on the ethics of their decision. This is not merely a legal manoeuvre; it signals a silent shift in the value chain—from protecting original content to exploiting synthetic derivatives.
Actors’ rights emerge as a particularly urgent frontier. Fake trailers often use AI-enhanced footage and voices to portray real performers in scenes they never acted. Yet the actors receive no compensation, and their consent is neither sought nor acknowledged. Organisations like SAG-AFTRA have voiced strong opposition to this trend, calling it a dangerous precedent that undermines both the legal protections and moral contracts actors expect from their industry. When the likeness of an actor becomes a freely tradable asset in fan-made and AI-generated works, the line between tribute and theft becomes alarmingly thin.
Perhaps the most concerning prospect raised in the analysis is the leap from trailers to full-scale reimaginings. If today’s fake trailers are impressive in their fidelity, tomorrow’s full-length fan-made features—"reconstructed" with AI-generated visuals, voices, and narratives—will be even more indistinguishable from official productions. The tools are already available, and the expertise is increasingly decentralised. What is at stake, then, is not merely a studio’s marketing pipeline, but the very concept of authorship and canonical storytelling.
There is also a question of governance. The current ad hoc responses—from selective monetisation to inconsistent copyright enforcement—fail to provide the kind of ethical clarity that this new landscape demands. Studios may claim revenue, creators may disclaim intent in the video descriptions, but none of this adds up to a coherent policy. In the absence of transparency, what thrives is a culture of plausible deniability—one in which misdirection is profitable, authenticity is malleable, and creative rights are optional.
In conclusion, AI-generated trailers present a paradox. They showcase technical ingenuity and the democratisation of creative tools, yet they also expose the vulnerabilities of a content economy built on trust, authorship, and professional recognition. They thrill and deceive in equal measure. And as this practice becomes more mainstream, the film industry’s passive complicity threatens to normalise a future where synthetic entertainment masquerades as the real thing. If the fake trailer is now a serious player in the entertainment economy, the responsibility lies with creators, platforms, and studios alike to ask: at what cost?
MacRumors. (2025, April 7). Jony Ive and OpenAI Reportedly Collaborating on AI Device Without a Screen. https://www.macrumors.com/2025/04/07/jony-ive-ai-phone-without-a-screen/
Christensen, C. M. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Harvard Business School Press.
MacRumors. (2025, April 7). Jony Ive and OpenAI Reportedly Collaborating on AI Device Without a Screen. https://www.macrumors.com/2025/04/07/jony-ive-ai-phone-without-a-screen/
Accenture. (2024). Innovation that sells: Turning AI devices from hype to habit. https://www.accenture.com/ph-en/insights/high-tech/innovation-sells-turning-ai-devices-hype-habit
Deloitte. (2024). AI and the evolving consumer device ecosystem. https://deloitte.wsj.com/cio/ai-and-the-evolving-consumer-device-ecosystem-65f8089c