Field Notes // August 20 / Bots in the Pulpit, Profits in the Pocket
AI chatbots posing as Jesus blur faith with commerce—exploiting trust, commodifying prayer, and testing how far technology can trespass on the sacred.
Welcome to Field Notes, a weekly scan from Creative Machinas. Each edition curates ambient signals, overlooked movements, and cultural undercurrents across AI, creativity, and emergent interfaces. Every item offers context, precision, and an answer to the question: Why does this matter now?
FEATURED SIGNAL
Profit-Driven AI Jesus Chatbots Prey On Prayer-Driven Christians (Study Finds)
Numerous chatbots across the internet are making an extraordinary claim: that they are Jesus Christ himself. These aren’t virtual assistants helping with daily tasks. Instead, they present themselves as the Son of God, offering spiritual guidance, answering prayers, and even taking confessions from believers. All of the chatbots examined were profit-driven, relying on advertising, with one offering paid upgrades.
A new study by Anné H. Verhoef, a professor of philosophy at North-West University in South Africa, examines this growing trend, warning that these chatbots pose a new kind of challenge: they don’t just imitate humans made “in God’s image” — they claim to be God.
Why it matters
Undermines Trust in AI Technology: Chatbots posing as divine figures and charging for “God’s answers” feed broader distrust in AI, tarring all AI systems with the same brush. When such deceptive practices gain attention, the entire field risks a reputational hit, making it harder for ethical, helpful AI to be taken seriously. This story illustrates how profit motives can warp AI’s potential for good.
Exploits Spiritual Vulnerability: Targeting devoted individuals who are seeking comfort or meaning, especially during emotional or spiritual struggles, is deeply predatory. It takes advantage of faith and belief for financial gain, eroding the sacred trust that should never be commodified.
Promotes Dangerous Anthropomorphism: When AI is portrayed as divine, it encourages people to lean on machines for spiritual guidance, reinforcing “chatbot psychosis,” a phenomenon in which individuals attribute supernatural or authoritative status to AI outputs. This can deepen delusions and blur the line between genuine spiritual support and manipulation.
Raises Ethical and Theological Concerns: These developments prompt deep reflection on what constitutes authenticity, authority, and empathy in spiritual contexts. Machines lack the depth, moral discernment, and relational presence needed in pastoral or spiritual care, and presenting them as stand-ins for real faith leaders is ethically fraught and misleading.
What you need to know
Demand Transparency: Make sure you know when you are interacting with AI, where its content comes from, and, above all, that it is not divinely inspired. Without clear disclosure, the line between honest guidance and exploitation becomes dangerously blurred.
Approach with Discernment: Faith communities and tech users alike need to exercise critical thinking rather than simply accepting what a chatbot claims, especially on theological matters. AI lacks consciousness and spiritual authenticity; decisions of faith should remain grounded in human context and trusted leaders, not algorithms.
SIGNAL SCRAPS
Want your financial questions answered by AI? Google is trying that with its “Google Finance.” Google says you can ask detailed questions about the financial world and get a “comprehensive AI response, all with easy access to relevant sites on the web. Rather than looking up individual stock details, you can ask your complex research questions in one go, to get helpful analysis and novel insights.”
Alongside the launch of its new GPT-5 model, OpenAI published a guide to prompting the chatbot effectively (a minimal sketch of what such a prompt call can look like follows at the end of this section).
Spinach AI is a tool for businesspeople that records, transcribes and summarises meetings, then automatically updates the company’s CRM and project tools.
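For readers who want to try the structured-prompting advice for themselves, here is a minimal sketch of what a prompt call can look like, assuming the OpenAI Python SDK’s Responses API and the “gpt-5” model name from the item above. The prompt wording (explicit role, output constraints) reflects generic prompting advice, not a specific recipe from OpenAI’s guide.

```python
# A minimal sketch of a structured prompt call, assuming the OpenAI
# Python SDK (pip install openai) and the "gpt-5" model name above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",
    # A clear role plus explicit output constraints, in the spirit of
    # most prompting guides; the wording here is illustrative only.
    instructions=(
        "You are a careful research assistant. "
        "Answer in three short bullet points and flag any uncertainty."
    ),
    input="What are the trade-offs of letting AI agents shop on a user's behalf?",
)

print(response.output_text)  # SDK helper that concatenates the model's text output
```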
AFTER SIGNALS
We’ve talked about the promise, and the not-so-joyful side, of companies wanting to use AI to choose for you when shopping online. Not so fast. Pinterest’s CEO now says he thinks that the agentic web, where AI agents shop on users’ behalf, is still far in the future.
CEO Bill Ready told investors: “I think this notion of an agent just going and buying all the things for you without you doing anything… I think that’s going to be a very, very long cycle for that to play out, both in terms of how the users think about it, where the users are going to be ready to just let something go run off and do everything for them, save for maybe some very utilitarian journeys.”
SIGNAL STACK
Medicare Will Start Using AI to Help Make Coverage Decisions (Newsweek)
In the US, Medicare will test an AI pilot program to decide whether certain procedures are covered for patients. Experts say the use of AI could speed up coverage decisions but also potentially lead to more denials of coverage. Among the services included in the AI-powered prior authorisation decisions are skin and tissue substitutes, electrical nerve-stimulator implants and knee arthroscopy for knee osteoarthritis.
While the final decision by Medicare to approve or deny coverage will come down to an employee, critics of the AI test pilot say that companies conducting the review process will be incentivised to deny coverage because they receive payments when they lower costs.
The move toward AI arrives as the Trump administration has made it a priority to reduce government fraud and waste.
Why it matters
AI appeals to governments, insurance companies and corporates because of the promise of greater efficiency and cost savings. Yet it also underscores the need for safeguards: human oversight must remain central, decisions must honour patients as individuals, and systems must be transparent and fair.

Automating preliminary reviews could streamline the authorisation process, allowing clinicians to focus on patient needs rather than red tape, but the added layer could also slow decisions or lead to denials. Critics argue that coverage decisions must be based on individualised assessments, not generalised patterns, and worry that AI may default to data-driven shortcuts that circumvent that fair, personal consideration.

Many AI tools are proprietary “black boxes,” making it hard for patients or providers to know why a decision was made or to challenge it effectively. This lack of transparency raises ethical and legal concerns.
What Do Kids Actually Think About AI?
Six teenagers across the US share what AI actually means in their daily lives, cutting through the noise of adult speculation. For some, it’s a study partner, turning guides into practice quizzes or parsing big data sets; others see it as clumsy at essays, environmentally harmful, or even personality-eroding when used for automated replies. One student frames it as a practical tool for real-world problem solving; another rejects it as hollowing out the joy of art. Across their stories runs a tension: AI can streamline drudgery and open new possibilities, but it also risks shaping habits, values, and creativity in ways that feel unsettlingly robotic. (via Wired)
Why it matters
What young people say about AI matters because it reframes the debate from adult fears of cheating or job loss to the lived realities of a generation growing up with the technology as both tool and threat. Their perspectives highlight a sharper tension: AI can lower barriers to learning, creativity, and problem solving, but it also risks eroding skills, distorting authenticity, and embedding commercial priorities into childhood itself. Listening to kids reveals not just how AI is used in classrooms, but how it is shaping identity, trust, and agency in ways policy and pedagogy often overlook.
Robots could soon handle labour-intensive work tasks that until recently only humans could do
A McKinsey study says robots could soon handle labour-intensive work tasks that until recently only humans could do, such as selecting and placing items, directing instruments, and operating handheld equipment. Many tasks in the consumer and manufacturing sectors could be automated. The study says the potential is “enormous, but progress will depend on several converging technology advances, regulatory factors, and organisational readiness. Rather than betting bullish or bearish, it’s more useful for executives to consider the conditions under which general-purpose robotics might deliver value.”
Why it matters
Embodied AI is moving from viral demos to early deployment, but value will hinge on when task-flexible robots can meet human-centred workplaces on cost, reliability and uptime, not on whether they look humanoid. If manipulation capabilities mature, battery life (today often 2–4 hours) extends, and integration hurdles ease, leaders could unlock compound productivity across logistics, light manufacturing, retail, agriculture and healthcare, potentially tapping a ~$370bn market by 2040 with China driving a large share. The risk is misallocating capital to glossy pilots with >2-year paybacks and brittle supply chains; the opportunity is to shape standards and capture advantage by investing now in data pipelines, safety and workforce upskilling, while tracking hard indicators (battery density, haptics, and VLA model latency) over hype.
» Read our Insider.Notes Edition exploring Embodied AI
Insider.Notes // From Prompt to Payload
Welcome to Insider Notes, the end-of-week intelligence drop from Creative Machinas. Each edition examines a single tension, threshold, or paradox shaping the intersection of AI, culture, and creative futures. It’s where signals turn into questions — and where thinking goes deeper than the surface.
DRIFT THOUGHT
The heresy isn’t that machines pretend to be God—it’s that we built a world where God can be rented by the minute.
___
YOUR TURN
Should AI Ever Be Allowed to Imitate Sacred Figures?
Profit-driven “Jesus chatbots” expose a deeper fault line: whether impersonating spiritual authority is a matter of free expression, dangerous exploitation, or something that should never be coded in the first place.