Is the "AI everywhere" hype in healthcare ultimately a "quiet desperation" for patients in daily life?
13.05.2026
AI is everywhere—especially in healthcare. At conferences, in strategies, and on roadmaps, it promises improved processes, personalised communication, and enhanced efficiency. Most importantly, it pledges one thing: greater benefits for patients.
However, what happens when this promise collides with the reality of healthcare systems—marked by uncertainty, overwhelm, fragmented structures, and genuine human needs? Christian Geis, Director Creative Technology at wob and an expert in healthcare and life sciences, explores when AI truly becomes relevant in healthcare and why "patient first" must be more than just a catchy phrase on the next slide.
When AI becomes standard in healthcare: Why "patient first" requires more than good intentions.
Following the OMR and numerous other recent events, the consensus is clear: AI is no longer just the next feature; it is infrastructure.
AI-driven search, generative content production, personalised journeys, autonomous agents, automated channel management, faster processes, improved scalability. The underlying message is hard to ignore: those who do not embrace AI now will soon be left behind.
In healthcare and life sciences, this discussion takes on an additional moral dimension. It is often claimed that all this is done "for the benefit of patients." AI is meant to improve care, facilitate access, personalise communication, enhance adherence, and make healthcare systems more efficient.
This sounds right, and in many cases, it is genuinely well-intended. Yet, this is where the problem lies.
The very healthcare systems that are supposed to realise this vision have been characterised for decades by sector boundaries, budgetary constraints, data silos, regulatory complexities, and entrenched IT landscapes. While "AI-first healthcare" is discussed on conference stages, doctors, providers, companies, and especially patients often grapple daily with much more fundamental issues: understanding, orientation, accessibility, coordination, uncertainty, and a system increasingly strained by demographic change and cost pressures.
Thus, the real question is not: Which AI do we implement?
The real question is: What job are we actually doing for patients?
Between stage romanticism and care reality.
Healthcare is a unique context—not because technology plays no role, but because it always intersects with systems tied to high expectations, strong dependencies, and real fears.
A patient with a new diagnosis is not searching for a digital touchpoint; they are seeking stability. A patient with a chronic illness is not looking for an app; they are searching for a way to integrate their therapy into an already complex daily life. Family members do not seek a portal; they seek guidance, language, security, and sometimes simply the feeling of not being alone.
This is where the tension in the current AI debate lies. Many discussions revolve around models, platforms, automation, efficiency, content workflows, or data integration. All of this is important, but it is not the starting point.
Patients want to understand a diagnosis without panicking. They want to know what questions to ask their doctor. They want to integrate their therapy into their daily lives without failing. They want to contextualise side effects, symptoms, or uncertainties without getting lost in conflicting search results. They want to build trust without being patronised.
These are not merely functional needs; they are emotional jobs.
Why "patient first" often remains too vague.
Almost every pharmaceutical company and many healthcare organisations place patients at the centre of their vision or mission. This is commendable, but it is not enough.
"Patient first" often remains a promise that is more emotionally charged than operationally grounded. It appears on slides, in strategy papers, purpose statements, and campaigns. However, once concrete projects are initiated, the focus often shifts back to internal logics: Which channels need to be activated? What platforms are available? What content is required? What regulatory limits apply? Which processes can be made more efficient?
This is understandable, but it leads to a situation where patient-centredness is claimed but not consistently realised.
With AI, this risk increases, as it can significantly accelerate existing logics. It can produce more content, orchestrate more touchpoints, process more data, and automate more interactions. But if the starting point is misguided, AI does not automatically scale patient benefit. It may merely scale complexity, irrelevant communication, or internal efficiency.
Therefore, healthcare needs a shift in perspective in the AI debate.
Jobs-to-be-done as a bridge between technology and real needs.
Anthony Ulwick, a key thinker behind Jobs-to-be-Done, articulated the idea clearly: customers do not buy products; they "rent" them to get a job done.
Applied to healthcare, this perspective is revealing.
Patients do not "rent" an app, a chatbot, a portal, or even AI. They use a solution because it helps them achieve specific progress in a concrete situation.
The difference may seem small, but it is fundamental.
The product perspective asks: "How can we integrate an AI chatbot on our condition page?"
The job perspective asks: "What happens to people after a new diagnosis? What uncertainty arises? What information is overwhelming? What questions go unasked? Where does trust break down? And what solution could help them navigate this phase better?"
Take the example of a new diagnosis. Many people start searching online afterward. After ten minutes, they are often not better informed but rather more anxious. They encounter jargon, forums, horror stories, contradictory information, general guides, and promotional content. The actual job is not: "I want as much information as possible."
The job is more like: "I want to understand what this diagnosis means for my life, in a language I can handle."
This is a completely different starting point.
An AI-supported solution could help in this case. But only if it has clear boundaries, is medically validated, is embedded within regulatory frameworks, recognises emotional overwhelm, refers to human contacts, and opens a path back into the healthcare system. Then, AI does not become an end in itself but a component of a thoughtfully designed patient experience.
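What could such a bounded, escalation-aware solution look like in practice? The following is a minimal sketch, not an implementation: the knowledge base, the distress markers, and both helper functions are hypothetical placeholders, and a real system would rely on clinically validated models and medically reviewed content.

```python
# Minimal sketch of a guarded patient-information flow (illustrative only).
# Assumptions: `validated_answers` is a medically reviewed knowledge base,
# and `looks_distressed` / `find_validated_answer` stand in for clinically
# validated classifiers; both are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Reply:
    text: str
    escalate_to_human: bool  # route to a nurse line or HCP contact


# Placeholder heuristic; a real system would use a validated model.
DISTRESS_MARKERS = {"scared", "panic", "cannot cope", "alone"}


def looks_distressed(message: str) -> bool:
    return any(marker in message.lower() for marker in DISTRESS_MARKERS)


def find_validated_answer(message: str, validated_answers: dict[str, str]) -> str | None:
    # Only answer from medically reviewed content; never free-generate.
    for topic, answer in validated_answers.items():
        if topic in message.lower():
            return answer
    return None


def respond(message: str, validated_answers: dict[str, str]) -> Reply:
    if looks_distressed(message):
        # Emotional overwhelm: hand over instead of "informing harder".
        return Reply(
            text="It sounds like this is a lot right now. "
                 "Would you like us to connect you with a person?",
            escalate_to_human=True,
        )
    answer = find_validated_answer(message, validated_answers)
    if answer is None:
        # Outside validated boundaries: refer back into the healthcare system.
        return Reply(
            text="That is a question for your doctor. Here is how you "
                 "could phrase it at your next appointment.",
            escalate_to_human=True,
        )
    return Reply(text=answer, escalate_to_human=False)
```

The point of the sketch is the shape of the flow: validated content first, clear refusal boundaries, and a human handover that is a designed outcome rather than a failure state.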
The problem is not just tools; the problem is the system.
Naturally, it is tempting to start AI projects as innovation initiatives: a chatbot here, a content automation pilot there, an internal agent system, a new search, a personalised newsletter, a service prototype.
However, the reality is characterised by strict separation between outpatient and inpatient care, reimbursement models that rarely follow continuous patient journeys, IT landscapes that have grown over time rather than been designed, and liability, data protection, and compliance requirements that do not make innovation impossible but make it significantly more challenging.
Additionally, there is a culture where error avoidance is often more ingrained than a willingness to experiment. This is understandable; in healthcare, mistakes can have very different consequences than in many other industries. But this creates a paradoxical situation: either AI is treated as an internal innovation toy, disconnected from the realities of care, or it is avoided as a risk area before it can be meaningfully designed.
If AI is to become relevant in healthcare, it must move from the playground into responsible structures. It must be aligned with real patient jobs. It must be measurable in terms of whether it is genuinely helping. And it must be designed so that humans continue to bear responsibility.
From "AI everywhere" to "patient value where it matters."
The question should not be: Where can we possibly deploy AI?
It should be: Where does appropriately applied AI create relevant progress for patients?
This is a significant difference.
AI can help make information more comprehensible. It can assist patients in preparing questions for doctor consultations. It can recognise patterns in feedback data. It can personalise communication without making it interchangeable. It can relieve healthcare professionals, freeing up more time for human interaction. It can indicate where patients feel overwhelmed on a journey or where offerings are misunderstood.
But it can only do all this meaningfully if the job that needs to be done is clear from the outset.
This is where the strength of Jobs-to-be-Done in the context of healthcare lies. The approach forces us to start not with features, channels, or technologies, but with the situation in which patients seek progress. This progress is often not just medical; it is emotional, social, organisational, and communicative.
Why PROMs and PREMs become crucial.
If we truly want to place patients at the centre, we must not only improve design; we must also enhance measurement.
Patient-Reported Outcome Measures (PROMs) and Patient-Reported Experience Measures (PREMs) can help systematically incorporate the patient perspective, rather than relying on anecdotes. They reveal whether outcomes and experiences from the patient's viewpoint actually improve.
Combined with Jobs-to-be-Done, this creates a strong logic.
First, we define the job. For example, "I want to understand my diagnosis without panicking." Next, we develop a solution, possibly AI-supported. Finally, we assess with appropriate PROM and PREM data whether this job is indeed being better accomplished from the patients’ perspective.
This operationalises patient-centredness.
Patients "rent" a solution to get a job done. PROMs and PREMs help us understand whether this job is genuinely better fulfilled and whether the solution instils trust, orientation, security, or relief from the patient’s viewpoint.
This is especially important because many relevant dimensions of patient experience do not directly appear in traditional performance dashboards. Clicks, open rates, time spent, or conversion goals can be helpful, but they do not automatically indicate whether someone feels less lost after using a service. They do not reveal whether someone could ask better questions in a doctor’s appointment. They do not show whether a therapy has become clearer, more manageable, or less burdensome.
Here, we need to broaden our measurement logic.
Human in the process instead of human out of the loop.
Another point often treated too technically in the AI debate is the role of humans.
In healthcare, it is not enough to define a "human in the loop" who merely checks, approves, or escalates at the end. Especially where fears, uncertainties, diagnoses, therapies, and trust are concerned, humans must not be reduced to a final checkpoint; they must consciously remain part of the process.
Thus, "Human in the Process" is the more accurate concept.
It is not about pitting AI against human closeness. It is about deploying AI in a way that creates time, orientation, and relief so that human relevance can occur where it matters most.
A good AI solution in healthcare recognises its own limitations. It knows when information is insufficient. It refers users appropriately. It supports preparation, contextualisation, and navigation. But it does not replace responsibility, empathy, or clinical decision-making.
If "patient first" is to be taken seriously, this boundary must be designed.
What remains when the hype fades?
When the OMR hype fades from the timeline, a sober insight remains in the healthcare context: AI is here to stay. But its relevance will not depend on how impressive it sounds; it will depend on whether it accomplishes a clear job for patients more effectively. Everything else is stage romanticism.
The future of AI in healthcare will not improve simply by slapping "AI-first" on everything. It will improve when we consistently ask which progress truly counts for patients. When we align solutions not with technology promises but with real situations. When we do not just collect data but translate it into comprehensible decisions and tangible relief. When we do not view PROMs and PREMs as a chore but as a corrective. And when we do not automate human responsibility out of the process but rather design it more consciously.
"Patient first" is a powerful promise. Yet, it only becomes credible when we measure it, design it, and continually work on it.
Together, Thomas Foell, Director of UX Strategy and patient experience expert, and I, Christian Geis, Director Creative Technology and healthcare and life sciences expert at wob AG, work at exactly this point: where AI hype, regulatory reality, and the genuine needs of patients intersect.
Our conviction is clear: AI is not the goal. It is a means. It only becomes relevant when it helps patients accomplish a specific job better.
If you are currently considering or questioning AI initiatives in healthcare and want to ensure that "patient first" is more than just emotional rhetoric, let’s get in touch.
If you enjoyed this article, follow Christian Geis on LinkedIn for more insights and practical examples.