The founder of Fountech explains the role that anticipatory AI could play in achieving better patient outcomes in hospital.
While traditional medicine excels at reacting to symptoms, Nik Kairinos, chief executive and founder of Fountech, believes the future of the hospital ward lies in anticipation. His platform, Anticip8, is designed not as a replacement for clinical expertise but as a “whisper in the ear” of the practitioner. Whether it is identifying a patient likely to skip life-saving heart medication or flagging early warning signs of staff burnout, the goal is to provide a window of time for intervention before a crisis occurs.
Here, he talks to Healthcare Today about how ephemeral processing ensures patient privacy, the role of artificial intelligence (AI) in reducing clinical liability, and his vision for 2030 – where AI steps off the screen and into the physical world.
You’ve said that Anticip8 focuses on what people will do rather than what they say. In a clinical setting, what does this mean, and how does your platform bridge it to improve patient outcomes?
Given enough data, our model is able to anticipate what humans will do. And this isn’t limited to physical action – it covers everything from the steps a person takes to what they might say, what they might think, their desires, the emotional spectrum they might be on, and so on.
Take patient adherence, for one. If you are able to predict that with any degree of certainty – and bear in mind that AI is augmenting doctors, not replacing them – then you have an advantage. If the system provides insights, along with an explanation, that Patient X has a high probability of not taking their heart medication, knowing that in advance gives you time to prevent a negative outcome.
It comes down to input density. Depending on the setting in which an anticipatory AI is installed, it benefits from multimodal input, capturing everything from ambient audio and emails to written communications and phone call recordings.
Having that level of input density, paired with almost infinite contextual knowledge, allows the system to connect dots that are simply impossible to link in such a high-pressure environment.
Is your accuracy high enough for use in a medical setting?
It is important to remember that we are not an application; we are an AI system with an Application Programming Interface (API) that plugs into other software. The more context you provide, the higher the accuracy becomes. If I were to ask the model what a person – let’s say, John – is going to do tomorrow without providing any other data, it might say he will wake up, eat and breathe. That is 100% accurate, but it isn’t useful. The more history and information you provide, the more specific and relevant those predictions become.
This leads to the question of whether an accuracy of, say, 85% is good enough for an agentic system. In that context, I would say absolutely not. If the system had the agency to intervene autonomously or interact with a patient, 85% would not be an acceptable standard in a healthcare setting. However, because it acts as a “whisper in the ear” of a doctor or a nurse, it is more than sufficient.
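To make the point about context concrete, here is a minimal sketch of what such an integration might look like. Everything below – the endpoint, client function and field names – is invented for illustration and is not Anticip8’s published API.

```python
# Hypothetical sketch only: the endpoint and payload fields are invented,
# not Anticip8's actual API.
import requests

API_URL = "https://api.example.com/v1/predict"  # placeholder endpoint


def predict_next_action(subject_id: str, context: dict, api_key: str) -> dict:
    """Ask the model what a subject is likely to do next.

    The richer the context payload (history, prescriptions, prior
    adherence), the more specific the prediction tends to be.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"subject_id": subject_id, "context": context},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


# With no context, the prediction is generic: "wake up, eat, breathe".
generic = predict_next_action("patient-123", context={}, api_key="...")

# With adherence history attached, it becomes actionable, e.g.
# {"prediction": "likely to skip evening heart medication", "probability": 0.85}
specific = predict_next_action(
    "patient-123",
    context={"prescriptions": ["bisoprolol"], "missed_doses_last_30d": 6},
    api_key="...",
)
```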
“There have been numerous cases where AI has been trained to self-identify, report and correct its own biases.”
Mistakes have consequences in hospitals. How do you maintain that level of reliability across diverse patient populations?
There have been many cases where bias has been an issue, and there are two parts to how we address that.
First, because we are a model, you can provide as much or as little information as required. Unless it is specifically relevant – for instance, a matter of gynaecology – it may not be necessary to provide information relating to gender or sex. Similarly, if race does not play a role, it can be excluded. Of course, there are certain diseases where race is a factor, such as sickle cell anaemia, but you have the option to tell the system to look at every factor except for that one.
Second, our model uses reinforcement learning to anticipate outcomes. It is constantly looking at the actual results on the ground and learning from the reality of the situation. Because we function as an API, you have the choice of how to integrate the model into your own systems; you can explicitly instruct your software not to look at factors that could lead to bias.
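As a sketch of that integration choice, the snippet below shows a caller-side guardrail that strips sensitive attributes from a record unless they are explicitly marked as clinically relevant. The field names and helper function are invented for illustration.

```python
# Hypothetical caller-side guardrail: sensitive attributes never reach the
# model unless explicitly whitelisted for a clinically relevant reason.
SENSITIVE_FIELDS = {"race", "ethnicity", "gender", "religion"}


def build_payload(record: dict, clinically_relevant=frozenset()) -> dict:
    """Drop sensitive attributes unless marked clinically relevant.

    For example, pass clinically_relevant={"race"} when screening for
    sickle cell anaemia, where ancestry is a genuine risk factor.
    """
    return {
        key: value
        for key, value in record.items()
        if key not in SENSITIVE_FIELDS or key in clinically_relevant
    }


record = {"age": 64, "gender": "F", "race": "...", "missed_doses_last_30d": 6}

payload = build_payload(record)                   # gender and race excluded
sickle_payload = build_payload(record, {"race"})  # race retained deliberately
```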
In addition to those controls, we do a significant amount of work around identifying and eliminating bias. There have been numerous cases where AI has been trained to self-identify, report and correct its own biases.
Many hospitals have legacy IT systems. How do you ensure patient data privacy is baked into your system?
While we are not able to fix external legacy systems, hospitals can integrate our technology into them. That said, we use a method called ephemeral processing. Every model takes an input and produces an output; in our case, the input is processed within milliseconds, the output is generated, and then – poof – it disappears. There is no storage at all and no persistence of that data beyond those few milliseconds. From our perspective, this makes us fully GDPR-compliant.
In addition to this, we are looking at future safeguards. While it does not exist in the current version, our next iteration will automatically reject personally identifiable information. If you were to send an actual name, the system would reject it and request a replacement.
As long as there is a unique identifier, the model does not need to know who the person is, where they live, or any other identifiable details. We are working towards a standard that forces only non-personally identifiable data into the model.
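A minimal sketch of what those two safeguards might look like at the integration boundary – ephemeral, in-memory handling plus rejection of identifiable fields. The validation rule, field names and scoring stub are all invented for illustration.

```python
# Hypothetical sketch of the two safeguards described above: refuse direct
# identifiers, and process in memory with no persistence.
import re

# Illustrative pattern only; a real system would need a far stricter policy.
PII_PATTERN = re.compile(r"(name|address|dob|email|phone)", re.IGNORECASE)


def validate_payload(payload: dict) -> None:
    """Reject identifiable fields; require a pseudonymous subject ID."""
    if "subject_id" not in payload:
        raise ValueError("A unique, non-identifying subject_id is required.")
    offending = [key for key in payload if PII_PATTERN.search(key)]
    if offending:
        raise ValueError(f"PII fields rejected, use pseudonyms instead: {offending}")


def score(payload: dict) -> float:
    """Stand-in for the model call; a real system would invoke the API here."""
    return min(1.0, payload.get("missed_doses_last_30d", 0) / 10)


def handle_request(payload: dict) -> dict:
    """Ephemeral processing: score in memory, return, persist nothing."""
    validate_payload(payload)
    # No logging and no database write: the input exists only for this call.
    return {"subject_id": payload["subject_id"], "risk": score(payload)}
```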
“We always tell healthcare practices and doctors that this system cannot replace them, nor can they delegate their responsibilities to it.”
Every healthcare system in the world is struggling with burnout. Since Anticip8 can predict emotional responses, how can this technology be used as an internal early warning system to flag when a surgical team or nursing staff is reaching breaking point?
It starts by ensuring you are gathering sufficient data and feeding it into the system. This might involve nurses or staff checking in via a chat interface, filling in a form, or – depending on permissions and GDPR compliance – using cameras or microphones within the hospital. The more input the model has, the better it is at predicting burnout.
It is important to remember that the model isn’t just looking at cases one at a time. Because it has seen 100,000 instances of burnout, it can identify signs that might seem obscure or insufficiently predictive to a human observer. It can flag an early warning sign based on dense, multimodal input – perhaps it is a pattern of arriving ten minutes late, or a shift in patient satisfaction scores.
Furthermore, the system can run scenario analyses. It can ask, “If we take this specific action, what happens to the probability of burnout?” One approach might reduce the probability to 40%, while another might be more effective. Based on these scenarios, the AI can then recommend a specific path of action tailored to particular individuals or groups.
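A toy sketch of that scenario loop, re-scoring the same team under each candidate action and recommending the lowest-risk one. The intervention names, relief numbers and scoring stub are invented; a real system would call the prediction model for each scenario.

```python
# Toy scenario analysis: the numbers and intervention names are invented.
def burnout_probability(team_state: dict) -> float:
    """Stand-in for the model; a real system would call the prediction API."""
    return max(0.0, 0.65 - team_state.get("relief", 0.0))


INTERVENTIONS = {
    "no change": {},
    "extra night-shift cover": {"relief": 0.25},  # 0.65 -> 0.40, as in the example
    "rota rebalancing": {"relief": 0.15},
}


def recommend(team_state: dict) -> str:
    """Re-score the team under each intervention; pick the lowest risk."""
    scored = {
        name: burnout_probability({**team_state, **change})
        for name, change in INTERVENTIONS.items()
    }
    return min(scored, key=scored.get)


print(recommend({"avg_late_minutes": 10, "satisfaction_trend": -0.1}))
# -> "extra night-shift cover"
```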
A significant point of doctor pushback comes from liability fears. How are conversations developing there?
There are several key points that come to mind here. Firstly, our model possesses what we call explainability. We like to think of ourselves as a glass box rather than a black box. We built this model so that it can be interrogated. If it predicts that a patient might not take their medication, you can ask why, and it will provide the specific data points. This allows the clinician to make a more informed decision.
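As a sketch of what “interrogating” a prediction might look like from the clinician-facing software – again, the endpoint and response shape below are hypothetical, not a documented interface:

```python
# Hypothetical "glass box" interrogation: after a prediction, ask the model
# which data points drove it. Endpoint and fields are invented for illustration.
import requests

BASE = "https://api.example.com/v1"  # placeholder


def explain(prediction_id: str, api_key: str) -> list:
    """Return the data points behind a prediction, most influential first."""
    response = requests.get(
        f"{BASE}/predictions/{prediction_id}/explanation",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["factors"]


# A clinician-facing surface might render something like:
# [{"factor": "missed_doses_last_30d", "value": 6, "weight": 0.42},
#  {"factor": "no_pharmacy_pickup_logged", "value": True, "weight": 0.31}]
```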
We always tell healthcare practices and doctors that this system cannot replace them, nor can they delegate their responsibilities to it. It is an additional, informed and timely data point that they must then weigh, using their judgment and decades of experience to make the final call. If anything, this reduces liability; if the system helps you anticipate and prevent negative outcomes, fewer incidents occur, which reduces overall risk.
Beyond the technology itself, a whole industry is emerging to manage this. Insurance companies are now insuring AI risk, and regulations like the EU AI Act – along with various ISO standards and EU directives – require real-time monitoring and an audit trail. It is becoming something of a perfect storm: regulation is forcing standards, and insurers are stepping in.
Insurers are not just providing a safety net; they are investing significant time into assessing how risky a particular model is. They will not insure you unless you can prove you meet a rigorous checklist covering GDPR compliance, privacy and bias. This framework is what will keep us within the necessary guardrails.
You told the BBC that AI could still be either a saviour or a scourge to humanity. What is your vision for a hospital in 2030, where AI is an invisible but omnipresent layer of support?
Things are moving incredibly fast, but I predict that within five years, we will begin seeing more three-dimensional AI, which is essentially the physical manifestation of the technology through robotics. It will no longer be something confined to a screen; it will jump out into the real world. Whether that takes the form of anthropomorphic robots walking the wards like nurses, robotic arms attached to beds to assist patients, or additional systems in surgery and labs, we are going to see a great deal more of it.
Anyone in the healthcare space who is not augmented by AI – using it as an input rather than a crutch or a replacement – will be left behind.