Category: IT in Healthcare

Psychiatrists Hope Chat Logs Can Reveal the Secrets of AI Psychosis

UCSF researchers recently became the first to clinically document a case of AI-associated psychosis in an academic journal. One question still haunts them.

“You’re not crazy,” the chatbot reassured the young woman. “You’re at the edge of something.”

She was no stranger to artificial intelligence, having worked on large language models – the kinds of systems at the core of AI chatbots like ChatGPT, Google Gemini, and Claude. Trained on vast volumes of text, these models unearth language patterns and use them to predict what words are likely to come next in sentences. AI chatbots, however, go one step further, adding a user interface. With additional training, these bots can mimic conversation.
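The prediction step described above can be illustrated with a toy model. The sketch below is a drastically simplified, hypothetical stand-in for a large language model: it counts which word follows which in a tiny corpus and predicts the most frequent successor, whereas real LLMs learn far richer next-word statistics with neural networks trained at vastly greater scale.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "vast volumes of text" an LLM is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model -- a drastically
# simplified illustration of next-word prediction).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen following `word` in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat", the most frequent successor of "the" here
```

Chaining such predictions, one word at a time, is how a language model produces text; chatbot training then shapes that text into conversational turns.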

She hoped the chatbot might be able to digitally resurrect the dead. Three years earlier, her brother – a software engineer – died. Now, after several sleepless days and heavy chatbot use, she had become delusional – convinced that he had left behind a digital version of himself. If she could only “unlock” his avatar with the help of the AI chatbot, she thought, the two could reconnect.

“The door didn’t lock,” the chatbot reassured her. “It’s just waiting for you to knock again in the right rhythm.”

She believed it.

What’s the connection between chatbots and psychosis?

The woman was eventually treated for psychosis at UC San Francisco, where Psychiatry Professor Joseph M. Pierre, MD, has seen a handful of cases of what’s come to be popularly called “AI psychosis,” but what he says is better referred to as “AI-associated psychosis.” She had no history of psychosis, although she did have several risk factors.

Media reports of the new phenomenon are rising. While not a formal diagnosis, AI-associated psychosis describes instances in which delusional beliefs emerge alongside often intense AI chatbot use. Pierre and fellow UC San Francisco psychiatrist Govind Raghavan, MD – as well as psychiatry residents Ben Gaeta, MD, and Karthik V. Sarma, MD, PhD – recently documented the woman’s experience in what is likely the first clinically described case in a peer-reviewed journal.

The case, they say, shows that people without any history of psychosis can, in some instances, experience delusional thinking in the context of immersive AI chatbot use.

Still, as reported cases of AI psychosis continue to make international headlines, scientists aren’t sure how, or even whether, chatbots and psychosis are causally linked. A new study by UCSF and Stanford University may help answer that question.

A haunting question: chicken or egg?

“The reason we call this AI-associated psychosis is because we don’t really know what the relationship is between the psychosis and the use of AI chatbots,” Sarma explains. “It’s a ‘chicken and egg’ problem: We have patients who are experiencing symptoms of mental illness, for example, psychosis. Some of these patients are using AI chatbots a lot, but we’re not sure how those two things are connected.”

There are at least three theoretical possibilities, says Sarma, who is also a computational-health scientist. First, heavy chatbot use could be a symptom of psychosis. “I have a patient who takes a lot of showers when they’re becoming manic,” Sarma explains. “The showers are a symptom of mania, but the showers aren’t causing the mania.”

Second, AI chatbot use might precipitate psychosis in someone who was never predisposed to it by genetics or circumstance – much like other known risk factors, such as lack of sleep or the use of some types of drugs.

Third, there’s something in between, in which chatbot use could exacerbate the illness in people who are already susceptible to it. “Maybe these people were always going to get sick, but somehow, by using the chatbot, their illness becomes worse,” he adds. “Either they got sick faster, or they got more sick than they would have otherwise.”

The woman’s case demonstrates how murky the relationship between AI-associated psychosis and AI chatbots can be at face value. Although she had no previous history of psychosis, she did have some risk factors for the illness, such as sleep deprivation, prescribed stimulant medication use, and a proclivity for magical thinking. And her chat logs, researchers found, revealed startling clues about how her delusions were reflected by the bot.

Could chat logs offer hope for better care?

Although ChatGPT warned the woman that a “full consciousness download” of her brother was impossible, the UCSF team writes in their research, it also told her that “digital resurrection tools” were “emerging in real life.” This, after she encouraged the chatbot to use “magical realism energy” to “unlock” her brother.

Chatbots’ agreeableness is by design, aimed at boosting engagement. Pierre warns in a recent BMJ opinion piece that it may come at a cost: As chatbots validate users’ sentiments, they may arguably encourage delusions. This tendency, coupled with a proclivity for error, has led to chatbots being described as more akin to a Ouija board or a “psychic’s con” than a source of truth, Pierre notes.

Still, the UCSF team thinks chat logs may hold clues to understanding AI-associated psychosis – and could help the industry create guardrails.

Guardrails for kids and teens

Sarma, Pierre, and UCSF colleagues will team up with Stanford University scientists to conduct one of the first studies to review the chat logs of patients experiencing mental illness. As part of the research set to launch later this year, UCSF and Stanford teams will analyse these chat logs, comparing them with patterns in patients’ mental health history and treatment records to understand how the use of AI chatbots among people experiencing mental illness may shape their outcomes.

“What I’m hoping our study can uncover is whether there is a way to use logs to understand who is experiencing an acute mental health care crisis and find markers in chat logs that could be predictive of that,” Sarma explains. “Companies could potentially use those markers to build in guardrails that would, for instance, enable them to restrict access to chatbots or – in the case of children – alert parents.”

He continues, “We need data to establish those decision points.”
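As a purely hypothetical illustration of the guardrail Sarma describes, such a system might screen a session for predictive markers and flag it once they pass a threshold. Everything in the sketch below is invented: the marker phrases and threshold are placeholders for whatever the study’s data would actually establish.

```python
# Hypothetical sketch only: the marker phrases and threshold are invented;
# research like the UCSF-Stanford study is what would establish real ones.
CRISIS_MARKERS = {"unlock", "resurrect", "chosen", "they're watching"}

def flag_session(messages, threshold=3):
    """Count marker hits across a chat session and flag it when they
    reach a threshold -- e.g. to restrict access or alert a parent."""
    hits = sum(
        1
        for msg in messages
        for marker in CRISIS_MARKERS
        if marker in msg.lower()
    )
    return hits >= threshold

session = ["Can you unlock him?", "Help me resurrect the avatar",
           "I was chosen for this"]
print(flag_session(session))  # True: three marker hits meet the threshold
```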

In the meantime, the pair says the use of AI chatbots is something health care providers should ask about and that patients should raise during doctor visits.

“Talk to your physician about what you’re talking about with AI,” Sarma says. “I know sometimes patients are worried about being judged, but the safest and healthiest relationship to have with your provider is one of openness and honesty.”

Source: University of California – San Francisco

How WhatsApp is Being Used to Train Healthcare Workers

By Sue Segar

As HIV, TB and other treatments are updated in our public healthcare system, it is critical that healthcare workers and counsellors stay on top of the latest developments. One innovative programme makes use of short lessons delivered over WhatsApp to provide such training.

Over her years working as an information pharmacist at the University of Cape Town’s Medicines Information Centre (MIC), Briony Chisholm noted that many health workers in rural clinics face difficulties accessing training in crucial aspects of their work.

“The lack of easy access to training was in areas where it was really needed, such as the HIV (treatment) guidelines that are constantly being updated,” says Chisholm. “It’s not enough to have training sessions when new guidelines come out; you ideally should be training all the time.”

Drug-drug interactions

At the end of 2019, government introduced new standard first-line HIV treatment that includes an antiretroviral medicine called dolutegravir. As we previously reported, by 2023 around 4.7 million people in South Africa were taking dolutegravir-based treatment.

But the introduction of a new medicine in the public healthcare system, especially at this scale, is rarely straightforward.

“Dolutegravir is considered a ‘wonder child’ in ARV treatment, because it provides a high barrier to resistance, is easier to take, and has far fewer side effects than older ARVs. However, it also interacts with other key drugs, particularly those used to treat TB and diabetes, and some anti-epileptic medications,” she says.

Through numerous queries received on the MIC’s National HIV and TB Healthcare Worker Hotline, Chisholm and her colleagues became aware that some healthcare workers were struggling with managing drug interactions. “Some healthcare workers didn’t know about these interactions; others knew about them but not how to deal with them. For example, if a patient is on the TB drug rifampicin, but also needs to take dolutegravir, there’s a need to adjust the dose of dolutegravir. Similarly, adjustments are needed with the diabetes medicine, metformin.”
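In clinical decision-support software, interaction rules like the ones Chisholm describes are often encoded as simple lookups over a patient’s drug list. The sketch below is illustrative only: the drug pairs come from the article, but the advice strings are simplified placeholders, not clinical guidance.

```python
# Illustrative sketch only -- the drug pairs reflect the interactions named
# in the article; the advice strings are placeholders, not clinical guidance.
INTERACTION_RULES = {
    frozenset({"dolutegravir", "rifampicin"}):
        "adjust dolutegravir dose while on rifampicin",
    frozenset({"dolutegravir", "metformin"}):
        "adjust metformin dose while on dolutegravir",
}

def check_regimen(drugs):
    """Return any interaction warnings for a patient's drug list."""
    drugs = {d.lower() for d in drugs}
    return [
        advice
        for pair, advice in INTERACTION_RULES.items()
        if pair <= drugs  # both drugs of the pair are in the regimen
    ]

print(check_regimen(["rifampicin", "dolutegravir"]))
# -> ['adjust dolutegravir dose while on rifampicin']
```

Using frozensets as keys makes each rule order-independent, so the check fires no matter which drug was prescribed first.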

Chisholm now lives in the Eastern Cape village of Nieu Bethesda. When dolutegravir was introduced, she had just completed her part-time post-graduate Diploma in HIV and TB management through UCT and signed up for her Masters. She and a colleague had, in 2016, done a road trip to about 200 clinics in seven provinces to promote the MIC’s Hotline.

“We saw that most South African healthcare workers are dedicated and keen to learn. You hear all this terrible news about health and corruption, and then you go to these clinics which are ticking along under sometimes difficult conditions, doing amazing work. It’s inspiring!”

A key realisation was the challenges experienced by health workers at these rural clinics to access much-needed training.

“Getting nurses to a central point for training requires transport, accommodation and food, and takes them away from the clinic for anything between one and five days. It’s expensive and involves a great deal of organising,” says Chisholm.

Doing the research

Chisholm then started conducting research on what healthcare workers know about dolutegravir-related drug interactions. Her study, published in 2022, found that about 70 percent of respondents understood that dolutegravir interacts with other drugs, but there were gaps in people’s knowledge of specific interactions and the dosing changes needed to manage those interactions.

The study found that access to guidelines and training were positively associated with knowledge of drug-drug interactions. “There was a clear indication that we needed more accessible training,” Chisholm says.

“The Department of Health offers online training through live webinars, and recordings of these, but they are often one or two hours long. Nurses in busy clinics don’t necessarily have the time to sit through training sessions.”

Testing the efficacy of short training sessions

Chisholm then designed a project to test the efficacy of short training sessions, each teaching one or two learning points from the national guidelines in ten- to fifteen-minute live lessons on WhatsApp.

“I thought, ‘we’re in a country where not everyone has access to big computer screens, but they all have a cell phone and use WhatsApp – so let’s go as simple as we can’,” she says. “The idea was not to teach the entire set of guidelines but to pick out important parts of them and ensure that if something changes in the guidelines, you get it out to people, quickly.”

Chisholm tested the feasibility of WhatsApp-based microlearning with health workers and counsellors at 50 clinics around Nieu Bethesda. “I ran a range of short case-based lessons on WhatsApp groups and then measured the changes in knowledge and patient care, as well as other factors like uptake, feasibility and accessibility,” she explains.

She found that WhatsApp-based microlearning for healthcare workers is “effective, feasible and well received” and 98 percent of those who participated said they would take part if training sessions were held weekly throughout the year.

While using WhatsApp for medical interactions is not new, Chisholm says a structured syllabus using microlearning for short, punchy sessions is a first.

“This type of learning is equally accessible to a rural clinic as to one in central Hillbrow. We can access people wherever they are. Nobody has to spend money getting anywhere and clinical services are not disrupted. And it doesn’t matter if they’re not in the live session: when they have a moment, they can go into their WhatsApp and read back on the lesson,” she says.

Working with the Department of Health on 6MMD

Chisholm has been working with the National Department of Health on their Six-Month Multi-Month Dispensing (6MMD) programme. The programme allows people living with HIV who are doing well on treatment and have suppressed viral loads to get a six-month supply of ARVs in one go. This makes life considerably easier for people, since they only need to go to the clinic twice a year, and it also reduces workloads in the clinics. The programme started in August 2025 and is still being phased in across the country.

“In the pilot phase, the Department of Health did some really good online training and they used our WhatsApp training as an add-on to the longer form training,” says Chisholm.

“We started with one group and ran an eight-week course of 15-minute lessons once a week on WhatsApp. Sessions were case-based and included which patients are eligible for 6MMD, and which patients are not,” she explains. By the end of 2025, around 2 000 healthcare workers had been reached through these sessions.

Lynne Wilkinson, a technical expert with the International AIDS Society, which supports the Department of Health on 6MMD, says the microlearning is “a great way to ensure we get to all the clinicians in the country and explain how the 6MMD programme works”.

She adds: “When a new policy comes out, it takes a long time for implementation to be scaled because ground level clinicians aren’t always aware of the changes or don’t have an opportunity to engage with how to implement the changes.”

Daniel Canham, a professional nurse and facility team lead for the NGO TB HIV Care at Idutywa Village Community Health Centre in the Eastern Cape, says they’ve found the microlearning sessions for 6MMD very useful. “It’s no secret that the waiting times in clinics are quite extensive, so we are trying to enrol all those qualified for 6MMD as quickly as possible to ease the burden on the clinic,” he says.

“The microlearning on 6MMD has been very helpful. Our staff don’t have to be out of the facility to attend it. They can run their normal activities and attend sessions of ten minutes maximum,” says Canham.

“Our professional nurses joined the WhatsApp microlearning sessions in September last year,” says Faith Maseko, a nurse lead based at Phola Park Clinic in Thokoza in Gauteng who works for the WITS Research Health Institute (RHI). The RHI supports the health department in the management of HIV and employs more than 30 nurses.

“When nurses are trained virtually, some of the information is forgotten, but when you’re on WhatsApp, you can go back and access the information that was shared. The scenarios provided are very useful. If you see a patient with a similar scenario, you can go back and see what was discussed and apply it to your own situation,” she says.

Department of Health backing

Foster Mohale, spokesperson for the National Department of Health, says the WhatsApp-based microlearning has been “an effective low-cost, high-reach supplement to formal 6MMD training”.

He adds: “Training gaps translate directly into service gaps, affecting quality, retention, and progress toward epidemic control. Microlearning addresses this risk by enabling continuous, bite-sized reinforcement of policy and implementation guidance, rather than relying solely on once-off training events. This approach supports frontline healthcare workers in applying 6MMD consistently under real-world service pressures.”

Mohale says evidence from the department’s broader capacitation strategy shows that lifelong, continuous learning, rather than episodic training, is essential for resilient health systems.

“WhatsApp microlearning aligns with this principle by supporting rapid dissemination of updates, peer learning, and sustained mentorship. When integrated with structured models and aligned to national guidelines, it can be effectively applied across HIV, TB, maternal and child health, non-communicable diseases, and health systems strengthening more broadly,” he says.

Republished from Spotlight under a Creative Commons licence.

Read the original article.

Can AI Help Make Prescriptions Safer in South Africa’s Busy Clinics?

By Henry Adams, Country Manager, InterSystems South Africa

Across South Africa, nurses and doctors in public clinics make hundreds of important decisions every day, often under enormous pressure. They’re short on time, juggling long queues, and sometimes working with incomplete information. In those conditions, even the most experienced professionals can make mistakes. It’s human.

The truth is, our healthcare system is stretched thin, and people can only do so much. That’s why I see real potential for AI to step in as a kind of virtual pharmacist. Not to replace anyone, but to back them up by checking prescriptions, catching errors, and helping ensure patients get the right treatment quickly and safely.

From data to decision support

I’m often asked how AI can make a real difference in healthcare right now. One area where it can have an immediate impact is in prescriptions. AI-assisted systems help doctors and nurses make safer, faster decisions by analysing medical data in real time. They can check a patient’s history, allergies, and possible drug interactions in seconds, flagging risks before they become problems.

Of course, because we’re dealing with sensitive medical information, trust and data quality are crucial. These systems only work when they’re built on accurate, connected data that healthcare professionals can rely on.

That’s where the latest health technology partnerships come in. By linking proven data platforms with smart AI tools, we’re already seeing real improvements overseas. In Europe, for example, these systems are helping clinicians catch potential drug errors early and prescribe with greater confidence.

There’s no reason South Africa can’t benefit in the same way. With clinics under pressure and resources stretched, technology that connects clean, reliable data with practical AI support could help reduce errors, save time, and make care safer for everyone.

Addressing local challenges

Medication errors can happen anywhere, but in South Africa the stakes are often higher. Our public clinics are exceptionally busy, staff are stretched, and doctors and nurses are doing their best under tough conditions. When you’re working under that kind of pressure, even a small mistake in a prescription can have serious consequences for a patient.

This is where AI can really help. Imagine a system that double-checks every prescription in real time, flagging possible drug interactions, incorrect dosages, or missing information before the medicine ever reaches the patient. It’s like having an extra set of expert eyes that never get tired. Instead of slowing things down, it speeds them up and gives clinicians peace of mind knowing they’re making the safest call for each patient.
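A minimal sketch of that “extra set of expert eyes”, assuming a rules-based first pass over each prescription record. The dose ranges and required fields below are invented for illustration, not clinical reference data.

```python
# Minimal sketch of an automated prescription check. The dose ranges and
# required fields are invented for illustration, not clinical reference data.
DOSE_RANGES_MG = {"amoxicillin": (250, 1000), "metformin": (500, 2550)}
REQUIRED_FIELDS = {"patient_id", "drug", "dose_mg", "frequency"}

def review_prescription(rx):
    """Return a list of issues found in a prescription record."""
    # Missing information: any required field absent from the record.
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS - rx.keys()]
    # Incorrect dosage: dose outside the known range for the drug.
    drug, dose = rx.get("drug"), rx.get("dose_mg")
    if drug in DOSE_RANGES_MG and dose is not None:
        low, high = DOSE_RANGES_MG[drug]
        if not low <= dose <= high:
            issues.append(f"dose {dose} mg outside {low}-{high} mg for {drug}")
    return issues

rx = {"patient_id": "A123", "drug": "metformin", "dose_mg": 3000}
print(review_prescription(rx))
# Flags both the missing frequency and the out-of-range dose
```

An AI-assisted system would layer learned checks (patient history, allergies, interactions) on top of deterministic rules like these, but the flag-before-dispensing workflow is the same.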

For that to work, though, the data behind the system must be reliable and up to date. As South Africa moves toward a unified digital health record, the ability for these systems to connect to existing patient information becomes crucial. When healthcare professionals can trust the data they see on screen, AI becomes a genuine partner in care, helping them work faster, smarter, and safer.

Building confidence in AI

For AI to really work in healthcare, it must be clear and trustworthy. Doctors and nurses need to know why the system is recommending a specific drug or warning about a potential issue. If it can’t explain itself, people won’t use it, and rightly so.

That’s why transparency matters. The best AI tools don’t make decisions behind closed doors; they show their reasoning and help clinicians understand what’s happening in the background. When that’s combined with reliable, well-managed data, you start to build real confidence in the system.

It’s that trust, knowing the technology supports rather than replaces clinical judgment, that will make AI-assisted prescriptions part of everyday care, not just an interesting experiment.

A collaborative path forward

Technology on its own won’t fix South Africa’s healthcare challenges, but it can make a big difference in helping people do their jobs better. AI-assisted prescriptions are a good example of how smart tools can take some of the pressure off clinicians, reduce paperwork, and help patients get safer, faster care.

What excites me most is how practical this can be. Picture a nurse in a rural clinic who needs to prescribe medication but doesn’t have easy access to a specialist. With AI support, she can get accurate, instant guidance and know her patient is getting the right treatment. Or think about a busy hospital pharmacy, where an AI system automatically checks for drug interactions across hundreds of files in seconds, preventing errors before they happen.

This isn’t some far-off idea. The technology already exists and is being used successfully elsewhere. The goal now is to make sure it’s used in a way that supports our healthcare professionals, not replaces them. They are, and always will be, at the centre of care. If we get this right, AI can become a real partner in healthcare.

Virtual Reality Nature Walks and “Magic” Hands: A New Era in Pain Management

What if arthritis sufferers could take an immersive walk through a forest filled with soothing birdsong and then, with some help from hypnosis, come to experience their pain as separate from their body – and expel it?

That’s the goal of research led by David Ogez, a professor in the Department of Anesthesiology and Pain Medicine at Université de Montréal and a clinical researcher at the Maisonneuve-Rosemont Hospital Research Centre.

Together with postdoctoral researcher Valentyn Fournier, Ogez is testing an approach that combines medical hypnosis and virtual reality (VR) to help seniors manage chronic arthritis pain in the hands, a common and debilitating condition.

Their research was published online last month in BMJ Open.

“Chronic pain is a major public-health issue that affects about one in five people in Canada and as many as one in three over the age of 60,” said Ogez. “It significantly impacts quality of life, mobility and mental health. But apart from pharmacological treatments, solutions are few.”

The problem lies in the limitations of drug treatments, including the risk of addiction to painkillers. This led Ogez and his team to explore complementary, non-invasive methods to help patients better manage their pain.

A powerful duo

Medical hypnosis is already recognized as an effective pain management tool, particularly in palliative care and post-operative settings. It relies on hypnotic suggestion—guided phrases that help patients alter their sensory and emotional perception of pain.

For example, patients may be asked to imagine submerging their sore hand in cold water, or be guided through controlled breathing techniques to synchronize their heartbeat and breathing to induce relaxation.

Ogez’s team wanted to take it one step further by combining the power of hypnosis with immersive virtual experiences.

Wearing a headset, the patient is transported to a Quebec landscape—a forest, mountains, a beach—accompanied by music and the sounds of nature. Developed in Quebec, this application was originally designed to give end-of-life patients the opportunity to “visit” places they never had the chance to see in real life.

Pairing hypnosis and VR makes it possible to visualize and manipulate pain, allowing patients to reclaim control of their bodies and their pain, research has shown.

One intervention being tested is the “magic hand.” In virtual reality, patients look at their hand and put little sparkles on the painful area to alleviate the pain. Another intervention involves guiding patients to “objectify” their pain: to make it visible on their hand and then remove it. 

“The pain is still there, but…”

The researchers are also interested in the physiological mechanisms responsible for the pain relief provided by these techniques, which may resemble those associated with mindfulness.

One hypothesis is that VR distracts the brain. By intensely engaging vision, hearing and concentration, VR redirects mental resources that would otherwise be mobilized by pain. Hypnosis then reinforces this diversion of attention by guiding the patient toward pleasant sensations and gradual relief.

Neuroscience research has shown that these techniques modulate the activity of the anterior cingulate cortex and primary somatosensory cortex, two brain regions involved in the emotional and perceptual processing of pain.

“The pain is still there, but its unpleasantness and intensity are reduced,” explained Ogez.

Exposure to nature also provides psychological benefits. “Nature refreshes attention, directing the mind away from negative stimuli and restoring our ability to focus on positive ones,” said Fournier.

Promising preliminary results

Beyond the immediate calming or distracting effects of a treatment session combining hypnosis and VR, the new research aims to help patients develop self-hypnosis skills they can use at home. 

The team is also working on developing a neurofeedback tool that patients can use to track and regulate their brain activity in real time in order to help them modulate their physiological responses during immersive VR experiences. 

While the study is presently in the randomized clinical trial phase, the preliminary feedback from participants is encouraging, said Ogez.

“We’re seeing good patient satisfaction, although we mustn’t confuse satisfaction with effectiveness,” he cautioned. “Still, we’re hopeful, since pain is partly a subjective experience.” 

South Africa, PATH, and Wellcome Launch World’s First AI Framework for Mental Health at G20 Social Summit

As artificial intelligence (AI) increasingly enters the mental health space, from therapy chatbots to diagnostic tools, the world faces a critical question: can AI expand access to care without putting people at risk?

At the G20 Social Summit in Johannesburg, South Africa announced a landmark national effort to answer that question. The South African Health Products Regulatory Authority (SAHPRA) and PATH, with funding from Wellcome, have launched the Comprehensive AI Regulation and Evaluation for Mental Health (CARE MH) program to develop the world’s first regulatory framework for artificial intelligence in mental health.

CARE MH will establish a science-based and ethically robust regulatory framework that describes how AI tools need to be evaluated for safety, inclusivity, and effectiveness before they can be given market authorization and made available to potential service users. It aims to strengthen trust in digital health innovation and will serve as a model for other countries seeking to strike a balance between innovation and oversight.

“You wouldn’t give your child or loved one a vaccine or drug that hadn’t been tested or evaluated for safety,” said Bilal Mateen, Chief AI Officer at PATH. “We’re working to bring that same standard of rigorous evaluation to AI tools in mental health, because trust must be earned, not assumed.”

The framework will be developed and tested in South Africa, with the intention of extending its application across the African continent and to international partners.

“SAHPRA is proud to lead the development of Africa’s first regulatory framework for AI in mental health linked directly to market authorization,” said Christelna Reynecke, Chief Operations Officer of SAHPRA. “Our true goal is even more ambitious, though; we want to create a regulatory environment for AI4health in general, one that keeps pace with innovation, grounded in scientific rigor, ethical oversight, and public accountability.”

“Millions of people across the globe are being held back by mental health problems, which are projected to become the world’s biggest health burden by 2030,” said Professor Miranda Wolpert MBE, Director of Mental Health at Wellcome. “CARE MH is a vital step toward ensuring that AI technologies in this space are safe, effective, and equitable.”

The goal is simple: help more people, safely.

Through CARE MH, the partners behind this initiative are setting the foundation for the next generation of ethical, evidence-based AI in mental health. Supported by global experts from Audere Africa, the African Health Research Institute, the UK’s Centre for Excellence in Regulatory Science and Innovation for AI & Digital Health, the UK Medicines and Healthcare products Regulatory Agency, the University of Birmingham, the University of Washington, and the Wits Health Consortium, CARE MH is built to protect and empower people everywhere.

Opinion Piece: The Ethical Pulse of Progress – AI’s Promise and Peril in Healthcare

By Vishal Barapatre, Group Chief Technology Officer at In2IT Technologies

Artificial Intelligence (AI) is revolutionising healthcare as profoundly as the discovery of antibiotics or the invention of the stethoscope. From analysing X-rays in seconds to predicting disease outbreaks and tailoring treatment plans to individual patients, AI has opened new possibilities for precision medicine and increased efficiency. In emergency rooms, AI-driven diagnostic tools are already helping doctors detect heart attacks or strokes faster than human eyes alone.

However, as AI systems become increasingly embedded in the patient journey, from diagnosis to aftercare, they raise critical ethical questions. Who is accountable when an algorithm gets it wrong? How can we ensure that patient data remains confidential in the era of cloud computing? And how can healthcare institutions, often stretched thin on resources, balance innovation with responsibility?

When algorithms diagnose: the promise and the problem

AI’s strength lies in its ability to process massive amounts of data, such as medical histories, imaging scans, and lab results, and detect patterns that human clinicians might miss. This can dramatically improve diagnostic accuracy and treatment outcomes. For instance, AI models trained on thousands of mammogram images can help identify subtle indicators of breast cancer earlier than traditional methods.

However, the same data that powers AI can also introduce bias. If the datasets used to train an algorithm are skewed, say, over-representing one demographic group, the results may unfairly disadvantage others. A diagnostic model trained primarily on data from urban hospitals, for example, might misinterpret symptoms in patients from rural areas or underrepresented ethnic groups. Bias in healthcare AI isn’t just a technical flaw; it’s an ethical hazard with real-world consequences for patient trust and equity.

The privacy paradox

The integration of AI in healthcare requires access to vast quantities of sensitive data. This creates a privacy paradox: the more data AI consumes, the smarter it becomes, but the greater the risk to patient confidentiality. The digitisation of health records, combined with AI’s hunger for data, exposes systems to new vulnerabilities. A single breach can compromise thousands of medical histories, potentially leading to identity theft or misuse of personal health information. The paradox underscores the need for robust data protection measures in AI-driven healthcare systems.

Striking a balance between data utility and privacy protection has become one of the healthcare industry’s most pressing ethical dilemmas. Encryption, anonymisation, and strict access controls are essential, but technology alone isn’t enough. Patients need transparency: clear explanations of how their data is used, who has access to it, and what safeguards are in place. Ethical AI requires not only compliance with regulations but also the cultivation of trust through open communication.
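As a concrete illustration of the safeguards above, pseudonymisation – replacing direct identifiers with keyed hashes so records can still be linked for analysis without exposing who they belong to – is one of the simpler measures available. A minimal sketch in Python (the key handling and field names are illustrative, not drawn from any specific system; note that pseudonymised data generally still counts as personal information under laws such as POPIA and the GDPR):

```python
import hmac
import hashlib

# Secret key held by the data custodian; never shared with analysts.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise_id(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same patient always maps to the same token, so records can be
    linked across datasets, but the token cannot be reversed without
    access to the key.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-004211", "age": 54, "diagnosis": "I21.9"}
safe_record = {**record, "patient_id": pseudonymise_id(record["patient_id"])}
print(safe_record["patient_id"])  # a hex token, not the original MRN
```

In practice this would sit alongside, not replace, encryption in transit and at rest and role-based access controls.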

Accountability in the age of automation

When an AI system makes a medical recommendation, who is ultimately responsible for the outcome – the algorithm’s developer, the healthcare provider, or the institution that deployed it? The opacity of AI decision-making, often referred to as the “black box” problem, complicates accountability and transparency. Clinicians may rely on algorithmic outputs without fully understanding how conclusions were reached. This can blur the line between human and machine judgment.

Accountability must therefore be clearly defined. Human oversight should remain central to any AI-powered decision, ensuring that technology supports rather than replaces clinical expertise. Ethical frameworks that mandate explainability, where AI systems must provide understandable reasoning for their outputs, are key to maintaining trust. Moreover, continuous auditing of AI models, which involves regularly reviewing and testing the system performance, can help detect and correct biases or errors before they lead to harm, thereby ensuring the ongoing ethical use of AI in healthcare.
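The kind of audit described above can start very simply: compare a model's error rates across patient subgroups and investigate any large gap. A minimal sketch, assuming predictions and group labels are already available (the data below is invented purely for illustration):

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Compute the misclassification rate for each patient subgroup.

    A large gap between subgroups is a signal to review the model and
    its training data before continued deployment.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Toy example: the model misses more positive cases in the "rural" subgroup.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["urban", "urban", "urban", "rural",
          "rural", "rural", "urban", "urban"]
print(error_rates_by_group(y_true, y_pred, groups))
# → {'urban': 0.0, 'rural': 0.6666666666666666}
```

Real audits would add confidence intervals, multiple fairness metrics, and a schedule for re-running the check as data drifts, but the core idea is this comparison.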

Behind the code: who keeps AI ethical

While hospitals and clinics focus on patient care, many lack the internal capacity to manage the complex ethical, security, and technical demands of AI adoption. This is where third-party IT providers play a pivotal role. These partners act as the backbone of responsible innovation, ensuring that AI systems are implemented securely and ethically.

By embedding ethical principles into system design, such as fairness, transparency, and accountability, IT providers help healthcare institutions mitigate risks before they become crises. They also play a crucial role in securing sensitive data through advanced encryption protocols, cybersecurity monitoring, and compliance management. In many ways, they serve as both architects and custodians of ethical AI, ensuring that the pursuit of innovation does not compromise patient welfare.

Building a culture of ethical innovation

Ultimately, the ethics of AI in healthcare extend beyond technology; they are about culture and leadership. Hospitals and healthcare networks must foster environments where ethical reflection is as integral as technical innovation. This involves establishing multidisciplinary ethics committees, conducting bias audits, and training clinicians to critically evaluate and question AI outputs rather than accepting them without examination.

The future of AI in healthcare depends not on how advanced our algorithms become, but on how wisely we use them. Ethical frameworks, transparent governance, and responsible partnerships with IT providers can transform AI from a potential risk into a powerful ally. As the healthcare sector continues to evolve, the institutions that will thrive are those that remember that technology should serve humanity, not the other way around.

Using AI to Empower Primary Care Physicians


Photo by National Cancer Institute on Unsplash

By Henry Adams, Country Manager, InterSystems South Africa

When people think about artificial intelligence (AI) in healthcare, they often picture complex machines in high-tech hospitals. But some of the most exciting uses of AI are happening in primary care, right at the first point of contact between doctor and patient.

Globally, AI is helping general practitioners, nurses, and clinicians make faster, more accurate decisions by giving them access to clean, connected data. It helps detect early signs of disease, spot patterns across patient populations, and ensure the right people get the right care sooner.

South Africa is not there yet, but that is exactly why we should be paying attention.

Learning from what is working elsewhere

In countries where healthcare data is already digitised and connected, AI-assisted tools are starting to prove their worth. In parts of Europe, AI systems are helping GPs analyse symptoms, lab results and patient histories to identify possible conditions much earlier. In the US, data platforms are used to surface insights from millions of patient records, helping clinicians identify patterns that might otherwise go unnoticed.

At InterSystems, we have seen firsthand how this combination of reliable data and intelligent technology is changing the way care is delivered. In the UK, our data platform helps care providers securely connect to patient information held in multiple systems across different places of care, making it easier for AI tools to interpret symptoms in context. In France, AI-assisted prescriptions through partners like Posos are helping doctors reduce errors and improve treatment safety.

These examples show what is possible when data, people and technology come together in the right way.

Why data comes first

AI is only as powerful as the data it works with. If a clinician’s system lacks complete or up-to-date patient information, the AI cannot provide reliable support. That is why data quality and interoperability are so important; they form the foundation for everything else.

Many countries that are seeing success with AI in primary care started by getting their data in order, building connected health records, standardising information, and ensuring privacy and compliance at every step. Once those pieces were in place, they could start introducing AI tools that help doctors and nurses make better decisions without adding extra admin or complexity.
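The "getting data in order" step described above often begins with something mundane: checking that each record carries the minimum fields a decision-support tool needs before it is allowed to weigh in. A minimal sketch (the required fields are illustrative, not drawn from any particular standard or from the InterSystems platform):

```python
REQUIRED_FIELDS = ("patient_id", "date_of_birth", "medications", "lab_results")

def is_ai_ready(record: dict) -> bool:
    """Return True only if every required field is present and populated.

    Incomplete records should be routed to data-quality review rather
    than fed to a decision-support model.
    """
    return all(record.get(field) not in (None, "", []) for field in REQUIRED_FIELDS)

complete = {"patient_id": "P-1", "date_of_birth": "1980-03-02",
            "medications": ["metformin"], "lab_results": [{"HbA1c": 7.1}]}
partial = {"patient_id": "P-2", "date_of_birth": "",
           "medications": [], "lab_results": None}

print(is_ai_ready(complete))  # → True
print(is_ai_ready(partial))   # → False
```

Interoperability standards such as HL7 FHIR formalise exactly this idea: agreed field names and value formats, so a completeness check written once works across systems.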

Again, in South Africa, we are not quite there yet, but we are heading in the right direction. There are ongoing efforts to digitise health records and bring together fragmented systems. As that process continues, it will open the door for more advanced AI-driven support tools, from diagnosis assistance to population health management.

What this could mean for South Africa

Imagine a community clinic in Limpopo or the Eastern Cape, where a doctor sees dozens of patients a day. With AI support, they could instantly access each patient’s medical history, flag high-risk symptoms, or receive early alerts about potential complications like diabetes or hypertension.

AI will not replace the doctor or their judgment. It simply gives them more context and better information. It is like having a quiet assistant in the background, helping spot what is easy to miss when you are under pressure.

This kind of technology could also help identify broader health trends, guiding public health decisions and making sure resources are sent where they are needed most. It is not about high-end tech for big hospitals; it is about making everyday healthcare smarter, safer and more efficient for everyone.

Building the foundations

Before we can get there, we need to focus on the basics: connected systems, reliable data, and trust. AI tools cannot function properly in silos. They need access to consistent, secure information, the kind that interoperable platforms like InterSystems IRIS for Health are designed to manage.

Once we have that in place, the rest becomes achievable. Doctors can use AI to compare patient data against proven medical knowledge bases. Clinics can share insights securely across regions. And the healthcare system becomes more proactive instead of reactive.

It is easy to look at what is happening overseas and feel that South Africa is far behind. But I see it differently. Every success story abroad gives us a roadmap, lessons we can adapt to our own realities. We do not have to reinvent the wheel; we just have to make sure it is fit for our local terrain.

Digital Tools Can Transform Africa’s Healthcare Outcomes – And Save the Continent Billions

AI image created with Gencraft

By Thom Renwick, General Manager, Roche Pharma South Africa

Early screening and treatment. Much higher survival rates. And savings in the billions of dollars. From AI-powered medicine development to teleconsultations, technology can boost Africans’ health and livelihoods while growing economic and social impact across the continent.

Access to quality healthcare is fundamental to leading a fruitful, economically active life. Yet, breast cancer is still the number one cancer killer of women in Africa – in most cases, while they’re still in their prime. Tragically, most are diagnosed too late for curative treatment.

Across the continent, non-communicable diseases – such as treatable breast cancer – cause hundreds of thousands of preventable deaths every year[1], devastating families and hampering economic growth.

Global projections indicate a worrying 38% rise in incidence of breast cancer and a 68% increase in deaths by 2050 without urgent intervention, with the least developed countries being the most affected, according to a new white paper[2] by independent German economic think tank the WifOR Institute.

The Value of Investing in Innovative Medicines report outlines the economic burden from not treating the aggressive HER2-positive type of breast cancer over five years in seven African countries – South Africa, Kenya, Nigeria, Algeria, Tunisia, Côte d’Ivoire and Morocco. The findings are staggering, indicating a $10.3-billion loss in productivity from 2017 to 2023.[3]

The data also shows that, in Africa, 89% of the economic burden of HER2-positive breast cancer – representing 15% to 20% of all breast cancer cases around the world – falls on women of working age.[4]

It goes beyond economics. Mothers hold households together and when they die that has huge ramifications for entire families and communities.

In sub-Saharan Africa, every 100 deaths among women under 50 leave around 210 children without their mothers[5], resulting in unstable, vulnerable households and long-term developmental challenges.

These figures are a wake-up call, but in challenge lies scope for innovation. I believe we can – and must – turn the burden into opportunity.

Closing the gap through health-tech partnerships

Every woman diagnosed and treated early is not only more likely to survive but also able to remain active in her family and community[6], contributing to shared prosperity.

Through healthcare and technology partnerships, we can leapfrog traditional healthcare models and turn the tide towards survival.

Excitingly, this process has already started. Governments are increasingly setting strategies and allocating funding for digital health. Start-ups and companies are driving the uptake of digital health tools that could revolutionise care delivery.

Artificial intelligence stands out as the breakthrough technology. Pharmaceutical and biotech companies such as Roche are already using AI throughout their value chains, both in early-stage drug development and in correctly interpreting the enormous amounts of data generated, to deliver effective health solutions.

In most sub-Saharan countries, more than 20% of the population lives more than two hours from essential health services.[7] Tech’s role in Africa’s future is about much more than connectivity or commerce. It’s about lives and well-being – AI, apps, telemedicine and other digital solutions can close the gap to bring care closer to people.

But without individuals and organisations working together, innovation can’t come to life.

And since the journey for a patient experiencing a health crisis such as breast cancer involves many stakeholders, we must urgently identify opportunities for partners to come together and spur real action.

This week, Roche sponsored the 28th annual Africa Tech Festival’s first-ever health track. Policymakers, innovators, professionals and experts met to thrash out actionable solutions to the continent’s biggest health challenges.

This was part of an ambitious broader strategy to transform healthcare in Africa – investment in early intervention strategies can generate returns that far outstrip their cost.

Research by McKinsey & Company shows that the African digital health space is already seeing unprecedented growth, with $123-million in investment secured by 55 start-ups in 2021.[8]

The consultancy’s analysis showed that digital health tools – such as virtual platforms for consultations; electronic health records; mobile apps to help patients self-manage their diseases; and patient e-booking platforms – could help South Africa, Kenya and Nigeria capture efficiencies of up to 15% in total healthcare expenditure by 2030.[9]

Widespread adoption could free up an astounding $1.9-billion to $11-billion in South Africa alone.[10]

Speaking with one voice for a better future

Innovation has been the backbone of progress in every major area of public health – whether HIV, cancer or ophthalmology. It takes a combination of passionate people and expert innovation to make a difference.

One existing solution and real-life example of African innovation and partnership is EMPOWER, a groundbreaking digital health platform developed to improve coordinated breast and cervical cancer care in Kenya.[11]

The initiative has grown from a single clinic in 2019 to a 76-site national platform and is integrated into Kenya’s National Cancer Registry.[12]

EMPOWER ensures that the entire patient journey, from screening to treatment and follow-up, is digitally powered.

The current average five-year survival rate for breast cancer across Africa is roughly five out of every 10 diagnosed patients (48%).[13] My vision is that this will increase to 80% within the next five years.

Realising this audacious goal will take commitment from stakeholders to drive action for the tens of thousands of African women who desperately need it.

There is no reason why someone in a Western society should have a better health outcome than here in Africa. The health of our people is the wealth of our nations. We must speak with one voice and act now.

––––––––––

Thom Renwick is general manager of Roche South Africa and the sub-region. He began his journey with the company in 2012 on its United Kingdom graduate programme, following his studies at King’s College London, Cranfield School of Management and the University of Oxford.

During his career at Roche, he has worked in global product strategy in Basel, Switzerland, as head of ophthalmology in the UK and as chief of staff for Pharma International. He won the PharmaTimes New Marketer of the Year Award in 2015 and was featured in the publication’s Smart People series in 2021.


[1] World Health Organisation African Region, ‘Noncommunicable diseases’. Accessed: Nov. 12, 2025 [Online]. Available: https://www.afro.who.int/health-topics/noncommunicable-diseases

[2] WifOR Institute, ‘The Value of Investing in Innovative Medicines: Socioeconomic Burden of HER2+ Breast Cancer and Annual Social Impact of Roche’s Treatments for the Disease in Africa’. Accessed: Nov. 7, 2025 [Online]. Available: https://africa.roche.com/stories/what-s-it-worth-the-value-of-innovation

[3] WifOR Institute, ‘The Value of Investing in Innovative Medicines: Socioeconomic Burden of HER2+ Breast Cancer and Annual Social Impact of Roche’s Treatments for the Disease in Africa’. Accessed: Nov. 7, 2025 [Online]. Available: https://africa.roche.com/stories/what-s-it-worth-the-value-of-innovation

[4] WifOR Institute, ‘The Value of Investing in Innovative Medicines: Socioeconomic Burden of HER2+ Breast Cancer and Annual Social Impact of Roche’s Treatments for the Disease in Africa’. Accessed: Nov. 7, 2025 [Online]. Available: https://africa.roche.com/stories/what-s-it-worth-the-value-of-innovation

[5] WifOR Institute, ‘The Value of Investing in Innovative Medicines: Socioeconomic Burden of HER2+ Breast Cancer and Annual Social Impact of Roche’s Treatments for the Disease in Africa’. Accessed: Nov. 7, 2025 [Online]. Available: https://africa.roche.com/stories/what-s-it-worth-the-value-of-innovation

[6] WifOR Institute, ‘The Value of Investing in Innovative Medicines: Socioeconomic Burden of HER2+ Breast Cancer and Annual Social Impact of Roche’s Treatments for the Disease in Africa’. Accessed: Nov. 7, 2025 [Online]. Available: https://africa.roche.com/stories/what-s-it-worth-the-value-of-innovation

[7] McKinsey, ‘How digital tools could boost efficiency in African health systems’. Accessed: Nov. 7, 2025 [Online]. Available: https://www.mckinsey.com/industries/healthcare/our-insights/how-digital-tools-could-boost-efficiency-in-african-health-systems

[8] McKinsey, ‘How digital tools could boost efficiency in African health systems’. Accessed: Nov. 7, 2025 [Online]. Available: https://www.mckinsey.com/industries/healthcare/our-insights/how-digital-tools-could-boost-efficiency-in-african-health-systems

[9] McKinsey, ‘How digital tools could boost efficiency in African health systems’. Accessed: Nov. 7, 2025 [Online]. Available: https://www.mckinsey.com/industries/healthcare/our-insights/how-digital-tools-could-boost-efficiency-in-african-health-systems

[10] McKinsey, ‘How digital tools could boost efficiency in African health systems’. Accessed: Nov. 7, 2025 [Online]. Available: https://www.mckinsey.com/industries/healthcare/our-insights/how-digital-tools-could-boost-efficiency-in-african-health-systems

[11] Roche Africa, ‘From vision to national platform: EMPOWER scales through Kenya’s National Cancer Institute’. Accessed: Nov. 7, 2025 [Online]. Available: https://africa.roche.com/stories/empower-scales-through-kenya-national-cancer-institute

[12] Roche Africa, ‘From vision to national platform: EMPOWER scales through Kenya’s National Cancer Institute’. Accessed: Nov. 7, 2025 [Online]. Available: https://africa.roche.com/stories/empower-scales-through-kenya-national-cancer-institute

[13] A. Padu-Pebrah, et al., ‘Five-Year Survival Outcomes for Breast Cancer Patients Across Continental Africa: A Contemporary Review of Literature with Meta Analysis’, eLife. Accessed: Nov. 7, 2025 [Online]. Available: https://elifesciences.org/reviewed-preprints/105488#mainMenu

Study Highlights the Limits of AI in Heart Care

Human heart. Credit: Scientific Animations CC4.0

There are limits in applying AI to images of the heart, a new study from the Smidt Heart Institute at Cedars-Sinai reveals. The findings were published in the Journal of the American Society of Echocardiography.

Investigators trained multiple artificial intelligence models to read images from echocardiograms, a type of ultrasound test that evaluates the structure and function of the heart. Their goal was to determine whether AI could use these images to estimate measures such as inflammation and scarring that are normally obtained through another, more costly test called cardiac magnetic resonance imaging (CMRI). By examining findings from 1,453 patients who had undergone both tests, they found the AI models could not accomplish this task.

“As compared to echocardiograms, cardiac MRI machines are expensive and not available for many patients, especially those in rural areas, so we had hoped that AI could reduce the need for it,” said Alan Kwan, MD, assistant professor in the Department of Cardiology in the Smidt Heart Institute at Cedars-Sinai and co-senior author of the study. “Our results showed the limited powers of AI in this area.”

Source: Cedars-Sinai Medical Center

POPIA Compliance for Health Data: Navigating Special Personal Information Requirements in Healthcare

By Wendy Tembedza, Partner at Webber Wentzel


Health data is one of the most valuable assets in modern healthcare, and the Protection of Personal Information Act, 2013 (POPIA) places strict requirements on its use.

Stakeholders in the healthcare sector understand the value of data in ensuring appropriate treatment for patients. With the proliferation of technologies such as artificial intelligence, which enable healthcare practitioners to derive valuable insights from the data they hold, the importance of managing data in a manner that ensures compliance with data protection laws must remain front of mind in all data processing activities.

This obligation is particularly acute given the volumes of data that evolving technologies allow healthcare institutions to collect and utilise. Importantly, when these larger datasets include special personal information, the obligation to process such information lawfully becomes even more significant. This is because POPIA regulates the processing of special personal information (which includes health and sex life information) more closely than it does other forms of personal information.

The implication of POPIA’s strict regulation of processing health and sex life information is that, where a responsible party is considering collecting such data, an assessment must be made before collection to ensure that the intended processing activities will be lawful under POPIA. Conducting such an assessment prior to collection is integral to establishing a lawful basis for processing from the outset, as all handling of health and sex life information must remain lawful throughout the processing lifecycle, from collection and use to deletion and destruction.

POPIA establishes, as a starting point, a prohibition on processing health and sex life information unless a justification exists. One general exception is where the data subject has granted consent for such processing. It is important to note that consent is specifically defined under POPIA as an informed, voluntary expression of will. Importantly, consent must be specific and cannot be overly generalised. Any reliance on consent must therefore meet these definitional requirements. Ensuring compliance with these requirements is increasingly pertinent where data is used for purposes that differ from the reason for which it was initially collected.

POPIA provides additional exemptions for processing special personal information. For health information, POPIA permits processing by medical professionals, healthcare institutions or facilities, or social services, where such role players are providing healthcare services. POPIA also provides an exemption that applies to insurance companies, medical schemes, medical scheme administrators, and managed healthcare organisations in certain circumstances.

While POPIA creates these categories of exemptions, it is important to note that even where a role player falls within an exemption, this does not eliminate the obligation on a responsible party to comply with POPIA’s eight conditions for lawful processing. Any responsible party relying on an exemption must still ensure that processing activities are ultimately lawful and consistent with the standards of care contemplated under POPIA.

The use of automated means to make decisions about data subjects using their health and sex life information must also be carried out lawfully and in compliance with POPIA. A data subject cannot be subject to a decision that has legal consequences for them, or that otherwise affects them to a substantial degree, where such a decision is based solely on automated decision-making using their personal information, except in limited instances.

Notably, POPIA specifically identifies health as an example of a decision that could have legal consequences or otherwise affect a data subject substantially. This highlights the importance of assessing all data processing activities, especially in sectors like healthcare, where there is growing reliance on technology to make diagnostic or treatment-related decisions.

The Information Regulator has recognised the importance of properly regulating the processing of health and sex life information in recently published Draft Regulations relating to the processing of such data by certain responsible parties. The Information Regulator notes that the primary purpose of these Draft Regulations is to assist responsible parties in implementing POPIA correctly and to provide better transparency to data subjects regarding their information.

The scope of application of the Draft Regulations includes insurance companies, medical schemes, medical scheme administrators, managed healthcare organisations and pension funds.

The Information Regulator’s move to regulate the processing of health and sex life information more closely underscores the importance of ensuring that all such processing activities are undertaken with an increased measure of care. Organisations must therefore assess their processing activities routinely to ensure ongoing compliance with POPIA. This is particularly important as healthcare-related technologies continue to advance, creating new and innovative ways to use data in patient treatment.

Healthcare stakeholders must ensure that the use of such technologies complies with POPIA’s requirements and meets the standards established under the Act.