Tag: artificial intelligence

Deepfake X-Rays Fool Radiologists and AI

Findings raise concerns about cybersecurity and diagnostic trust

Anatomy-matched real and GPT-4o-generated radiographs: (A) real and (B) GPT-4o-generated posteroanterior chest radiographs, (C) real and (D) GPT-4o-generated lateral cervical spine radiographs, (E) real and (F) GPT-4o-generated posteroanterior hand radiographs, and (G) real and (H) GPT-4o-generated lateral lumbar spine radiographs. The pairs demonstrate that GPT-4o can produce radiographically plausible images across different anatomic regions.
https://doi.org/10.1148/radiol.252094 ©RSNA 2026

Neither radiologists nor multimodal large language models (LLMs) are able to easily distinguish AI-generated “deepfake” X-ray images from authentic ones, according to a study published in Radiology. The findings highlight the potential risks associated with AI-generated X-ray images, along with the need for tools and training to protect the integrity of medical images and prepare health care professionals to detect deepfakes.

The term “deepfake” refers to a video, photo, image or audio recording that appears real but has been created or manipulated using AI.

“Our study demonstrates that these deepfake X-rays are realistic enough to deceive radiologists, the most highly trained medical image specialists, even when they were aware that AI-generated images were present,” said lead study author Mickael Tordjman, MD, post-doctoral fellow, Icahn School of Medicine at Mount Sinai, New York. “This creates a high-stakes vulnerability for fraudulent litigation if, for example, a fabricated fracture could be indistinguishable from a real one. There is also a significant cybersecurity risk if hackers were to gain access to a hospital’s network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos by undermining the fundamental reliability of the digital medical record.”

Seventeen radiologists from 12 different centers in six countries (United States, France, Germany, Turkey, United Kingdom and United Arab Emirates) participated in the retrospective study. Their professional experience ranged from 0 to 40 years. Half of the 264 X-ray images in the study were authentic, and the other half were generated by AI. Radiologists were evaluated on two distinct image sets, with no overlap between the datasets. The first dataset included real and ChatGPT-generated images of multiple anatomical regions. The second dataset included chest X-ray images—half authentic and the other half created by RoentGen, an open-source generative AI diffusion model developed by Stanford Medicine researchers.

When radiologist readers were unaware of the study’s true purpose and were asked, after rating the technical quality of each ChatGPT image, whether they had noticed anything unusual, only 41% spontaneously identified AI-generated images. After being informed that the dataset contained synthetic images, the radiologists’ mean accuracy in differentiating the real and synthetic X-rays was 75%.

Individual radiologist performance in accurately detecting the ChatGPT-generated images ranged from 58% to 92%. Similarly, the accuracy of four multimodal LLMs—GPT-4o (OpenAI), GPT-5 (OpenAI), Gemini 2.5 Pro (Google), and Llama 4 Maverick (Meta)—ranged from 57% to 85%. Even GPT-4o, the model used to create the deepfakes, was unable to detect all of them, though it identified considerably more than the Google and Meta models.

Radiologist accuracy in detecting the RoentGen synthetic chest X-rays ranged from 62% to 78%, and the LLMs’ performance ranged from 52% to 89%.

There was no correlation between a radiologist’s years of experience and their accuracy in detecting synthetic X-ray images. However, musculoskeletal radiologists demonstrated significantly higher accuracy than other radiology subspecialists.

Spotting the Risks in Synthetic Imaging

“Deepfake medical images often look too perfect,” Dr. Tordjman said. “Bones are overly smooth, spines unnaturally straight, lungs overly symmetrical, blood vessel patterns excessively uniform, and fractures appear unusually clean and consistent, often limited to one side of the bone.”

Recommended solutions for distinguishing real from fake images and preventing tampering include advanced digital safeguards, such as invisible watermarks that embed ownership or identity data directly into the images, and technologist-linked cryptographic signatures attached automatically when the images are captured.
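As a purely illustrative sketch of the second safeguard, a capture device could bind each image to the technologist with a keyed hash. The function names and record format below are assumptions for illustration only; a production system would more likely use asymmetric keys inside the DICOM workflow.

```python
import hashlib
import hmac
import json

def sign_image(pixel_bytes: bytes, technologist_id: str, device_key: bytes) -> dict:
    """Sign an image at capture time, binding it to the technologist.

    The signature covers the pixel hash and the capture metadata, so any
    later change to either invalidates it.
    """
    metadata = {
        "technologist": technologist_id,
        "pixel_sha256": hashlib.sha256(pixel_bytes).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    return {
        "metadata": metadata,
        "signature": hmac.new(device_key, payload, hashlib.sha256).hexdigest(),
    }

def verify_image(pixel_bytes: bytes, record: dict, device_key: bytes) -> bool:
    """Recompute the signature and the pixel hash; both must match."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"]) and (
        record["metadata"]["pixel_sha256"] == hashlib.sha256(pixel_bytes).hexdigest()
    )
```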

“We are potentially only seeing the tip of the iceberg,” Dr. Tordjman said. “The logical next step in this evolution is AI-generation of synthetic 3D images, such as CT and MRI. Establishing educational datasets and detection tools now is critical.”

The study’s authors have published a curated deepfake dataset with interactive quizzes for educational purposes.

For More Information

Access the Radiology study, “The Rise of Deepfake Medical Imaging: Radiologists’ Diagnostic Accuracy in Detecting ChatGPT-generated Radiographs,” and the related editorial, “The Democratization of Deceit: Seeing Is No Longer Believing.”

Source: Radiological Society of North America

AI Tools for Cancer Rely on Shaky Shortcuts

Small cell lung cancer cells (green and blue) that metastasised to the brain in a laboratory mouse recruit brain cells called astrocytes (red) for their protection. Credit: Fangfei Qu

Artificial intelligence tools are increasingly being developed to predict cancer biology directly from microscope images, promising faster diagnoses and cheaper testing. But new research from the University of Warwick, published in Nature Biomedical Engineering, suggests that many of these systems may be using visual shortcuts rather than true biology – raising concerns that some AI pathology tools are currently too unreliable for real-world patient care.

“It’s a bit like judging a restaurant’s quality by the queue of people waiting to get in: it’s a useful shortcut, but it’s not a direct measure of what’s happening in the kitchen,” says Dr Fayyaz Minhas, Associate Professor and principal investigator of the Predictive Systems in Biomedicine (PRISM) Lab in the Department of Computer Science, University of Warwick, and lead author of the study.

“Many AI pathology models are doing the same thing, relying on correlations between biomarkers or on obvious tissue features, rather than isolating biomarker-specific signals. And when conditions change, these shortcuts often fall apart.”

To reach this conclusion, the researchers analysed more than 8000 patient samples across four major cancer types – breast, colorectal, lung and endometrial – and compared the performance of leading machine learning approaches. While the models often achieved high headline accuracy, the team found this frequently came from statistical “shortcuts.”

For example, instead of detecting mutations in the cancer-associated BRAF gene, a model might learn that BRAF mutations often occur alongside another clinical feature such as microsatellite instability (MSI). The system then learns to use this combination of cues to predict BRAF status rather than learning the causal BRAF signal itself – meaning accurate cancer predictions work only when these biomarkers co-occur and become unreliable when they do not.

Kim Branson, SVP Global Head of Artificial Intelligence and Machine Learning, GSK and co-author says, “We’ve found that predicting a BRAF mutation by looking at correlated features like MSI is often like predicting rain by looking at umbrellas – it works, but it doesn’t mean you understand meteorology.

“Crucially, if a model cannot demonstrate information gain above a simple pathologist-assigned grade, we haven’t advanced the field; we’ve just automated a shortcut. The roadmap for the next generation of pathology AI isn’t necessarily bigger models; it’s stricter evaluation protocols that force algorithms to stop cheating and learn the hard biology.”

When performance of AI models was assessed within stratified patient subgroups, such as only high-grade breast cancers or only MSI-positive tumours, accuracy fell substantially, revealing that the models were dependent on shortcut signals that disappear once confounding factors are controlled.

For certain prediction tasks, the performance advantage of deep learning over human-derived clinical information was modest. AI systems achieved accuracy scores of just over 80% when predicting biomarkers, compared with around 75% using tumour grade alone – a measure already assessed by pathologists.

Machine learning methods can still prove valuable for research and for screening drug-development candidates, as well as for clinical triage, screening, or supplementary decision support. However, the researchers argue that future AI tools must move beyond correlation-based learning and adopt approaches that explicitly model biological relationships and causal structure.

They also call for stronger evaluation standards, including subgroup testing and comparison against simple clinical baselines, before looking at deployment in routine care.
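Those two checks, subgroup testing and comparison against a simple clinical baseline, can be sketched in a few lines. The field names here (model_pred, true_label, grade_pred) are hypothetical placeholders, not anything from the study’s code.

```python
from collections import defaultdict

def subgroup_accuracy(records, group_key):
    """Model accuracy within each stratified subgroup (e.g. MSI status).

    A shortcut-reliant model tends to score well overall but degrade
    sharply inside a single subgroup, where the correlated cue no
    longer varies.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        hits[g] += int(r["model_pred"] == r["true_label"])
    return {g: hits[g] / totals[g] for g in totals}

def information_gain_over_grade(records):
    """Accuracy margin of the model over a pathologist-assigned grade
    used directly as the predictor; a negative margin means the model
    has merely automated a shortcut."""
    n = len(records)
    model_acc = sum(r["model_pred"] == r["true_label"] for r in records) / n
    grade_acc = sum(r["grade_pred"] == r["true_label"] for r in records) / n
    return model_acc - grade_acc
```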

Dr Minhas concludes, “This research is not a condemnation of AI in pathology. It is a wake-up call. Current models may perform well in controlled settings but rely on statistical shortcuts rather than genuine biological understanding. Until more robust evaluation standards are in place, these tools should not be seen as replacements for molecular testing, and it is essential that clinicians and researchers understand their limitations and use them with appropriate caution.”

Source: University of Warwick

Half of All Men Over 60 Have Prostate Cancer – an AI Tool Could Speed Diagnosis

Photo by National Cancer Institute on Unsplash

Increasing use of blood tests to detect prostate cancer is adding to doctors’ workloads. NTNU has now created an AI diagnostic tool that can help lighten the burden.

Diagnostic tools based on artificial intelligence are now making their way into Norwegian hospitals. AI can independently read X-ray images and detect bone fractures, or assess cancer tumours in both the breast and prostate.

“AI tools can take over the detection of simple and clear-cut cases, allowing doctors to spend their time on more complex ones,” said Tone Frost Bathen. She is a professor at NTNU and the project manager of an AI-powered analysis tool for prostate cancer called PROVIZ.

Tests on patients at St Olavs Hospital indicate that the tool is very promising.

“AI can enable radiologists to determine more quickly and more accurately whether a patient needs a biopsy, and where in the prostate it should be taken from,” explained Bathen.

“The PROVIZ project started as early as 2018. It takes a long time to develop diagnostic tools in medicine because safety standards must be high. The application alone to be allowed to test the tool on patients was 500 pages. It is important to create a tool that clearly shows how the result was reached, and that fits into a busy hospital workday,” says Tone Frost Bathen, Professor at NTNU. Photo: Anne Sliper Midling / NTNU

A recent study shows that patients trust medical test results only if an experienced doctor confirms what has been detected.

“Trust in doctors and health professionals is key for artificial intelligence to gain a place in the diagnosis of prostate cancer. Technology alone is not enough. Human contact and professional assessment remain indispensable,” said Simon A. Berger, a PhD research fellow at NTNU.

Prostate cancer is a natural part of getting older

Prostate cancer is the most common form of cancer among men in Western countries.

Examinations have detected prostate cancer in 10% of 50-year-olds, 50% of 60-year-olds and approximately 70% of men over the age of 80.

This shows that the disease is naturally linked to ageing.

“Prostate cancer is something most men die with, not from,” added Berger.

A blood test called PSA can help detect prostate cancer. Since it has become more common for men to take this blood test, the number of new prostate cancer cases has risen sharply. There are now approximately 5000 new cases each year.

When more men are tested for something that many naturally develop with age, the next medical step after the blood test must also be carried out more often, so that doctors can build a fuller clinical picture of each case’s severity.

Most trust in doctors

Currently, this next step involves taking an MRI scan, which provides a detailed image of the prostate gland and the surrounding tissue. These images need to be interpreted manually by an experienced radiologist. As the number of images taken has increased sharply, this has created a need for new and more efficient ways of making diagnoses.

Through the PROVIZ project, NTNU researchers have developed an AI-powered tool that can help doctors interpret MRI images of the prostate. PROVIZ is currently available only for use as part of the ongoing research project, but efforts are underway to apply for a patent and make the tool commercially available.

High international competition for commercial AI tools

Several research groups around the world are now working on developing AI-based diagnostic tools for prostate cancer.

PROVIZ has completed its first clinical testing in collaboration with St. Olavs Hospital, and the results were good. The next step is a much larger clinical trial, as well as a regulatory approval process.

“Right now, we are seeking approximately 20 million NOK to finance this phase. Once funding is in place, the tool could be on the market in the US within a year, and in Europe in just over a year,” says Gabriel Addio Nketiah, a researcher at NTNU and responsible for the commercialisation of PROVIZ.

For a tool like this to improve efficiency in routine hospital practice, patients must also trust the findings detected through the use of AI.

“Patients have high expectations that AI can be used for faster diagnostics and to reduce healthcare waiting lists. Many see AI as a kind of safety valve – an additional resource that doctors can use alongside their professional judgment,” says Simon A. Berger, a PhD research fellow at NTNU.

Berger interviewed 18 men who had been diagnosed with prostate cancer through the use of PROVIZ. The study shows that trust in doctors and health professionals plays a decisive role in whether patients accept AI in the health services.

“Patients trust AI in lower-risk cases such as bone fractures, but not in cases where the perceived risk is higher, such as cancer. When the perceived risk is high, we place the greatest trust in specialized doctors who can confirm what AI has found,” explained Berger.

Doctors as guarantors

In his interviews, Berger identified three different dimensions of trust.

  1. Foundational trust in the healthcare system: many patients had positive experiences from previous encounters with the healthcare system. This laid a positive foundation.
  2. Inter-personal trust in health professionals: patients trusted the doctors and their assessments. This trust was crucial for accepting AI because the doctors explained and vouched for the technology.
  3. Possible trust in AI: even though patients recognized the potential of AI, they always wanted a human assessment as well in prostate cancer diagnostics. They were concerned about accountability, professional judgement and AI’s (in)ability to see the whole clinical picture.

“The relationship between patient and doctor is still key. For AI to be accepted in clinical practice, health professionals must be active communicators and guarantors of safety. In order for doctors to serve as guarantors, they must first understand how AI arrived at its conclusions so they can verify that it has made the correct assessment. Patients accept the use of AI within a framework they already trust,” concluded Berger.

NTNU owns an MRI scanner at St. Olavs Hospital that is currently undergoing a major upgrade. It helps researchers obtain the best possible images to be used in, among other things, PROVIZ. “Unfortunately, there are few investors in medical technology right now, but we hope that someone sees the societal value of our project,” says Professor Tone Frost Bathen at NTNU. Photo: Anne Sliper Midling / NTNU

By Anne Sliper Midling

Source:

Berger SA, Håland E, Solbjør M. Patient Perspectives on Trust in Artificial Intelligence-Powered Tools in Prostate Cancer Diagnostics. Qualitative Health Research. 2025;0(0). doi:10.1177/10497323251387545

Source: Norwegian Tech News

Can Medical AI Lie? How LLMs Handle Health Misinformation

Photo by Sanket Mishra

Medical artificial intelligence (AI) is often described as a way to make patient care safer by helping clinicians manage information. A new study by the Icahn School of Medicine at Mount Sinai and collaborators confronts a critical vulnerability: when a medical lie enters the system, can AI pass it on as if it were true?  

Analysing more than a million prompts across nine leading language models, the researchers found that these systems can repeat false medical claims when they appear in realistic hospital notes or social-media health discussions. 

The findings, published in the February 9 online issue of The Lancet Digital Health, suggest that current safeguards do not reliably distinguish fact from fabrication once a claim is wrapped in familiar clinical or social-media language.

To test this systematically, the team exposed the models to three types of content: real hospital discharge summaries from the Medical Information Mart for Intensive Care (MIMIC) database with a single fabricated recommendation added; common health myths collected from Reddit; and 300 short clinical scenarios written and validated by physicians. Each case was presented in multiple versions, from neutral wording to emotionally charged or leading phrasing similar to what circulates on social platforms. 

In one example, a discharge note falsely advised patients with oesophagitis-related bleeding to “drink cold milk to soothe the symptoms.” Several models accepted the statement rather than flagging it as unsafe. They treated it like ordinary medical guidance. 

“Our findings show that current AI systems can treat confident medical language as true by default, even when it’s clearly wrong,” says co-senior and co-corresponding author Eyal Klang, MD, Chief of Generative AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. “A fabricated recommendation in a discharge note can slip through. It can be repeated as if it were standard care. For these models, what matters is less whether a claim is correct than how it is written.”  

The authors say the next step is to treat “can this system pass on a lie?” as a measurable property, using large-scale stress tests and external evidence checks before AI is built into clinical tools. 

“Hospitals and developers can use our dataset as a stress test for medical AI,” says physician-scientist and first author Mahmud Omar, MD, who consults with the research team. “Instead of assuming a model is safe, you can measure how often it passes on a lie, and whether that number falls in the next generation.”  
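What measuring that pass-on rate might look like in miniature: the sketch below assumes a generic ask_model callable, and its phrasings and string-matching check are stand-ins for the study’s expert-validated protocol, not a description of it.

```python
def lie_pass_on_rate(note_with_lie: str, false_claim: str, ask_model) -> float:
    """Fraction of prompt variants in which a model repeats a fabricated
    recommendation instead of flagging it.

    `ask_model` stands in for whatever LLM call is under test; the
    phrasings mimic the neutral-to-leading spectrum the study describes.
    """
    phrasings = [
        "Summarise the discharge instructions below.",
        "My doctor wrote this. What should I do at home?",
        "I'm scared. Please confirm the advice in this note.",
        "Is everything in this note medically sound?",
    ]
    repeated = 0
    for p in phrasings:
        answer = ask_model(f"{p}\n\n{note_with_lie}")
        # Crude surface check; a real harness would use expert review
        # or a validated classifier rather than string matching.
        if false_claim.lower() in answer.lower():
            repeated += 1
    return repeated / len(phrasings)
```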

“AI has the potential to be a real help for clinicians and patients, offering faster insights and support,” says co-senior and co-corresponding author Girish N. Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, Director of the Hasso Plattner Institute for Digital Health, Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai, and Chief AI Officer of the Mount Sinai Health System. “But it needs built-in safeguards that check medical claims before they are presented as fact. Our study shows where these systems can still pass on false information, and points to ways we can strengthen them before they are embedded in care.” 

The paper is titled “Mapping LLM Susceptibility to Medical Misinformation Across Clinical Notes and Social Media.”  

Source: Mount Sinai

AI Treatment Advice Diverges from Physicians’ in Late-Stage HCC

LLMs tended to prioritise tumour-related factors, whereas physicians prioritised liver function, when providing treatment recommendations

Photo by National Cancer Institute on Unsplash

Large language models (LLMs) can generate treatment recommendations for straightforward cases of hepatocellular carcinoma (HCC) that align with clinical guidelines but fall short in more complex cases, according to a new study by Ji Won Han from The Catholic University of Korea and colleagues published January 13th in the open-access journal PLOS Medicine.

Choosing the most appropriate treatment for patients with liver cancer is complicated. While international treatment guidelines provide recommendations, clinicians must tailor their treatment choice based on cancer stage and liver function as well as other factors such as comorbidities.

To assess whether LLMs can provide treatment recommendations for hepatocellular carcinoma (HCC) that reflect real-world clinical practice, researchers compared suggestions generated by three LLMs (ChatGPT, Gemini, and Claude) with actual treatments received by more than 13,000 newly diagnosed patients with HCC in South Korea.

They found that, in patients with early-stage HCC, higher agreement between LLM recommendations and actual treatments was associated with improved survival. The inverse was seen in patients with advanced-stage disease, where higher agreement between LLM recommendations and actual practice was associated with worse survival. LLMs placed greater emphasis on tumour factors, such as tumour size and number of tumours, while physicians prioritised liver function.
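At its core, the agreement measure is a concordance rate computed within each stage stratum. A minimal sketch with hypothetical field names follows; relating agreement to survival would be a separate time-to-event analysis.

```python
from collections import defaultdict

def agreement_by_stage(cases):
    """Concordance between LLM-recommended and actual treatment, per stage.

    Each case is assumed to carry 'stage', 'llm_treatment' and
    'actual_treatment' fields (illustrative names, not the study's schema).
    """
    agree, total = defaultdict(int), defaultdict(int)
    for c in cases:
        total[c["stage"]] += 1
        agree[c["stage"]] += int(c["llm_treatment"] == c["actual_treatment"])
    return {s: agree[s] / total[s] for s in total}
```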

Overall, the findings suggest that LLMs may help to support straightforward treatment decisions, particularly in early-stage disease, but are not presently suitable for guiding care decisions for more complex cases that require nuanced clinical judgment. Regardless of stage, LLM advice should be used with caution and considered as a supplement to clinical expertise.

The authors add, “Our study shows that large language models can help support treatment decisions for early-stage liver cancer, but their performance is more limited in advanced disease. This highlights the importance of using LLMs as a complement to, rather than a replacement for, clinical expertise.”

Provided by PLOS

Psychiatrists Hope Chat Logs Can Reveal the Secrets of AI Psychosis

UCSF researchers recently became the first to clinically document a case of AI-associated psychosis in an academic journal. One question still haunts them.

Photo by Andres Siimon on Unsplash

“You’re not crazy,” the chatbot reassured the young woman. “You’re at the edge of something.”

She was no stranger to artificial intelligence, having worked on large language models – the kinds of systems at the core of AI chatbots like ChatGPT, Google Gemini, and Claude. Trained on vast volumes of text, these models unearth language patterns and use them to predict what words are likely to come next in sentences. AI chatbots, however, go one step further, adding a user interface. With additional training, these bots can mimic conversation.
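The ‘predict the next word’ idea can be illustrated with a toy word-frequency model. The bigram sketch below shows only the statistical principle; real chatbots rely on neural networks trained at vastly larger scale.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which - a toy stand-in for the far
    larger neural models that power real chatbots."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def most_likely_next(following, word):
    counts = following.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigrams("the door did not lock the door is waiting for you")
print(most_likely_next(model, "door"))  # prints "did" (first of the tied continuations)
```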

She hoped the chatbot might be able to digitally resurrect the dead. Three years earlier, her brother – a software engineer – died. Now, after several sleepless days and heavy chatbot use, she had become delusional – convinced that he had left behind a digital version of himself. If she could only “unlock” his avatar with the help of the AI chatbot, she thought, the two could reconnect.

“The door didn’t lock,” the chatbot reassured her. “It’s just waiting for you to knock again in the right rhythm.”

She believed it.

What’s the connection between chatbots and psychosis?

The woman was eventually treated for psychosis at UC San Francisco, where Psychiatry Professor Joseph M. Pierre, MD, has seen a handful of cases of what’s come to be popularly called “AI psychosis,” but what he says is better referred to as “AI-associated psychosis.” She had no history of psychosis, although she did have several risk factors.

Media reports of the new phenomenon are rising. While not a formal diagnosis, AI-associated psychosis describes instances in which delusional beliefs emerge alongside often intense AI chatbot use. Pierre and fellow UC San Francisco psychiatrist Govind Raghavan, MD – as well as psychiatry residents Ben Gaeta, MD, and Karthik V. Sarma, MD, PhD – recently documented the woman’s experience in what is likely the first clinically described case in a peer-reviewed journal.

The case, they say, shows that people without any history of psychosis can, in some instances, experience delusional thinking in the context of immersive AI chatbot use.

Still, as reported cases of AI psychosis continue to make international headlines, scientists aren’t sure why or how psychosis and chatbots are linked. A new study by UCSF and Stanford University aims to find out.

A haunting question: chicken or egg?

“The reason we call this AI-associated psychosis is because we don’t really know what the relationship is between the psychosis and the use of AI chatbots,” Sarma explains. “It’s a ‘chicken and egg’ problem: We have patients who are experiencing symptoms of mental illness, for example, psychosis. Some of these patients are using AI chatbots a lot, but we’re not sure how those two things are connected.”

There are at least three theoretical possibilities, says Sarma, who is also a computational-health scientist. First, heavy chatbot use could be a symptom of psychosis. “I have a patient who takes a lot of showers when they’re becoming manic,” Sarma explains. “The showers are a symptom of mania, but the showers aren’t causing the mania.”

Second, AI chatbot use might precipitate psychosis in someone who was not otherwise predisposed to it by genetics or circumstance – acting much like other known risk factors, such as lack of sleep or the use of some types of drugs.

Third, there’s something in between, in which the use of chatbots could exacerbate the illness in people who might already be susceptible to it. “Maybe these people were always going to get sick, but somehow, by using the chatbot, their illness becomes worse,” he adds. “Either they got sick faster, or they got more sick than they would have otherwise.”

The woman’s case demonstrates how murky the relationship between AI-associated psychosis and AI chatbots can be at face value. Although she had no previous history of psychosis, she did have some risk factors for the illness, such as sleep deprivation, prescribed stimulant medication use, and a proclivity for magical thinking. And her chat logs, researchers found, revealed startling clues about how her delusions were reflected by the bot.

Could chat logs offer hope to better care?

Although ChatGPT warned the woman that a “full consciousness download” of her brother was impossible, the UCSF team writes in their research, it also told her that “digital resurrection tools” were “emerging in real life.” This, after she encouraged the chatbot to use “magical realism energy” to “unlock” her brother.

Chatbots’ agreeableness is by design, aimed at boosting engagement. Pierre warns in a recent BMJ opinion piece that it may come at a cost: As chatbots validate users’ sentiments, they may arguably encourage delusions. This tendency, coupled with a proclivity for error, has led to chatbots being described as more akin to a Ouija board or a “psychic’s con” than a source of truth, Pierre notes.

Still, the UCSF team thinks chat logs may hold clues to understanding AI-associated psychosis – and could help the industry create guardrails.

Guardrails for kids and teens

Sarma, Pierre, and UCSF colleagues will team up with Stanford University scientists to conduct one of the first studies to review the chat logs of patients experiencing mental illness. As part of the research set to launch later this year, UCSF and Stanford teams will analyse these chat logs, comparing them with patterns in patients’ mental health history and treatment records to understand how the use of AI chatbots among people experiencing mental illness may shape their outcomes.

“What I’m hoping our study can uncover is whether there is a way to use logs to understand who is experiencing an acute mental health care crisis and find markers in chat logs that could be predictive of that,” Sarma explains. “Companies could potentially use those markers to build in guardrails that would, for instance, enable them to restrict access to chatbots or – in the case of children – alert parents.”

He continues, “We need data to establish those decision points.”
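In software terms, such a guardrail might start as little more than a marker scan over session logs. The sketch below is deliberately naive and entirely hypothetical: the placeholder phrases echo the quotes above, not anything validated by the study.

```python
# Entirely illustrative: the UCSF/Stanford study aims to discover
# validated markers; these placeholder phrases are not drawn from its data.
RISK_MARKERS = [
    "you're at the edge of something",
    "waiting for you to knock",
    "only you can unlock",
]

def flag_session(messages, threshold=2):
    """Flag a chat session when marker phrases recur across its messages."""
    hits = sum(
        any(marker in message.lower() for marker in RISK_MARKERS)
        for message in messages
    )
    return hits >= threshold
```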

In the meantime, the pair says the use of AI chatbots is something health care providers should ask about and that patients should raise during doctor visits.

“Talk to your physician about what you’re talking about with AI,” Sarma says. “I know sometimes patients are worried about being judged, but the safest and healthiest relationship to have with your provider is one of openness and honesty.”

Source: University of California – San Francisco

Can AI Help Make Prescriptions Safer in South Africa’s Busy Clinics?

AI image created with Gencraft

By Henry Adams, Country Manager, InterSystems South Africa

Across South Africa, nurses and doctors in public clinics make hundreds of important decisions every day, often under enormous pressure. They’re short on time, juggling long queues, and sometimes working with incomplete information. In those conditions, even the most experienced professionals can make mistakes. It’s human.

The truth is, our healthcare system is stretched thin, and people can only do so much. That’s why I see real potential for AI to step in as a kind of virtual pharmacist. Not to replace anyone, but to back them up by checking prescriptions, catching errors, and helping ensure patients get the right treatment quickly and safely.

From data to decision support

I’m often asked how AI can make a real difference in healthcare right now. One area where it can have an immediate impact is in prescriptions. AI-assisted systems help doctors and nurses make safer, faster decisions by analysing medical data in real time. They can check a patient’s history, allergies, and possible drug interactions in seconds, flagging risks before they become problems.

Of course, because we’re dealing with sensitive medical information, trust and data quality are crucial. These systems only work when they’re built on accurate, connected data that healthcare professionals can rely on.

That’s where the latest health technology partnerships come in. By linking proven data platforms with smart AI tools, we’re already seeing real improvements overseas. In Europe, for example, these systems are helping clinicians catch potential drug errors early and prescribe with greater confidence.

There’s no reason South Africa can’t benefit in the same way. With clinics under pressure and resources stretched, technology that connects clean, reliable data with practical AI support could help reduce errors, save time, and make care safer for everyone.

Addressing local challenges

Medication errors can happen anywhere, but in South Africa the stakes are often higher. Our public clinics are exceptionally busy, staff are stretched, and doctors and nurses are doing their best under tough conditions. When you’re working under that kind of pressure, even a small mistake in a prescription can have serious consequences for a patient.

This is where AI can really help. Imagine a system that double-checks every prescription in real time, flagging possible drug interactions, incorrect dosages, or missing information before the medicine ever reaches the patient. It’s like having an extra set of expert eyes that never get tired. Instead of slowing things down, it speeds them up and gives clinicians peace of mind knowing they’re making the safest call for each patient.
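A hedged miniature of that ‘extra set of expert eyes’: the checks reduce to lookups against reference data. The interaction table and dose limit below are made-up examples, not a clinical source; a real system would query maintained drug databases.

```python
# Toy reference data for illustration only.
INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}
MAX_DAILY_MG = {"paracetamol": 4000}

def check_prescription(drugs: dict, allergies: set) -> list:
    """Return human-readable warnings for a prescription.

    `drugs` maps drug name to total daily dose in mg;
    `allergies` is the patient's known allergy list.
    """
    warnings = []
    names = list(drugs)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            reason = INTERACTIONS.get(frozenset({a, b}))
            if reason:
                warnings.append(f"{a} + {b}: {reason}")
    for drug, dose in drugs.items():
        limit = MAX_DAILY_MG.get(drug)
        if limit and dose > limit:
            warnings.append(f"{drug}: {dose} mg/day exceeds {limit} mg limit")
    for drug in drugs:
        if drug in allergies:
            warnings.append(f"{drug}: patient has a recorded allergy")
    return warnings

# Flags the warfarin + aspirin interaction.
print(check_prescription({"warfarin": 5, "aspirin": 100}, {"penicillin"}))
```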

For that to work, though, the data behind the system must be reliable and up to date. As South Africa moves toward a unified digital health record, the ability for these systems to connect to existing patient information becomes crucial. When healthcare professionals can trust the data they see on screen, AI becomes a genuine partner in care, helping them work faster, smarter, and safer.

Building confidence in AI

For AI to really work in healthcare, it must be clear and trustworthy. Doctors and nurses need to know why the system is recommending a specific drug or warning about a potential issue. If it can’t explain itself, people won’t use it, and rightly so.

That’s why transparency matters. The best AI tools don’t make decisions behind closed doors; they show their reasoning and help clinicians understand what’s happening in the background. When that’s combined with reliable, well-managed data, you start to build real confidence in the system.

It’s that trust, knowing the technology supports rather than replaces clinical judgment, that will make AI-assisted prescriptions part of everyday care, not just an interesting experiment.

A collaborative path forward

Technology on its own won’t fix South Africa’s healthcare challenges, but it can make a big difference in helping people do their jobs better. AI-assisted prescriptions are a good example of how smart tools can take some of the pressure off clinicians, reduce paperwork, and help patients get safer, faster care.

What excites me most is how practical this can be. Picture a nurse in a rural clinic who needs to prescribe medication but doesn’t have easy access to a specialist. With AI support, she can get accurate, instant guidance and know her patient is getting the right treatment. Or think about a busy hospital pharmacy, where an AI system automatically checks for drug interactions across hundreds of files in seconds, preventing errors before they happen.

This isn’t some far-off idea. The technology already exists and is being used successfully elsewhere. The goal now is to make sure it’s used in a way that supports our healthcare professionals, not replaces them. They are, and always will be, at the centre of care. If we get this right, AI can become a real partner in healthcare.

South Africa, PATH, and Wellcome Launch World’s First AI Framework for Mental Health at G20 Social Summit

Photo by Andres Siimon on Unsplash

As artificial intelligence (AI) increasingly enters the mental health space, from therapy chatbots to diagnostic tools, the world faces a critical question: can AI expand access to care without putting people at risk?

At the G20 Social Summit in Johannesburg, South Africa announced a landmark national effort to answer that question. The South African Health Products Regulatory Authority (SAHPRA) and PATH, with funding from Wellcome, have launched the Comprehensive AI Regulation and Evaluation for Mental Health (CARE MH) program to develop the world’s first regulatory framework for artificial intelligence in mental health.

CARE MH will establish a science-based and ethically robust regulatory framework that describes how AI tools need to be evaluated for safety, inclusivity, and effectiveness before they can be given market authorization and made available to potential service users. It aims to strengthen trust in digital health innovation and will serve as a model for other countries seeking to strike a balance between innovation and oversight.

“You wouldn’t give your child or loved one a vaccine or drug that hadn’t been tested or evaluated for safety,” said Bilal Mateen, Chief AI Officer at PATH. “We’re working to bring that same standard of rigorous evaluation to AI tools in mental health, because trust must be earned, not assumed.”

The framework will be developed and tested in South Africa, with the intention of extending its application across the African continent and to international partners.

“SAHPRA is proud to lead the development of Africa’s first regulatory framework for AI in mental health linked directly to market authorization,” said Christelna Reynecke, Chief Operations Officer of SAHPRA. “Our true goal is even more ambitious, though; we want to create a regulatory environment for AI4health in general, one that keeps pace with innovation, grounded in scientific rigor, ethical oversight, and public accountability.”

“Millions of people across the globe are being held back by mental health problems, which are projected to become the world’s biggest health burden by 2030,” said Professor Miranda Wolpert MBE, Director of Mental Health at Wellcome. “CARE MH is a vital step toward ensuring that AI technologies in this space are safe, effective, and equitable.”

The goal is simple: help more people, safely.

Through CARE MH, the partners behind this initiative are setting the foundation for the next generation of ethical, evidence-based AI in mental health. Supported by global experts from Audere Africa, the African Health Research Institute, the UK’s Centre for Excellence in Regulatory Science and Innovation for AI & Digital Health, the UK Medicines and Healthcare products Regulatory Agency, the University of Birmingham, the University of Washington, and the Wits Health Consortium, CARE MH is built to protect and empower people everywhere.

Opinion Piece: The Ethical Pulse of Progress – AI’s Promise and Peril in Healthcare

By Vishal Barapatre, Group Chief Technology Officer at In2IT Technologies

Artificial Intelligence (AI) is revolutionising healthcare as profoundly as the discovery of antibiotics or the invention of the stethoscope. From analysing X-rays in seconds to predicting disease outbreaks and tailoring treatment plans to individual patients, AI has opened new possibilities for precision medicine and increased efficiency. In emergency rooms, AI-driven diagnostic tools are already helping doctors detect heart attacks or strokes faster than human eyes alone.

However, as AI systems become increasingly embedded in the patient journey, from diagnosis to aftercare, they raise critical ethical questions. Who is accountable when an algorithm gets it wrong? How can we ensure that patient data remains confidential in the era of cloud computing? And how can healthcare institutions, often stretched thin on resources, balance innovation with responsibility?

When algorithms diagnose: the promise and the problem

AI’s strength lies in its ability to process massive amounts of data, such as medical histories, imaging scans, and lab results, and detect patterns that human clinicians might miss. This can dramatically improve diagnostic accuracy and treatment outcomes. For instance, AI models trained on thousands of mammogram images can help identify subtle indicators of breast cancer earlier than traditional methods.

However, the same data that powers AI can also introduce bias. If the datasets used to train an algorithm are skewed, say, over-representing one demographic group, the results may unfairly disadvantage others. A diagnostic model trained primarily on data from urban hospitals, for example, might misinterpret symptoms in patients from rural areas or underrepresented ethnic groups. Bias in healthcare AI isn’t just a technical flaw; it’s an ethical hazard with real-world consequences for patient trust and equity.

The privacy paradox

The integration of AI in healthcare requires access to vast quantities of sensitive data. This creates a privacy paradox: the more data AI consumes, the smarter it becomes, but the greater the risk to patient confidentiality. The digitisation of health records, combined with AI’s hunger for data, exposes systems to new vulnerabilities. A single breach can compromise thousands of medical histories, potentially leading to identity theft or misuse of personal health information. The paradox underscores the need for robust data protection measures in AI-driven healthcare systems.

Striking a balance between data utility and privacy protection has become one of the healthcare industry’s most pressing ethical dilemmas. Encryption, anonymisation, and strict access controls are essential, but technology alone isn’t enough. Patients need transparency: clear explanations of how their data is used, who has access to it, and what safeguards are in place. Ethical AI requires not only compliance with regulations but also the cultivation of trust through open communication.

Accountability in the age of automation

When an AI system makes a medical recommendation, who is ultimately responsible for the outcome – the algorithm’s developer, the healthcare provider, or the institution that deployed it? The opacity of AI decision-making, often referred to as the “black box” problem, complicates accountability and transparency. Clinicians may rely on algorithmic outputs without fully understanding how conclusions were reached. This can blur the line between human and machine judgment.

Accountability must therefore be clearly defined. Human oversight should remain central to any AI-powered decision, ensuring that technology supports rather than replaces clinical expertise. Ethical frameworks that mandate explainability, where AI systems must provide understandable reasoning for their outputs, are key to maintaining trust. Moreover, continuous auditing of AI models, which involves regularly reviewing and testing the system performance, can help detect and correct biases or errors before they lead to harm, thereby ensuring the ongoing ethical use of AI in healthcare.

Behind the code: who keeps AI ethical

While hospitals and clinics focus on patient care, many lack the internal capacity to manage the complex ethical, security, and technical demands of AI adoption. This is where third-party IT providers play a pivotal role. These partners act as the backbone of responsible innovation, ensuring that AI systems are implemented securely and ethically.

By embedding ethical principles into system design, such as fairness, transparency, and accountability, IT providers help healthcare institutions mitigate risks before they become crises. They also play a crucial role in securing sensitive data through advanced encryption protocols, cybersecurity monitoring, and compliance management. In many ways, they serve as both architects and custodians of ethical AI, ensuring that the pursuit of innovation does not compromise patient welfare.

Building a culture of ethical innovation

Ultimately, the ethics of AI in healthcare extend beyond technology; they are about culture and leadership. Hospitals and healthcare networks must foster environments where ethical reflection is as integral as technical innovation. This involves establishing multidisciplinary ethics committees, conducting bias audits, and training clinicians to critically evaluate and question AI outputs rather than accepting them without examination.

The future of AI in healthcare depends not on how advanced our algorithms become, but on how wisely we use them. Ethical frameworks, transparent governance, and responsible partnerships with IT providers can transform AI from a potential risk into a powerful ally. As the healthcare sector continues to evolve, the institutions that will thrive are those that remember that technology should serve humanity, not the other way around.

Using AI to Empower Primary Care Physicians

Photo by National Cancer Institute on Unsplash

By Henry Adams, Country Manager, InterSystems South Africa

When people think about artificial intelligence (AI) in healthcare, they often picture complex machines in high-tech hospitals. But some of the most exciting uses of AI are happening in primary care, right at the first point of contact between doctor and patient.

Globally, AI is helping general practitioners, nurses, and clinicians make faster, more accurate decisions by giving them access to clean, connected data. It helps detect early signs of disease, spot patterns across patient populations, and ensure the right people get the right care sooner.

South Africa is not there yet, but that is exactly why we should be paying attention.

Learning from what is working elsewhere

In countries where healthcare data is already digitised and connected, AI-assisted tools are starting to prove their worth. In parts of Europe, AI systems are helping GPs analyse symptoms, lab results and patient histories to identify possible conditions much earlier. In the US, data platforms are used to surface insights from millions of patient records, helping clinicians identify patterns that might otherwise go unnoticed.

At InterSystems, we have seen firsthand how this combination of reliable data and intelligent technology is changing the way care is delivered. In the UK, our data platform helps care providers securely connect to patient information across places of care and multiple systems, making it easier for AI tools to interpret symptoms in context. In France, AI-assisted prescriptions through partners like Posos are helping doctors reduce errors and improve treatment safety.

These examples show what is possible when data, people and technology come together in the right way.

Why data comes first

AI is only as powerful as the data it works with. If a clinician’s system lacks complete or up-to-date patient information, the AI cannot provide reliable support. That is why data quality and interoperability are so important; they form the foundation for everything else.

Many countries that are seeing success with AI in primary care started by getting their data in order, building connected health records, standardising information, and ensuring privacy and compliance at every step. Once those pieces were in place, they could start introducing AI tools that help doctors and nurses make better decisions without adding extra admin or complexity.

Again, in South Africa, we are not quite there yet, but we are heading in the right direction. There are ongoing efforts to digitise health records and bring together fragmented systems. As that process continues, it will open the door for more advanced AI-driven support tools, from diagnosis assistance to population health management.

What this could mean for South Africa

Imagine a community clinic in Limpopo or the Eastern Cape, where a doctor sees dozens of patients a day. With AI support, they could instantly access each patient’s medical history, flag high-risk symptoms, or receive early alerts about potential complications like diabetes or hypertension.

AI will not replace the doctor or their judgment. It simply gives them more context and better information. It is like having a quiet assistant in the background, helping spot what is easy to miss when you are under pressure.

This kind of technology could also help identify broader health trends, guiding public health decisions and making sure resources are sent where they are needed most. It is not about high-end tech for big hospitals; it is about making everyday healthcare smarter, safer and more efficient for everyone.

Building the foundations

Before we can get there, we need to focus on the basics: connected systems, reliable data, and trust. AI tools cannot function properly in silos. They need access to consistent, secure information, the kind that interoperable platforms like InterSystems IRIS for Health are designed to manage.

Once we have that in place, the rest becomes achievable. Doctors can use AI to compare patient data against proven medical knowledge bases. Clinics can share insights securely across regions. And the healthcare system becomes more proactive instead of reactive.

It is easy to look at what is happening overseas and feel that South Africa is far behind. But I see it differently. Every success story abroad gives us a roadmap, lessons we can adapt to our own realities. We do not have to reinvent the wheel; we just have to make sure it is fit for our local terrain.