Tag: artificial intelligence

South Africa, PATH, and Wellcome Launch World’s First AI Framework for Mental Health at G20 Social Summit

Photo by Andres Siimon on Unsplash

As artificial intelligence (AI) increasingly enters the mental health space, from therapy chatbots to diagnostic tools, the world faces a critical question: can AI expand access to care without putting people at risk?

At the G20 Social Summit in Johannesburg, South Africa announced a landmark national effort to answer that question. The South African Health Products Regulatory Authority (SAHPRA) and PATH, with funding from Wellcome, have launched the Comprehensive AI Regulation and Evaluation for Mental Health (CARE MH) program to develop the world’s first regulatory framework for artificial intelligence in mental health.

CARE MH will establish a science-based and ethically robust regulatory framework that describes how AI tools need to be evaluated for safety, inclusivity, and effectiveness before they can be given market authorization and made available to potential service users. It aims to strengthen trust in digital health innovation and will serve as a model for other countries seeking to strike a balance between innovation and oversight.

“You wouldn’t give your child or loved one a vaccine or drug that hadn’t been tested or evaluated for safety,” said Bilal Mateen, Chief AI Officer at PATH. “We’re working to bring that same standard of rigorous evaluation to AI tools in mental health, because trust must be earned, not assumed.”

The framework will be developed and tested in South Africa, with the intention of extending its application across the African continent and to international partners.

“SAHPRA is proud to lead the development of Africa’s first regulatory framework for AI in mental health linked directly to market authorization,” said Christelna Reynecke, Chief Operations Officer of SAHPRA. “Our true goal is even more ambitious, though; we want to create a regulatory environment for AI4health in general, one that keeps pace with innovation, grounded in scientific rigor, ethical oversight, and public accountability.”

“Millions of people across the globe are being held back by mental health problems, which are projected to become the world’s biggest health burden by 2030,” said Professor Miranda Wolpert MBE, Director of Mental Health at Wellcome. “CARE MH is a vital step toward ensuring that AI technologies in this space are safe, effective, and equitable.”

The goal is simple: help more people, safely.

Through CARE MH, the partners behind this initiative are setting the foundation for the next generation of ethical, evidence-based AI in mental health. Supported by global experts from Audere Africa, the African Health Research Institute, the UK’s Centre for Excellence in Regulatory Science and Innovation for AI & Digital Health, the UK Medicines and Healthcare products Regulatory Agency, the University of Birmingham, the University of Washington, and the Wits Health Consortium, CARE MH is built to protect and empower people everywhere.

Opinion Piece: The Ethical Pulse of Progress – AI’s Promise and Peril in Healthcare

By Vishal Barapatre, Group Chief Technology Officer at In2IT Technologies

Artificial Intelligence (AI) is revolutionising healthcare as profoundly as the discovery of antibiotics or the invention of the stethoscope. From analysing X-rays in seconds to predicting disease outbreaks and tailoring treatment plans to individual patients, AI has opened new possibilities for precision medicine and increased efficiency. In emergency rooms, AI-driven diagnostic tools are already helping doctors detect heart attacks or strokes faster than human eyes alone.

However, as AI systems become increasingly embedded in the patient journey, from diagnosis to aftercare, they raise critical ethical questions. Who is accountable when an algorithm gets it wrong? How can we ensure that patient data remains confidential in the era of cloud computing? And how can healthcare institutions, often stretched thin on resources, balance innovation with responsibility?

When algorithms diagnose: the promise and the problem

AI’s strength lies in its ability to process massive amounts of data, such as medical histories, imaging scans, and lab results, and detect patterns that human clinicians might miss. This can dramatically improve diagnostic accuracy and treatment outcomes. For instance, AI models trained on thousands of mammogram images can help identify subtle indicators of breast cancer earlier than traditional methods.

However, the same data that powers AI can also introduce bias. If the datasets used to train an algorithm are skewed, say, over-representing one demographic group, the results may unfairly disadvantage others. A diagnostic model trained primarily on data from urban hospitals, for example, might misinterpret symptoms in patients from rural areas or underrepresented ethnic groups. Bias in healthcare AI isn’t just a technical flaw; it’s an ethical hazard with real-world consequences for patient trust and equity.

The privacy paradox

The integration of AI in healthcare requires access to vast quantities of sensitive data. This creates a privacy paradox: the more data AI consumes, the smarter it becomes, but the greater the risk to patient confidentiality. The digitisation of health records, combined with AI’s hunger for data, exposes systems to new vulnerabilities. A single breach can compromise thousands of medical histories, potentially leading to identity theft or misuse of personal health information. The paradox underscores the need for robust data protection measures in AI-driven healthcare systems.

Striking a balance between data utility and privacy protection has become one of the healthcare industry’s most pressing ethical dilemmas. Encryption, anonymisation, and strict access controls are essential, but technology alone isn’t enough. Patients need transparency: clear explanations of how their data is used, who has access to it, and what safeguards are in place. Ethical AI requires not only compliance with regulations but also the cultivation of trust through open communication.

Accountability in the age of automation

When an AI system makes a medical recommendation, who is ultimately responsible for the outcome – the algorithm’s developer, the healthcare provider, or the institution that deployed it? The opacity of AI decision-making, often referred to as the “black box” problem, complicates accountability and transparency. Clinicians may rely on algorithmic outputs without fully understanding how conclusions were reached. This can blur the line between human and machine judgment.

Accountability must therefore be clearly defined. Human oversight should remain central to any AI-powered decision, ensuring that technology supports rather than replaces clinical expertise. Ethical frameworks that mandate explainability, where AI systems must provide understandable reasoning for their outputs, are key to maintaining trust. Moreover, continuous auditing of AI models, which involves regularly reviewing and testing the system performance, can help detect and correct biases or errors before they lead to harm, thereby ensuring the ongoing ethical use of AI in healthcare.

Behind the code: who keeps AI ethical

While hospitals and clinics focus on patient care, many lack the internal capacity to manage the complex ethical, security, and technical demands of AI adoption. This is where third-party IT providers play a pivotal role. These partners act as the backbone of responsible innovation, ensuring that AI systems are implemented securely and ethically.

By embedding ethical principles into system design, such as fairness, transparency, and accountability, IT providers help healthcare institutions mitigate risks before they become crises. They also play a crucial role in securing sensitive data through advanced encryption protocols, cybersecurity monitoring, and compliance management. In many ways, they serve as both architects and custodians of ethical AI, ensuring that the pursuit of innovation does not compromise patient welfare.

Building a culture of ethical innovation

Ultimately, the ethics of AI in healthcare extend beyond technology; they are about culture and leadership. Hospitals and healthcare networks must foster environments where ethical reflection is as integral as technical innovation. This involves establishing multidisciplinary ethics committees, conducting bias audits, and training clinicians to critically evaluate and question AI outputs rather than accepting them without examination.

The future of AI in healthcare depends not on how advanced our algorithms become, but on how wisely we use them. Ethical frameworks, transparent governance, and responsible partnerships with IT providers can transform AI from a potential risk into a powerful ally. As the healthcare sector continues to evolve, the institutions that will thrive are those that remember that technology should serve humanity, not the other way around.

Using AI to Empower Care Physicians

Photo by National Cancer Institute on Unsplash

By Henry Adams, Country Manager, InterSystems South Africa

When people think about artificial intelligence (AI) in healthcare, they often picture complex machines in high-tech hospitals. But some of the most exciting uses of AI are happening in primary care, right at the first point of contact between doctor and patient.

Globally, AI is helping general practitioners, nurses, and clinicians make faster, more accurate decisions by giving them access to clean, connected data. It helps detect early signs of disease, spot patterns across patient populations, and ensure the right people get the right care sooner.

South Africa is not there yet, but that is exactly why we should be paying attention.

Learning from what is working elsewhere

In countries where healthcare data is already digitised and connected, AI-assisted tools are starting to prove their worth. In parts of Europe, AI systems are helping GPs analyse symptoms, lab results and patient histories to identify possible conditions much earlier. In the US, data platforms are used to surface insights from millions of patient records, helping clinicians identify patterns that might otherwise go unnoticed.

At InterSystems, we have seen firsthand how this combination of reliable data and intelligent technology is changing the way care is delivered. In the UK, our data platform helps care providers connect securely to patient information across multiple systems and places of care, making it easier for AI tools to interpret symptoms in context. In France, AI-assisted prescriptions through partners like Posos are helping doctors reduce errors and improve treatment safety.

These examples show what is possible when data, people and technology come together in the right way.

Why data comes first

AI is only as powerful as the data it works with. If a clinician’s system lacks complete or up-to-date patient information, the AI cannot provide reliable support. That is why data quality and interoperability are so important; they form the foundation for everything else.

Many countries that are seeing success with AI in primary care started by getting their data in order, building connected health records, standardising information, and ensuring privacy and compliance at every step. Once those pieces were in place, they could start introducing AI tools that help doctors and nurses make better decisions without adding extra admin or complexity.

Again, in South Africa, we are not quite there yet, but we are heading in the right direction. There are ongoing efforts to digitise health records and bring together fragmented systems. As that process continues, it will open the door for more advanced AI-driven support tools, from diagnosis assistance to population health management.

What this could mean for South Africa

Imagine a community clinic in Limpopo or the Eastern Cape, where a doctor sees dozens of patients a day. With AI support, they could instantly access each patient’s medical history, flag high-risk symptoms, or receive early alerts about potential complications like diabetes or hypertension.

AI will not replace doctors or their judgment. It simply gives them more context and better information. It is like having a quiet assistant in the background, helping spot what is easy to miss when you are under pressure.

This kind of technology could also help identify broader health trends, guiding public health decisions and making sure resources are sent where they are needed most. It is not about high-end tech for big hospitals; it is about making everyday healthcare smarter, safer and more efficient for everyone.

Building the foundations

Before we can get there, we need to focus on the basics: connected systems, reliable data, and trust. AI tools cannot function properly in silos. They need access to consistent, secure information, the kind that interoperable platforms like InterSystems IRIS for Health are designed to manage.

Once we have that in place, the rest becomes achievable. Doctors can use AI to compare patient data against proven medical knowledge bases. Clinics can share insights securely across regions. And the healthcare system becomes more proactive instead of reactive.

It is easy to look at what is happening overseas and feel that South Africa is far behind. But I see it differently. Every success story abroad gives us a roadmap, lessons we can adapt to our own realities. We do not have to reinvent the wheel; we just have to make sure it is fit for our local terrain.

Study Highlights the Limits of AI in Heart Care

Human heart. Credit: Scientific Animations CC4.0

There are limits in applying AI to images of the heart, a new study from the Smidt Heart Institute at Cedars-Sinai reveals. The findings were published in the Journal of the American Society of Echocardiography.

Investigators trained multiple artificial intelligence models to read images from echocardiograms, a type of ultrasound test that evaluates the structure and function of the heart. Their goal was to determine whether AI could use these images to derive measures such as inflammation and scarring that are normally obtained through another, more costly test called cardiac magnetic resonance imaging (CMRI). By examining findings from 1,453 patients who had undergone both tests, they found the AI models could not accomplish this task.

“As compared to echocardiograms, cardiac MRI machines are expensive and not available for many patients, especially those in rural areas, so we had hoped that AI could reduce the need for it,” said Alan Kwan, MD, assistant professor in the Department of Cardiology in the Smidt Heart Institute at Cedars-Sinai and co-senior author of the study. “Our results showed the limited powers of AI in this area.”

Source: Cedars-Sinai Medical Center

HealthTech: Navigating Legal Solutions for Africa’s Growing HealthTech Sector

Photo by Kamil Switalski on Unsplash

HealthTech is transforming healthcare through AI, mobile applications, wearable devices, telemedicine, and big data analytics. While these advances offer enormous potential to improve patient outcomes and operational efficiency, they also raise complex legal and regulatory challenges – spanning intellectual property, data privacy, licensing, corporate governance, funding, taxation, and litigation.

Webber Wentzel’s Navigating HealthTech Legal Solutions highlights the firm’s extensive experience in helping innovators, investors, and healthcare providers across Africa address the legal and regulatory complexities of HealthTech. Mapping out the complexities at play across both the technology and the law, this resource brings together Webber Wentzel’s cross-practice teams to give clients a holistic perspective on opportunities, risks, and emerging trends in healthcare innovation.

“Our clients are leading the way in healthcare innovation, and they need legal partners who understand the sector end-to-end,” says Bernadette Versfeld, head of the Consumer sector. “This resource demonstrates how we help businesses navigate regulatory hurdles, adopt new technologies, structure investments effectively, and manage risk, all while enabling growth and innovation.”

Drawing on extensive experience working with healthcare companies, insurers, tech providers, investors, and regulators across Africa, the report provides insights into medical device licensing, HealthTech investment structuring, protecting personal health data, managing litigation risks, and compliance with South Africa’s National Health Insurance Act.

“As part of our ongoing commitment to supporting Africa’s healthcare sector, Webber Wentzel continues to advise on emerging trends, innovative technologies, and regulatory developments. By combining deep sector knowledge with cross-practice expertise, we help clients not just respond to change but shape it, empowering them to navigate the complex intersection of healthcare and technology,” adds Versfeld.

Access Navigating HealthTech Legal Solutions here.

Doctors Who Use AI Viewed Negatively by Their Peers, Study Shows

Johns Hopkins researchers find that despite pressure on clinicians to be early adopters of AI, many face scepticism from peers for using it

Photo by Andres Siimon on Unsplash

Doctors who use artificial intelligence at work risk having their colleagues deem them less competent for it, according to a recent Johns Hopkins University study.

While generative AI holds significant promise for advancing health care, a new study finds its use in medical decision-making affects how physicians are perceived by their colleagues. The research shows that doctors who rely primarily on generative AI for decision-making face considerable scepticism from fellow clinicians, who associate their use of AI with a lack of clinical skill and overall competence, resulting in a diminished perceived quality of patient care.

The research included a diverse group of clinicians from a major hospital system, involving attending physicians, residents, fellows, and advanced practice providers. Results of the study were published in npj Digital Medicine.

Stigma stunts better care

The findings may indicate a social barrier to AI adoption in health care settings, which could slow advances that might improve patient care.

“AI is already unmistakably part of medicine,” says Tinglong Dai, professor of business at the Johns Hopkins Carey Business School and co-corresponding author of the study. “What surprised us is that doctors who use it in making medical decisions can be perceived by their peers as less capable. That kind of stigma, not the technology itself, may be an obstacle to better care.”

The study, conducted by researchers at Johns Hopkins University, involved a randomised experiment in which 276 practising clinicians evaluated different scenarios: a physician using no AI, one using AI as a primary decision-making tool, and another using it for verification. The research found that the more physicians depended on AI, the greater the “competence penalty” they faced: they were viewed more sceptically by their peers than physicians who did not rely on AI.

“In the age of AI, human psychology remains the ultimate variable,” says Haiyang Yang, first author of the study and academic program director of the Master of Science in Management program at the Carey Business School. “The way people perceive AI use can matter just as much as, or even more than, the performance of the technology itself.”

Skipping AI equalled more respect

According to the study, peer perception suffers for doctors who rely on AI. Framing generative AI as a “second opinion” or verification tool partially mitigated negative perceptions from peers, but did not eliminate them entirely. Not using GenAI, however, resulted in the most favourable peer perceptions.

The findings align with theories that suggest perceived dependence on an external source like AI can be seen as a weakness by clinicians.

Ironically, while visible use of GenAI can undermine a physician’s perceived clinical expertise among peers, the study also found that clinicians still generally acknowledge its value for improving the accuracy of clinical assessments, and they view institutionally customised GenAI as even more useful.

The study’s collaborative nature led to practical suggestions for implementing GenAI in health care settings in ways that balance innovation with maintaining professional trust and physician reputation, the researchers note.

“Physicians place a high value on clinical expertise, and as AI becomes part of the future of medicine, it’s important to recognise its potential to complement – not replace – clinical judgment, ultimately strengthening decision making and improving patient care,” said Risa Wolf, co-corresponding author of the research and associate professor of pediatric endocrinology at Johns Hopkins School of Medicine with a joint appointment at the Carey Business School.

Source: Johns Hopkins University

Human Instruction with AI Guidance Gives the Best Results in Neurosurgical Training

Study has implications beyond medical education, suggesting other fields could benefit from AI-enhanced training

Artificial intelligence (AI) is becoming a powerful new tool in training and education, including in the field of neurosurgery. Yet a new study suggests that AI tutoring provides better results when paired with human instruction.

Researchers at the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) of McGill University are studying how AI and virtual reality (VR) can improve the training and performance of brain surgeons. They simulate brain surgeries using VR, monitor students’ performance using AI and provide continuous verbal feedback on how students can improve performance and prevent errors. Previous research has shown that an AI-powered intelligent tutoring system developed at the Centre outperformed expert human teachers; however, those instructors had not been given the trainees’ AI performance data.

In their most recent study, published in JAMA Surgery, the researchers recruited 87 medical students from four Quebec medical schools and divided them into three groups: one trained with AI-only verbal feedback, one with expert instructor feedback, and one with expert feedback informed by real-time AI performance data. The team recorded the students’ performance, including how well and how quickly their surgical skills improved while undergoing the different types of training.

They found that students receiving AI-augmented, personalised feedback from a human instructor outperformed both other groups in surgical performance and skill transfer. This group also demonstrated significantly better risk management for bleeding and tissue injury – two critical measures of surgical expertise. The study suggests that while intelligent tutoring systems can provide standardised, data-driven assessments, the integration of human expertise enhances engagement and ensures that feedback is contextualised and adaptive.

“Our findings underscore the importance of human input in AI-driven surgical education,” said lead study author Bianca Giglio. “When expert instructors used AI performance data to deliver tailored, real-time feedback, trainees learned faster and transferred their skills more effectively.”

While this study was specific to neurosurgical training, its findings could carry over to other professions where students must acquire highly technical and complex skills in high-pressure environments.

“AI is not replacing educators – it’s empowering them,” added senior author Dr Rolando Del Maestro, a neurosurgeon and current Director of the Centre. “By merging AI’s analytical power with the critical guidance of experienced instructors, we are moving closer to creating the ‘Intelligent Operating Room’ of the future capable of assessing and training learners while minimising errors during human surgical procedures.”

Source: McGill University

Doctors’ Human Touch Still Needed in the AI Healthcare Revolution

AI-based medicine will revolutionise care including for Alzheimer’s and diabetes, predicts a technology expert, but it must be accessible to all patients

AI image created with Gencraft

Healing with Artificial Intelligence, written by technology expert Daniele Caligiore, uses the latest scientific research to highlight key innovations assisted by AI, such as diagnostic imaging and surgical robots.

From exoskeletons that help spinal injury patients walk to algorithms that can predict the onset of dementia years in advance, Caligiore explores what he describes as a ‘revolution’ that will change healthcare forever.

Economically, the market for AI in healthcare is experiencing rapid growth, with forecasts predicting an increase in value from around USD 11 billion in 2021 to nearly USD 188 billion by 2030, reflecting an annual growth rate of 37%. AI is already being used in some countries, for example to search through genetic data for disease markers, or to assist with scheduling and other administrative tasks – and this trend is set to continue.

However, the author tempers his predictions of progress by warning that these technologies may widen existing inequalities. Caligiore suggests that AI-based medicine must be available to all people, regardless of where they live or how much they earn, and that people in low-income nations must not be excluded from the cutting-edge care that wealthier nations can access.

Other challenges posed by the advancement of AI in healthcare include who takes responsibility for treatment decisions, especially when a procedure goes wrong. This is a particular challenge given widespread concerns around explainability: many advanced AI systems operate as black boxes, making decisions through complex processes that even their creators cannot fully understand or explain.

Caligiore says AI should support doctors and patients, not replace doctors who, says the author, have a ‘unique ability to offer empathy, understanding, and emotional support’.

“AI should be viewed as a tool, not a colleague, and it should always be seen as a support, never a replacement,” writes Caligiore.

“It is important to find the right balance in using AI tools, both for doctors and patients. Patients can use AI to learn more about their health, such as what diseases may be associated with their symptoms or what lifestyle changes may help prevent illness. However, this does not mean AI should replace doctors.”

Despite his warnings, Caligiore is largely optimistic about the impact of AI in healthcare: “Like a microscope detecting damaged cells or a map highlighting brain activity during specific tasks, AI can uncover valuable insights that might go unnoticed, aiding in more accurate and personalized diagnoses and treatments,” he says.

In any case, Caligiore predicts the healthcare landscape will look ‘dramatically different’ in a few years, with technology acting as a ‘magnifying glass for medicine’ to enable doctors to observe the human body with greater precision and detail.

Examples of where AI will make a profound impact in healthcare include regenerative medicine, where gene and stem cell therapies repair damaged cells and organs. Spinal cord injury patients are among those who could benefit.

AI may also enable personalised therapies, suggesting treatments tailored to specific individuals, often based on their unique genetic profile. Studies are under way into targeting different tremor types in Parkinson’s disease and different breast cancer subtypes.

The convergence of regenerative medicine, genetically modified organisms (GMOs), and AI is the next frontier in medicine, Caligiore suggests. GMOs – living organisms whose genetic material has been altered through genetic engineering techniques – have already paved the way for personalised gene therapies.

Blending real and virtual worlds may also prove useful in healthcare – for example, the ‘metaverse’, where patients participate in group therapy through avatars, or ‘digital twins’, AI simulations of a patient’s body and brain that let doctors identify the underlying causes of disease and simulate the effects of different therapies for a specific patient, supporting more informed decisions.

These advances and others will reshape the doctor-patient relationship, according to Healing with Artificial Intelligence, but the author suggests the key is for patients and clinicians to keep a critical mindset about AI.

Caligiore warns that the role of physicians will evolve as AI becomes more integrated into healthcare, but the need for human interaction will remain ‘central to patient care’.

“While healthcare professionals must develop technical skills to use AI tools, they should also nurture and enhance qualities that AI cannot replicate – such as soft skills and emotional intelligence. These human traits are essential for introducing an emotional component into work environments,” he explains.

Source: Taylor & Francis Group

New Research Finds Surprises in ChatGPT’s Diagnosis of Medical Symptoms

The popular large language model performs better than expected but still has some knowledge gaps – and hallucinations

When people worry that they’re getting sick, they are increasingly turning to generative artificial intelligence like ChatGPT for a diagnosis. But how accurate are the answers that AI gives out?

Research recently published in the journal iScience puts ChatGPT and its large language models to the test, with a few surprising conclusions.

Ahmed Abdeen Hamed – a research fellow for the Thomas J. Watson College of Engineering and Applied Science’s School of Systems Science and Industrial Engineering at Binghamton University – led the study, with collaborators from AGH University of Krakow, Poland; Howard University; and the University of Vermont.

As part of Professor Luis M. Rocha’s Complex Adaptive Systems and Computational Intelligence Lab, Hamed developed a machine-learning algorithm last year that he calls xFakeSci. It can detect up to 94% of bogus scientific papers — nearly twice as successfully as more common data-mining techniques. He sees this new research as the next step to verify the biomedical generative capabilities of large language models.

“People talk to ChatGPT all the time these days, and they say: ‘I have these symptoms. Do I have cancer? Do I have cardiac arrest? Should I be getting treatment?’” Hamed said. “It can be a very dangerous business, so we wanted to see what would happen if we asked these questions, what sort of answers we got and how these answers could be verified from the biomedical literature.”

The researchers tested ChatGPT for disease terms and three types of associations: drug names, genetics and symptoms. The AI showed high accuracy in identifying disease terms (88–97%), drug names (90–91%) and genetic information (88–98%). Hamed admitted he thought it would be “at most 25% accuracy.”

“The exciting result was ChatGPT said cancer is a disease, hypertension is a disease, fever is a symptom, Remdesivir is a drug and BRCA is a gene related to breast cancer,” he said. “Incredible, absolutely incredible!”

Symptom identification, however, scored lower (49–61%), and the reason may be how the large language models are trained. Doctors and researchers use biomedical ontologies to define and organise terms and relationships for consistent data representation and knowledge-sharing, but users enter more informal descriptions.

“ChatGPT uses more of a friendly and social language, because it’s supposed to be communicating with average people. In medical literature, people use proper names,” Hamed said. “The LLM is apparently trying to simplify the definition of these symptoms, because there is a lot of traffic asking such questions, so it started to minimize the formalities of medical language to appeal to those users.”

One puzzling result stood out. The National Institutes of Health maintains a database called GenBank, which gives an accession number to every identified DNA sequence. It’s usually a combination of letters and numbers. For example, the designation for the Breast Cancer 1 gene (BRCA1) is NM_007294.4.

When asked for these numbers as part of the genetic information testing, ChatGPT just made them up – a phenomenon known as “hallucinating.” Hamed sees this as a major failing amid so many other positive results.

“Maybe there is an opportunity here that we can start introducing these biomedical ontologies to the LLMs to provide much higher accuracy, get rid of all the hallucinations and make these tools into something amazing,” he said.

Hamed’s interest in LLMs began in 2023, when he discovered ChatGPT and heard about the issues regarding fact-checking. His goal is to expose the flaws so data scientists can adjust the models as needed and make them better.

“If I am analysing knowledge, I want to make sure that I remove anything that may seem fishy before I build my theories and make something that is not accurate,” he said.

Source: Binghamton University

Improving Prediction of Worsening Knee Osteoarthritis with an AI-assisted Model

New model that combines MRI, biochemical, and clinical information shows potential to enhance care

Illustration highlighting the integration of MRI radiomics and biochemical biomarkers for knee osteoarthritis progression prediction. Created with Biorender.

Image credit: Wang T, et al., 2025, PLOS Medicine, CC-BY 4.0

An artificial intelligence (AI)-assisted model that combines a patient’s MRI, biochemical, and clinical information shows preliminary promise in improving predictions of whether their knee osteoarthritis may soon worsen. Ting Wang of Chongqing Medical University, China, and colleagues present this model August 21st in the open-access journal PLOS Medicine.

In knee osteoarthritis, cartilage in the knee joint gradually wears away, causing pain and stiffness. It affects an estimated 303.1 million people worldwide and can lead to the need for total knee replacement. Being able to better predict how a person’s knee osteoarthritis may worsen in the near future could help inform more timely treatment. Prior research suggests that computational models combining multiple types of data – including a patient’s MRI results, clinical assessments, and blood and urine biochemical tests – could enhance such predictions.

The integration of all three types of information in a single predictive model has not been widely reported. To address that gap, Wang and colleagues utilized data from the Foundation of the National Institutes of Health Osteoarthritis Biomarkers Consortium on 594 people with knee osteoarthritis, including their biochemical test results, clinical data, and a total of 1,753 knee MRIs captured over a 2-year timespan.

With the help of AI tools, the researchers used half of the data to develop a predictive model combining all three data types. Then, they used the other half of the data to test the model, which they named the Load-Bearing Tissue Radiomic plus Biochemical biomarker and Clinical variable Model (LBTRBC-M).

In the tests, the LBTRBC-M showed good accuracy in using a patient’s MRI, biochemical and clinical data to predict whether, within the next two years, they would experience worsening pain alone, worsening pain alongside joint space narrowing in the knee (an indicator of structural worsening), joint space narrowing alone, or no worsening at all.

The researchers also had seven resident physicians use the model to assist their own predictions of worsening knee osteoarthritis, finding that it improved their accuracy from 46.9 to 65.4 percent.

These findings suggest that a model like LBTRBC-M could help enhance knee osteoarthritis care. However, further model refinement and validation in additional groups of patients is needed.

The authors add, “Our study shows that combining deep learning with longitudinal MRI radiomics and biochemical biomarkers significantly improves the prediction of knee osteoarthritis progression—potentially enabling earlier, more personalized intervention.”

The authors state, “This work is the result of years of collaboration across multiple disciplines, and we were especially excited to see how non-invasive imaging biomarkers could be leveraged to support individualized patient care.”

Co-author Prof. Changhai Ding notes, “This study marks a step forward in using artificial intelligence to extract meaningful clinical signals from complex datasets in musculoskeletal health.”

Provided by PLOS