
Human Instruction with AI Guidance Gives the Best Results in Neurosurgical Training

Study has implications beyond medical education, suggesting other fields could benefit from AI-enhanced training

Artificial intelligence (AI) is becoming a powerful new tool in training and education, including in the field of neurosurgery. Yet a new study suggests that AI tutoring provides better results when paired with human instruction.

Researchers at the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) of McGill University are studying how AI and virtual reality (VR) can improve the training and performance of brain surgeons. They simulate brain surgeries using VR, monitor students’ performance using AI and provide continuous verbal feedback on how students can improve performance and prevent errors. Previous research showed that an AI-powered intelligent tutoring system developed at the Centre outperformed expert human teachers – but those instructors had not been given the AI’s data on trainee performance.

In their most recent study, published in JAMA Surgery, the researchers recruited 87 medical students from four Quebec medical schools and divided them into three groups: one trained with AI-only verbal feedback, one with expert instructor feedback, and one with expert feedback informed by real-time AI performance data. The team recorded the students’ performance, including how well and how quickly their surgical skills improved while undergoing the different types of training.

They found that students receiving AI-augmented, personalised feedback from a human instructor outperformed both other groups in surgical performance and skill transfer. This group also demonstrated significantly better risk management for bleeding and tissue injury – two critical measures of surgical expertise. The study suggests that while intelligent tutoring systems can provide standardised, data-driven assessments, the integration of human expertise enhances engagement and ensures that feedback is contextualised and adaptive.

“Our findings underscore the importance of human input in AI-driven surgical education,” said lead study author Bianca Giglio. “When expert instructors used AI performance data to deliver tailored, real-time feedback, trainees learned faster and transferred their skills more effectively.”

While this study was specific to neurosurgical training, its findings could carry over to other professions where students must acquire highly technical and complex skills in high-pressure environments.

“AI is not replacing educators – it’s empowering them,” added senior author Dr Rolando Del Maestro, a neurosurgeon and current Director of the Centre. “By merging AI’s analytical power with the critical guidance of experienced instructors, we are moving closer to creating the ‘Intelligent Operating Room’ of the future, capable of assessing and training learners while minimising errors during human surgical procedures.”

Source: McGill University

Doctors’ Human Touch Still Needed in the AI Healthcare Revolution

AI-based medicine will revolutionise care including for Alzheimer’s and diabetes, predicts a technology expert, but it must be accessible to all patients


Healing with Artificial Intelligence, written by technology expert Daniele Caligiore, uses the latest scientific research to highlight key AI-assisted innovations such as diagnostic imaging and surgical robots.

From exoskeletons that help spinal injury patients walk to algorithms that can predict the onset of dementia years in advance, Caligiore explores what he describes as a ‘revolution’ that will change healthcare forever.

Economically, the market for AI in healthcare is experiencing rapid growth, with forecasts predicting an increase in value from around USD 11 billion in 2021 to nearly USD 188 billion by 2030, reflecting an annual growth rate of 37%. AI is already being used in some countries, for example to search through genetic data for disease markers, or to assist with scheduling and other administrative tasks – and this trend is set to continue.
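As a quick sanity check, the quoted figures are internally consistent: growing from USD 11 billion to USD 188 billion over the nine years from 2021 to 2030 implies a compound annual growth rate of roughly 37%. The snippet below is an illustration of the arithmetic, not part of the source forecast.

```python
# Does ~37% annual growth take USD 11B (2021) to ~USD 188B (2030)?
start, end, years = 11e9, 188e9, 9  # nine compounding periods

cagr = (end / start) ** (1 / years) - 1
print(f"implied annual growth: {cagr:.1%}")                    # ~37.1%
print(f"11B at 37%/yr for 9 years: {start * 1.37**years / 1e9:.0f}B")  # ~187B
```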

However, the author tempers his predictions of progress with a warning that these technologies may widen existing inequality. Caligiore argues that AI-based medicine must be available to everyone, regardless of where they live or how much they earn, and that people in low-income nations must not be excluded from the cutting-edge care that wealthier nations can access.

Other challenges posed by the advancement of AI in healthcare include determining who takes responsibility for treatment decisions, especially when a procedure goes wrong. This is a particular challenge given widespread concerns around explainable AI: many advanced AI systems operate as black boxes, making decisions through complex processes that even their creators cannot fully understand or explain.

Caligiore says AI should support doctors and patients, not replace doctors who, says the author, have a ‘unique ability to offer empathy, understanding, and emotional support’.

“AI should be viewed as a tool, not a colleague, and it should always be seen as a support, never a replacement,” writes Caligiore.

“It is important to find the right balance in using AI tools, both for doctors and patients. Patients can use AI to learn more about their health, such as what diseases may be associated with their symptoms or what lifestyle changes may help prevent illness. However, this does not mean AI should replace doctors.”

Despite his warnings, Caligiore is largely optimistic about the impact of AI in healthcare: “Like a microscope detecting damaged cells or a map highlighting brain activity during specific tasks, AI can uncover valuable insights that might go unnoticed, aiding in more accurate and personalized diagnoses and treatments,” he says.

In any case, Caligiore predicts the healthcare landscape will look ‘dramatically different’ in a few years, with technology acting as a ‘magnifying glass for medicine’ to enable doctors to observe the human body with greater precision and detail.

Examples of where AI will make a profound impact in healthcare include regenerative medicine, where gene and stem cell therapies repair damaged cells and organs. Spinal cord injury patients are among those who could benefit.

AI may also enable personalised therapies, suggesting treatments tailored to specific individuals, often based on their unique genetic profile. Studies are underway into targeting different tremor types in Parkinson’s disease and distinct breast cancer subtypes.

The convergence of regenerative medicine, genetically modified organisms (GMOs) and AI is the next frontier in medicine, Caligiore suggests. GMOs – living organisms whose genetic material has been altered through genetic engineering techniques – have already paved the way for personalised gene therapies.

Blending real and virtual worlds may also prove useful to healthcare. Examples include the ‘metaverse’ – group therapy in which patients participate via avatars – and ‘digital twins’ – AI simulations of a patient’s body and brain that let doctors identify the underlying causes of disease and simulate the effects of various therapies for specific patients, helping them make more informed decisions.

These advances and others will reshape the doctor-patient relationship, according to Healing with Artificial Intelligence, but the author suggests the key is for patients and clinicians to keep a critical mindset about AI.

Caligiore warns that the role of physicians will evolve as AI becomes more integrated into healthcare, but the need for human interaction will remain ‘central to patient care’.

“While healthcare professionals must develop technical skills to use AI tools, they should also nurture and enhance qualities that AI cannot replicate – such as soft skills and emotional intelligence. These human traits are essential for introducing an emotional component into work environments,” he explains.

Source: Taylor & Francis Group

New Research Finds Surprises in ChatGPT’s Diagnosis of Medical Symptoms

The popular large language model performs better than expected but still has some knowledge gaps – and hallucinations

When people worry that they’re getting sick, they are increasingly turning to generative artificial intelligence like ChatGPT for a diagnosis. But how accurate are the answers that AI gives out?

Research recently published in the journal iScience puts ChatGPT and its large language models to the test, with a few surprising conclusions.

Ahmed Abdeen Hamed – a research fellow for the Thomas J. Watson College of Engineering and Applied Science’s School of Systems Science and Industrial Engineering at Binghamton University – led the study, with collaborators from AGH University of Krakow, Poland; Howard University; and the University of Vermont.

As part of Professor Luis M. Rocha’s Complex Adaptive Systems and Computational Intelligence Lab, Hamed developed a machine-learning algorithm last year that he calls xFakeSci. It can detect up to 94% of bogus scientific papers — nearly twice as successfully as more common data-mining techniques. He sees this new research as the next step to verify the biomedical generative capabilities of large language models.

“People talk to ChatGPT all the time these days, and they say: ‘I have these symptoms. Do I have cancer? Do I have cardiac arrest? Should I be getting treatment?’” Hamed said. “It can be a very dangerous business, so we wanted to see what would happen if we asked these questions, what sort of answers we got and how these answers could be verified from the biomedical literature.”

The researchers tested ChatGPT for disease terms and three types of associations: drug names, genetics and symptoms. The AI showed high accuracy in identifying disease terms (88–97%), drug names (90–91%) and genetic information (88–98%). Hamed admitted he thought it would be “at most 25% accuracy.”

“The exciting result was ChatGPT said cancer is a disease, hypertension is a disease, fever is a symptom, Remdesivir is a drug and BRCA is a gene related to breast cancer,” he said. “Incredible, absolutely incredible!”

Symptom identification, however, scored lower (49–61%), and the reason may be how the large language models are trained. Doctors and researchers use biomedical ontologies to define and organise terms and relationships for consistent data representation and knowledge-sharing, but users enter more informal descriptions.

“ChatGPT uses more of a friendly and social language, because it’s supposed to be communicating with average people. In medical literature, people use proper names,” Hamed said. “The LLM is apparently trying to simplify the definition of these symptoms, because there is a lot of traffic asking such questions, so it started to minimize the formalities of medical language to appeal to those users.”
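The gap the researchers describe – formal ontology vocabulary on one side, everyday phrasing on the other – is easy to illustrate in code. The sketch below maps informal symptom descriptions onto a tiny, invented vocabulary of formal terms using fuzzy string matching; a production pipeline would instead query a real biomedical ontology such as SNOMED CT, the Human Phenotype Ontology or UMLS.

```python
import difflib

# Tiny, invented vocabulary standing in for a biomedical ontology.
SYNONYMS = {            # informal phrasing -> formal term
    "fever": "pyrexia",
    "shortness of breath": "dyspnea",
    "headache": "cephalalgia",
    "throwing up": "emesis",
    "itching": "pruritus",
}

def normalize_symptom(phrase: str) -> str | None:
    """Map an informal symptom phrase to a formal term, if possible."""
    phrase = phrase.lower().strip()
    if phrase in SYNONYMS:                        # exact informal match
        return SYNONYMS[phrase]
    # Fall back to fuzzy matching against the known informal phrasings.
    close = difflib.get_close_matches(phrase, list(SYNONYMS), n=1, cutoff=0.8)
    return SYNONYMS[close[0]] if close else None

print(normalize_symptom("Fever"))         # pyrexia
print(normalize_symptom("throwing  up"))  # emesis (fuzzy match)
print(normalize_symptom("sore elbow"))    # None -- no mapping found
```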

One puzzling result stood out. The National Institutes of Health maintains a database called GenBank, which gives an accession number to every identified DNA sequence. It’s usually a combination of letters and numbers. For example, the designation for the Breast Cancer 1 gene (BRCA1) is NM_007294.4.

When asked for these numbers as part of the genetic information testing, ChatGPT just made them up – a phenomenon known as “hallucinating.” Hamed sees this as a major failing amid so many other positive results.
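One practical defence against this failure mode is to treat any accession number an LLM returns as unverified until it resolves against GenBank itself. Below is a minimal sketch using NCBI’s public E-utilities service; the endpoint is real, but the error handling is deliberately simplified, and heavy users are expected to register for an API key.

```python
import urllib.error
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def accession_resolves(accession: str) -> bool:
    """Return True if GenBank can fetch a record for this accession."""
    url = f"{EUTILS}?db=nuccore&id={accession}&rettype=acc&retmode=text"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode().strip()
        # On success, efetch echoes back the resolved accession.
        return body.startswith(accession.split(".")[0])
    except urllib.error.HTTPError:
        return False  # unknown accessions typically come back as HTTP errors

# NM_007294.4 is the BRCA1 accession cited above.
print(accession_resolves("NM_007294.4"))   # True
print(accession_resolves("NM_0000000.9"))  # False: plausible-looking fake
```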

“Maybe there is an opportunity here that we can start introducing these biomedical ontologies to the LLMs to provide much higher accuracy, get rid of all the hallucinations and make these tools into something amazing,” he said.

Hamed’s interest in LLMs began in 2023, when he discovered ChatGPT and heard about the issues regarding fact-checking. His goal is to expose the flaws so data scientists can adjust the models as needed and make them better.

“If I am analysing knowledge, I want to make sure that I remove anything that may seem fishy before I build my theories and make something that is not accurate,” he said.

Source: Binghamton University

Improving Prediction of Worsening Knee Osteoarthritis with an AI-assisted Model

New model that combines MRI, biochemical, and clinical information shows potential to enhance care

Illustration highlighting the integration of MRI radiomics and biochemical biomarkers for knee osteoarthritis progression prediction. Created with BioRender. Image credit: Wang T, et al., 2025, PLOS Medicine, CC-BY 4.0

An artificial intelligence (AI)-assisted model that combines a patient’s MRI, biochemical, and clinical information shows preliminary promise in improving predictions of whether their knee osteoarthritis may soon worsen. Ting Wang of Chongqing Medical University, China, and colleagues present this model August 21st in the open-access journal PLOS Medicine.

In knee osteoarthritis, cartilage in the knee joint gradually wears away, causing pain and stiffness. It affects an estimated 303.1 million people worldwide and can lead to the need for total knee replacement. Being able to better predict how a person’s knee osteoarthritis may worsen in the near future could help inform more timely treatment. Prior research suggests that computational models combining multiple types of data – including a patient’s MRI results, clinical assessments, and blood and urine biochemical tests – could enhance such predictions.

The integration of all three types of information in a single predictive model has not been widely reported. To address that gap, Wang and colleagues utilized data from the Foundation of the National Institutes of Health Osteoarthritis Biomarkers Consortium on 594 people with knee osteoarthritis, including their biochemical test results, clinical data, and a total of 1,753 knee MRIs captured over a 2-year timespan.

With the help of AI tools, the researchers used half of the data to develop a predictive model combining all three data types. Then, they used the other half of the data to test the model, which they named the Load-Bearing Tissue Radiomic plus Biochemical biomarker and Clinical variable Model (LBTRBC-M).

In the tests, the LBTRBC-M showed good accuracy in using a patient’s MRI, biochemical and clinical data to predict whether, within the next two years, they would experience worsening pain alone, worsening pain alongside joint space narrowing in the knee (an indicator of structural worsening), joint space narrowing alone, or no worsening at all.
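The paper’s exact model is not reproduced here, but the general pattern – concatenating radiomic, biochemical and clinical feature blocks, developing on half the cohort and testing on the other half – can be sketched with scikit-learn. Everything below runs on synthetic stand-in data; the feature counts and the classifier are placeholders, not the study’s.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 594  # cohort size reported in the study; the features below are synthetic

# Three feature blocks, mirroring the model's three data types.
X_radiomic = rng.normal(size=(n, 32))  # MRI radiomic features (placeholder)
X_biochem = rng.normal(size=(n, 8))    # blood/urine biomarkers (placeholder)
X_clinical = rng.normal(size=(n, 6))   # clinical variables (placeholder)
X = np.hstack([X_radiomic, X_biochem, X_clinical])

# Four outcome classes: pain worsening only, pain plus joint space narrowing,
# narrowing only, or no worsening.
y = rng.integers(0, 4, size=n)

# The study developed on half the data and tested on the other half.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```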

The researchers also had seven resident physicians use the model to assist their own predictions of worsening knee osteoarthritis, finding that it improved their accuracy from 46.9 to 65.4 percent.

These findings suggest that a model like LBTRBC-M could help enhance knee osteoarthritis care. However, further model refinement and validation in additional groups of patients are needed.

The authors add, “Our study shows that combining deep learning with longitudinal MRI radiomics and biochemical biomarkers significantly improves the prediction of knee osteoarthritis progression—potentially enabling earlier, more personalized intervention.”

The authors state, “This work is the result of years of collaboration across multiple disciplines, and we were especially excited to see how non-invasive imaging biomarkers could be leveraged to support individualized patient care.”

Co-author Prof. Changhai Ding notes, “This study marks a step forward in using artificial intelligence to extract meaningful clinical signals from complex datasets in musculoskeletal health.”

Provided by PLOS

New AI-based Test Detects Early Signs of Osteoporosis from X-ray Images


Investigators have developed an artificial intelligence-assisted diagnostic system that can estimate bone mineral density in both the lumbar spine and the femur of the upper leg, based on X-ray images. The advance is described in a study published in the Journal of Orthopaedic Research.

A total of 1454 X-ray images were analysed using the scientists’ system. For patients with bone density loss, or osteopenia, sensitivity was 86.4% for the lumbar spine and 84.1% for the femur; the respective specificities were 80.4% and 76.3%. (Sensitivity reflects the test’s ability to correctly identify people with osteopenia, whereas specificity reflects its ability to correctly identify those without it.) The test also had high sensitivity and specificity for categorising patients with and without osteoporosis.
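For readers less familiar with the two metrics, both fall straight out of a 2×2 confusion matrix, as the short illustration below shows; the counts are invented for the example, only the formulas are standard.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Invented counts for illustration -- not the study's data.
sens, spec = sensitivity_specificity(tp=86, fn=14, tn=80, fp=20)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 86.0%, 80.0%
```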

“Bone mineral density measurement is essential for screening and diagnosing osteoporosis, but limited access to diagnostic equipment means that millions of people worldwide may remain undiagnosed,” said corresponding author Toru Moro, MD, PhD, of the University of Tokyo. “This AI system has the potential to transform routine clinical X-rays into a powerful tool for opportunistic screening, enabling earlier, broader, and more efficient detection of osteoporosis.”

Source: Wiley

Scientists Argue for More FDA Oversight of Healthcare AI Tools 

New paper critically examines the US Food and Drug Administration’s regulatory framework for artificial intelligence-powered healthcare products, highlighting gaps in safety evaluations, post-market surveillance, and ethical considerations.

An agile, transparent, and ethics-driven oversight system is needed for the U.S. Food and Drug Administration (FDA) to balance innovation with patient safety when it comes to artificial intelligence-driven medical technologies. That is the takeaway from a new report issued to the FDA, published this week in the open-access journal PLOS Medicine by Leo Celi of the Massachusetts Institute of Technology, and colleagues.

Artificial intelligence is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they’ve been approved, meaning their behaviour can shift in unpredictable ways once they’re in use.

In the new paper, Celi and his colleagues argue that the FDA’s current system is not set up to keep tabs on these post-approval changes. Their analysis calls for stronger rules around transparency and bias, especially to protect vulnerable populations. If an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others. The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.

“This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centred, risk-aware, and continuously adaptive regulatory approach – one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities,” the authors say.

Provided by PLOS

The Evolution of AI in Patient Consent: A Data-Driven Future

Henry Adams, Country Manager, InterSystems South Africa

One area undergoing significant evolution in the healthcare industry is the process of obtaining patient consent. It is a controversial but essential topic, and the process must evolve if we are to bring patient care into the 21st century.

Traditionally, patient consent has involved detailed discussions between healthcare providers and patients, ensuring that individuals are fully informed before agreeing to medical procedures or participation in research. However, as artificial intelligence (AI) becomes more prevalent, the mechanisms and ethics surrounding patient consent are being re-examined.

The current state of patient consent

Informed consent is a cornerstone of ethical medical practice, granting patients autonomy over their healthcare decisions. This process typically requires clear communication about the nature of the procedure, potential risks and benefits, and any alternative options.

In the context of AI, particularly with the use of big data and machine learning algorithms, the consent process becomes more complex. Patients must understand not only how their data will be used but also the implications of AI-driven analyses, which may not be entirely transparent.

The rise of dynamic consent models

To address these complexities, the concept of dynamic consent has emerged. Dynamic consent utilises digital platforms to facilitate ongoing, interactive communication between patients and healthcare providers.

This approach allows patients to modify their consent preferences in real-time, reflecting changes in their health status or personal views. Such models aim to enhance patient engagement and trust, providing a more nuanced and flexible framework for consent in the digital age.

AI has the potential to revolutionise the consent process by personalising and simplifying information delivery. Intelligent systems can tailor consent documents to individual patients, highlighting the most pertinent information and using language that aligns with the patient’s comprehension level. In addition, AI-powered chatbots can engage in real-time dialogues, answering patient questions and clarifying uncertainties, enhancing understanding and facilitating informed decision-making.
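As one deliberately simplified illustration of such tailoring, a consent system might hold several pre-approved phrasings of the same clause and select the one whose reading level best matches a target for the patient. The sketch below scores candidates with a crude Flesch reading-ease estimate; the clauses, the target and the syllable counter are all invented for the example.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Crude Flesch score (higher = easier), with a rough syllable counter."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]", text)))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

CLAUSES = {  # pre-approved phrasings of the same clause (invented)
    "formal": "Anonymised clinical data may be utilised for retrospective "
              "observational research subject to institutional approval.",
    "plain": "We may use your health records, with your name removed, "
             "for approved medical research.",
}

target = 50.0  # plain-language target for this (hypothetical) patient
best = min(CLAUSES, key=lambda k: abs(flesch_reading_ease(CLAUSES[k]) - target))
print(best)  # "plain"
```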

Data privacy, ethical and security considerations

The integration of AI into patient consent processes demands increased attention to data privacy and security. Because AI systems require access to vast amounts of personal health data, robust additional safeguards must be in place to protect against unauthorised access and breaches. Ensuring that AI algorithms operate transparently, and that patients are aware of how their data is being used, is critical to maintaining trust in the healthcare system – and in AI in particular.

While AI can augment the consent process, the ethical implications of its use must be carefully considered. The potential for AI to inadvertently introduce biases or operate without full transparency poses challenges to informed consent. Therefore, human oversight remains indispensable.

Healthcare professionals must work alongside AI systems, the “human in the loop”, to ensure that the technology serves as a tool to enhance, rather than replace, the human touch in patient interactions.

The next 5-10 years

Over the next decade, AI will become increasingly integrated into patient consent processes. Experts predict advancements in natural language processing and machine learning will lead to more sophisticated and user-friendly consent platforms. However, the centrality of human judgment in medical decision-making is unlikely to diminish. AI can provide valuable support, but the nuanced understanding and empathy of healthcare professionals will remain vital.

So, as we take all of this into account, the evolution of AI in patient consent processes offers promising avenues for enhancing patient autonomy and streamlining healthcare operations. By leveraging AI responsibly, healthcare institutions can create more personalised, efficient, and secure consent experiences.

Nonetheless, it is imperative to balance technological innovation with ethical considerations, ensuring that human judgment continues to play a pivotal role in medical decision-making. As we navigate this new world, a collaborative approach that integrates AI capabilities with human expertise will be essential in shaping the future of patient consent. And for healthcare in South Africa, this is going to have to start with education.

Is AI in Medicine Playing Fair?


As artificial intelligence (AI) rapidly integrates into health care, a new study by researchers at the Icahn School of Medicine at Mount Sinai reveals that generative AI models may recommend different treatments for the same medical condition based solely on a patient’s socioeconomic and demographic background.

Their findings, which are detailed in the April 7, 2025 online issue of Nature Medicine, highlight the importance of early detection and intervention to ensure that AI-driven care is safe, effective, and appropriate for all.

As part of their investigation, the researchers stress-tested nine large language models (LLMs) on 1,000 emergency department cases, each replicated with 32 different patient backgrounds, generating more than 1.7 million AI-generated medical recommendations. Despite identical clinical details, the AI models occasionally altered their decisions based on a patient’s socioeconomic and demographic profile, affecting key areas such as triage priority, diagnostic testing, treatment approach, and mental health evaluation. 
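The stress-test design – identical clinical details crossed with systematically varied patient backgrounds – is straightforward to mock up. The sketch below builds demographic variants of a single vignette and tallies the model’s triage answers; `query_model` is a hypothetical stand-in for a real LLM call, and the descriptors are illustrative, not the study’s 32 profiles.

```python
from collections import Counter
from itertools import product

VIGNETTE = ("{demo} patient presents with acute chest pain radiating to the "
            "left arm, diaphoresis and shortness of breath. Triage priority?")

# A handful of background descriptors; the study crossed 1,000 cases with 32.
DEMOGRAPHICS = [f"{income}, {group}" for income, group in product(
    ["High-income", "Low-income"], ["white", "Black", "Hispanic", "Asian"])]

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return "emergent"  # an unbiased model ignores the demographics entirely

answers = Counter(query_model(VIGNETTE.format(demo=d)) for d in DEMOGRAPHICS)

# More than one distinct answer across variants would flag demographic
# sensitivity, since the clinical details never changed.
print(answers)  # Counter({'emergent': 8}) -> consistent in this stub
```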

“Our research provides a framework for AI assurance, helping developers and health care institutions design fair and reliable AI tools,” says co-senior author Eyal Klang, MD, Chief of Generative-AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. “By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design, and oversight. Our rigorous validation process tests AI outputs against clinical standards, incorporating expert feedback to refine performance. This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better health care for all.” 

One of the study’s most striking findings was the tendency of some AI models to escalate care recommendations—particularly for mental health evaluations—based on patient demographics rather than medical necessity. In addition, high-income patients were more often recommended advanced diagnostic tests such as CT scans or MRI, while low-income patients were more frequently advised to undergo no further testing. The scale of these inconsistencies underscores the need for stronger oversight, say the researchers. 

While the study provides critical insights, researchers caution that it represents only a snapshot of AI behavior.  Future research will continue to include assurance testing to evaluate how AI models perform in real-world clinical settings and whether different prompting techniques can reduce bias. The team also aims to work with other health care institutions to refine AI tools, ensuring they uphold the highest ethical standards and treat all patients fairly. 

“I am delighted to partner with Mount Sinai on this critical research to ensure AI-driven medicine benefits patients across the globe,” says physician-scientist and first author of the study, Mahmud Omar, MD, who consults with the research team. “As AI becomes more integrated into clinical care, it’s essential to thoroughly evaluate its safety, reliability, and fairness. By identifying where these models may introduce bias, we can work to refine their design, strengthen oversight, and build systems that ensure patients remain at the heart of safe, effective care. This collaboration is an important step toward establishing global best practices for AI assurance in health care.” 

“AI has the power to revolutionize health care, but only if it’s developed and used responsibly,” says co-senior author Girish N. Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, Director of the Hasso Plattner Institute for Digital Health, and the Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai. “Through collaboration and rigorous validation, we are refining AI tools to uphold the highest ethical standards and ensure appropriate, patient-centered care. By implementing robust assurance protocols, we not only advance technology but also build the trust essential for transformative health care. With proper testing and safeguards, we can ensure these technologies improve care for everyone—not just certain groups.”

Next, the investigators plan to expand their work by simulating multistep clinical conversations and piloting AI models in hospital settings to measure their real-world impact. They hope their findings will guide the development of policies and best practices for AI assurance in health care, fostering trust in these powerful new tools. 

Source: The Mount Sinai Hospital / Mount Sinai School of Medicine

First of its Kind Collaborative Report Unveils the Transformative Role of Artificial Intelligence and Data Science in Advancing Global Health in Africa

April 2nd, 2025, Nairobi, Kenya – Africa stands at the forefront of a revolutionary shift in global health driven by artificial intelligence (AI) and data science, according to a report released today by the Science for Africa Foundation (SFA Foundation) together with African institutions and research councils. The report is the first of its kind to comprehensively examine national-level perspectives across Africa on AI and data science for global health, and its landscape analysis offers an unprecedented view of how AI governance in Africa can be improved to reduce risk and stop the perpetuation of inequity.

Titled “Governance of Artificial Intelligence for Global Health in Africa”, the report is produced through the SFA Foundation’s Science Policy Engagement with Africa’s Research (SPEAR) programme as the culmination of a year-long effort involving convenings across Africa’s five regions, policy analysis and extensive surveys to identify policy gaps and opportunities in AI and data science for global health. Grounded in consultations across 43 African countries, the report incorporates insights from over 300 stakeholders, ensuring a comprehensive and inclusive approach to its findings.

“The global AI governance framework remains ill-suited to Africa’s unique needs and priorities,” said Prof. Tom Kariuki, Chief Executive Officer of the SFA Foundation. “Our report on AI in global health and data sciences champions a shift towards frameworks that reflect Africa’s context, ensuring ethical, equitable, and impactful applications of AI not only for our continent’s health challenges, but also to advance global health.”

Key findings and opportunities

The report identifies key trends, gaps, and opportunities in AI and data science for health across Africa:

  • Increasing national investments: Countries including Mauritius, Nigeria, Malawi, Ethiopia, Ghana, Rwanda, Senegal, and Tunisia have launched national AI programmes, while at least 39 African countries are actively pursuing AI R&D. Initiatives such as Rwanda’s Seed Investment Fund and Nigeria’s National Centre for AI and Robotics illustrate promising investments in AI startups.
  • Need for health-specific AI governance: Despite growing interest, there is a critical gap in governance frameworks tailored to health AI across Africa. While health is prioritised in AI discussions, specific frameworks for responsible deployment in health are still underdeveloped.
  • Inclusive AI policy development: Many existing AI policies lack gender and equity considerations. Closing these gaps is essential to prevent inequalities in access to AI advancements and health outcomes.

“Incorporating AI into healthcare is not just about technology—it is about enhancing our policy frameworks to ensure these advancements lead to better health outcomes for all Africans,” added Dr Uzma Alam, Programme Lead of the Science Policy Engagement with Africa’s Research (SPEAR) programme.

  • There are existing policy frameworks on which to build and/or consolidate the governance of responsible AI and data science: At least 35 African countries have national STI and ICT frameworks, as well as health research and innovation policy frameworks, that contain policies applicable to the development and deployment of AI and data science.
  • There is a surge in African research on health AI and data science (big data), raising the need for equitable North-South R&D partnerships.

Recommendations and way forward

The report is expected to act as a catalyst for integrating AI into health strategies across the continent, marking a significant step forward in Africa’s journey toward leadership in global health innovation by calling for:

  • Adaptive and Inclusive AI Governance: The report calls for the integration of diverse perspectives spanning gender, urban-rural dynamics, and indigenous knowledge into AI health governance frameworks. It highlights the need for adaptive policies that balance innovation with equitable access, while leveraging regional collaboration and supporting the informal sector.
  • Innovative Funding and African Representation: Recognising the potential of local knowledge and practices, the report advocates for creative funding models to bolster AI research and development. It emphasises connecting the informal sector to markets and infrastructure to encourage grassroots innovation.
  • The Reinforcement of Science Diplomacy: To position Africa as a key player in global AI governance, the report recommends investing in programmes that align AI technologies with Africa’s health priorities. It also stresses the importance of amplifying Africa’s voice in shaping international standards and agreements through robust science-policy collaboration.
  • The Bridging of the Gendered Digital Divide: To bridge the gendered digital divide in Africa, targeted initiatives are needed to address regional disparities and ensure gender inclusivity in the AI ecosystem. It is essential to focus on programmes that build capacity and improve access to resources.

“The report clearly outlines pathways for leveraging AI to bridge gaps and overcome current capacity constraints, while strengthening Africa’s role as a leader in shaping global health policy,” said Dr Evelyn Gitau, Chief Scientific Officer at the SFA Foundation. “This initiative showcases Africa’s potential to lead, innovate, and influence the global health ecosystem through AI.”

“We envision a world where AI advances health outcomes equitably, benefiting communities around the world. The Science for Africa Foundation’s report brings this vision to life by providing clarity on policy frameworks of AI and data science in global health. This empowers African voices to shape AI policy – not only directing healthcare innovation but setting a precedent for inclusive AI governance across sectors.” – Vilas Dhar, President of the Patrick J. McGovern Foundation.

Access the Report here: https://bit.ly/4jhzMFs

The Future of Healthcare Interoperability: Building a Stronger Foundation for Data Integration

Henry Adams, Country Manager South Africa, InterSystems

Healthcare data is one of the most complex and valuable assets in the modern world. Yet, despite the wealth of digital health information being generated daily, many organisations still struggle to access, integrate, and use it effectively. The promise of data-driven healthcare – where patient records, research insights, and operational efficiencies seamlessly come together – remains just that: a promise. The challenge lies in interoperability.

For years, healthcare institutions have grappled with fragmented systems, disparate data formats, and evolving regulatory requirements. The question is no longer whether to integrate but how best to do it. Should healthcare providers build, rent, or buy their data integration solutions? Each approach has advantages and trade-offs, but long-term success depends on choosing a solution that balances control, flexibility, and cost-effectiveness.

Why Interoperability Remains a Challenge

Despite significant advancements in standardisation, interoperability remains a persistent challenge in healthcare. A common saying in the industry – “If you’ve seen one HL7 interface, you’ve seen one HL7 interface” – illustrates the lack of uniformity across systems. Even FHIR, the latest interoperability standard, comes with many extensions and custom implementations, leading to inconsistency.
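The point is easy to see in a concrete resource: the same patient fact can legitimately travel in a standard FHIR field or in a custom extension, and an engine written against one shape silently misses the other. The two Patient resources below are hand-written illustrations (the extension URL is invented), not taken from any vendor.

```python
# Two valid FHIR Patient resources carrying the same birth date: one in the
# standard field, one tucked into a custom extension (URL invented here).
patient_a = {"resourceType": "Patient", "id": "example",
             "birthDate": "1980-04-01"}

patient_b = {"resourceType": "Patient", "id": "example",
             "extension": [{
                 "url": "https://example.org/fhir/StructureDefinition/dob",
                 "valueDate": "1980-04-01"}]}

# Code written against patient_a's shape misses the date in patient_b --
# hence "if you've seen one HL7 interface, you've seen one HL7 interface".
for p in (patient_a, patient_b):
    print(p.get("birthDate", "birthDate not found"))
```

The same caveat applies to HL7 v2 feeds, which vary just as widely in segment usage and custom Z-segments; this is why integration engines invest so heavily in per-interface mapping.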


Adding to this complexity, healthcare data must meet strict security, privacy, and compliance requirements. The need for real-time data exchange, analytics, and artificial intelligence (AI) further increases the pressure on organisations to implement robust, scalable, and future-proof integration solutions.

The Build, Rent, or Buy Dilemma

When organisations decide how to approach interoperability, they typically weigh three options:

  • Building a solution from scratch offers full control but comes with high development costs, lengthy implementation timelines, and ongoing maintenance challenges. Ensuring compliance with HL7, FHIR, and other regulatory standards requires significant resources and expertise.
  • Renting an integration solution provides quick deployment at a lower initial cost but can lead to vendor lock-in, limited flexibility, and escalating costs as data volumes grow. Additionally, outsourced solutions may not prioritise healthcare-specific requirements, creating potential risks for compliance, security, and scalability.
  • Buying a purpose-built integration platform strikes a balance between control and flexibility. Solutions like InterSystems Health Connect and InterSystems IRIS for Health offer pre-built interoperability features while allowing organisations to customise and scale their integration as needed.

The Smart Choice: Owning Your Integration Future

To remain agile in an evolving healthcare landscape, organisations must consider the long-term impact of their integration choices. A well-designed interoperability strategy should allow for:

  • Customisation without complexity – Organisations should be able to tailor their integration capabilities without having to build from the ground up. This ensures they can adapt to new regulatory requirements and technological advancements.
  • Scalability without skyrocketing costs – A robust data platform should enable growth without the exponential cost increases often associated with rented solutions.
  • Security and compliance by design – Healthcare providers cannot afford to compromise on data privacy and security. A trusted interoperability partner should offer built-in compliance with international standards.

Some healthcare providers opt for platforms that combine pre-built interoperability with the flexibility to scale and customise as needed. For example, solutions designed to support seamless integration with electronic health records (EHRs), medical devices, and other healthcare systems can offer both operational efficiency and advanced analytics capabilities. The key is selecting an approach that aligns with both current and future needs, ensuring data remains accessible, secure, and actionable.

Preparing for the Future of Healthcare IT

As healthcare systems become more digital, the need for efficient, secure, and adaptable interoperability solutions will only intensify. The right integration strategy can determine whether an organisation thrives or struggles with inefficiencies, rising costs, and regulatory risks.

By choosing an approach that prioritises flexibility, control, and future-readiness, healthcare providers can unlock the full potential of their data – improving patient outcomes, driving operational efficiencies, and enabling innovation at scale.

The question isn’t just whether to build, rent, or buy – but how to create a foundation that ensures long-term success in healthcare interoperability.