Category: IT in Healthcare

Human Instruction with AI Guidance Gives the Best Results in Neurosurgical Training

Study has implications beyond medical education, suggesting other fields could benefit from AI-enhanced training

Artificial intelligence (AI) is becoming a powerful new tool in training and education, including in the field of neurosurgery. Yet a new study suggests that AI tutoring provides better results when paired with human instruction.

Researchers at the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) of McGill University are studying how AI and virtual reality (VR) can improve the training and performance of brain surgeons. They simulate brain surgeries using VR, monitor students’ performance using AI and provide continuous verbal feedback on how students can improve performance and prevent errors. Previous research showed that an AI-powered intelligent tutoring system developed at the Centre outperformed expert human teachers – though those instructors were not given AI data on trainee performance.

In their most recent study, published in JAMA Surgery, the researchers recruited 87 medical students from four Quebec medical schools and divided them into three groups: one trained with AI-only verbal feedback, one with expert instructor feedback, and one with expert feedback informed by real-time AI performance data. The team recorded the students’ performance, including how well and how quickly their surgical skills improved while undergoing the different types of training.

They found that students receiving AI-augmented, personalised feedback from a human instructor outperformed both other groups in surgical performance and skill transfer. This group also demonstrated significantly better risk management for bleeding and tissue injury – two critical measures of surgical expertise. The study suggests that while intelligent tutoring systems can provide standardised, data-driven assessments, the integration of human expertise enhances engagement and ensures that feedback is contextualised and adaptive.

“Our findings underscore the importance of human input in AI-driven surgical education,” said lead study author Bianca Giglio. “When expert instructors used AI performance data to deliver tailored, real-time feedback, trainees learned faster and transferred their skills more effectively.”

While this study was specific to neurosurgical training, its findings could carry over to other professions where students must acquire highly technical and complex skills in high-pressure environments.

“AI is not replacing educators – it’s empowering them,” added senior author Dr Rolando Del Maestro, a neurosurgeon and current Director of the Centre. “By merging AI’s analytical power with the critical guidance of experienced instructors, we are moving closer to creating the ‘Intelligent Operating Room’ of the future capable of assessing and training learners while minimising errors during human surgical procedures.”

Source: McGill University

Doctors’ Human Touch Still Needed in the AI Healthcare Revolution

AI-based medicine will revolutionise care including for Alzheimer’s and diabetes, predicts a technology expert, but it must be accessible to all patients

AI image created with Gencraft

Healing with Artificial Intelligence, written by technology expert Daniele Caligiore, uses the latest scientific research to highlight key AI-assisted innovations such as diagnostic imaging and surgical robots.

From exoskeletons that help spinal injury patients walk to algorithms that can predict the onset of dementia years in advance, Caligiore explores what he describes as a ‘revolution’ that will change healthcare forever.

Economically, the market for AI in healthcare is experiencing rapid growth, with forecasts predicting an increase in value from around USD 11 billion in 2021 to nearly USD 188 billion by 2030, reflecting an annual growth rate of 37%. AI is already being used in some countries, for example to search through genetic data for disease markers, or to assist with scheduling and other administrative tasks – and this trend is set to continue.
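
That 37% figure is consistent with the two endpoints quoted above; a quick back-of-envelope check in Python (the numbers are the forecast’s, the calculation is only illustrative):

    # Back-of-envelope check of the forecast quoted above:
    # USD 11 billion (2021) growing to USD 188 billion (2030) spans 9 years.
    start, end, years = 11e9, 188e9, 2030 - 2021

    cagr = (end / start) ** (1 / years) - 1
    print(f"implied CAGR ~ {cagr:.0%}")  # ~37%, matching the cited growth rate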

However, the author tempers his predictions of progress by warning that these technologies may widen existing inequality. Caligiore suggests that AI-based medicine must be available to all people, regardless of where they live or how much they earn, and that people from low-income nations must not be excluded from the cutting-edge care that wealthier nations can access.

Other challenges posed by the advancement of AI in healthcare include deciding who takes responsibility for treatment decisions, especially when a procedure goes wrong. This is a particular challenge given widespread concerns around explainability: many advanced AI systems operate as black boxes, making decisions through complex processes that even their creators cannot fully understand or explain.

Caligiore says AI should support doctors and patients, not replace doctors, who, he writes, have a ‘unique ability to offer empathy, understanding, and emotional support’.

“AI should be viewed as a tool, not a colleague, and it should always be seen as a support, never a replacement,” writes Caligiore.

“It is important to find the right balance in using AI tools, both for doctors and patients. Patients can use AI to learn more about their health, such as what diseases may be associated with their symptoms or what lifestyle changes may help prevent illness. However, this does not mean AI should replace doctors.”

Despite his warnings, Caligiore is largely optimistic about the impact of AI in healthcare: “Like a microscope detecting damaged cells or a map highlighting brain activity during specific tasks, AI can uncover valuable insights that might go unnoticed, aiding in more accurate and personalized diagnoses and treatments,” he says.

In any case, Caligiore predicts the healthcare landscape will look ‘dramatically different’ in a few years, with technology acting as a ‘magnifying glass for medicine’ to enable doctors to observe the human body with greater precision and detail.

Examples of where AI will make a profound impact in healthcare include regenerative medicine, where gene and stem cell therapies repair damaged cells and organs. Spinal cord injury patients are among those who could benefit.

AI may also enable personalised therapies, suggesting treatments tailored to specific individuals, often based on their unique genetic profile. Studies are under way into targeting different tremor types in Parkinson’s disease and breast cancer subtypes.

The convergence of regenerative medicine, genetically modified organisms (GMOs) and AI is the next frontier in medicine, Caligiore suggests. GMOs – living organisms whose genetic material has been altered through genetic engineering techniques – have already paved the way for personalised gene therapies.

Blending real and virtual worlds may also prove useful to healthcare. Examples include the ‘metaverse’ – group therapy in which patients participate through avatars – and ‘digital twins’ – AI simulations of a patient’s body and brain that let doctors identify the underlying causes of disease and simulate the effects of different therapies for a specific patient, helping them make more informed decisions.

These advances and others will reshape the doctor-patient relationship, according to Healing with Artificial Intelligence, but the author suggests the key is for patients and clinicians to keep a critical mindset about AI.

Caligiore warns that the role of physicians will evolve as AI becomes more integrated into healthcare, but the need for human interaction will remain ‘central to patient care’.

“While healthcare professionals must develop technical skills to use AI tools, they should also nurture and enhance qualities that AI cannot replicate – such as soft skills and emotional intelligence. These human traits are essential for introducing an emotional component into work environments,” he explains.

Source: Taylor & Francis Group

New Research Finds Surprises in ChatGPT’s Diagnosis of Medical Symptoms

The popular large language model performs better than expected but still has some knowledge gaps – and hallucinations

When people worry that they’re getting sick, they are increasingly turning to generative artificial intelligence like ChatGPT for a diagnosis. But how accurate are the answers that AI gives out?

Research recently published in the journal iScience puts ChatGPT and its large language models to the test, with a few surprising conclusions.

Ahmed Abdeen Hamed – a research fellow for the Thomas J. Watson College of Engineering and Applied Science’s School of Systems Science and Industrial Engineering at Binghamton University – led the study, with collaborators from AGH University of Krakow, Poland; Howard University; and the University of Vermont.

As part of Professor Luis M. Rocha’s Complex Adaptive Systems and Computational Intelligence Lab, Hamed developed a machine-learning algorithm last year that he calls xFakeSci. It can detect up to 94% of bogus scientific papers — nearly twice as successfully as more common data-mining techniques. He sees this new research as the next step to verify the biomedical generative capabilities of large language models.

“People talk to ChatGPT all the time these days, and they say: ‘I have these symptoms. Do I have cancer? Do I have cardiac arrest? Should I be getting treatment?’” Hamed said. “It can be a very dangerous business, so we wanted to see what would happen if we asked these questions, what sort of answers we got and how these answers could be verified from the biomedical literature.”

The researchers tested ChatGPT for disease terms and three types of associations: drug names, genetics and symptoms. The AI showed high accuracy in identifying disease terms (88–97%), drug names (90–91%) and genetic information (88–98%). Hamed admitted he thought it would be “at most 25% accuracy.”

“The exciting result was ChatGPT said cancer is a disease, hypertension is a disease, fever is a symptom, Remdesivir is a drug and BRCA is a gene related to breast cancer,” he said. “Incredible, absolutely incredible!”

Symptom identification, however, scored lower (49–61%), and the reason may be how the large language models are trained. Doctors and researchers use biomedical ontologies to define and organise terms and relationships for consistent data representation and knowledge-sharing, but users enter more informal descriptions.

“ChatGPT uses more of a friendly and social language, because it’s supposed to be communicating with average people. In medical literature, people use proper names,” Hamed said. “The LLM is apparently trying to simplify the definition of these symptoms, because there is a lot of traffic asking such questions, so it started to minimize the formalities of medical language to appeal to those users.”

One puzzling result stood out. The National Institutes of Health maintains a database called GenBank, which gives an accession number to every identified DNA sequence. It’s usually a combination of letters and numbers. For example, the designation for the Breast Cancer 1 gene (BRCA1) is NM_007294.4.

When asked for these numbers as part of the genetic information testing, ChatGPT just made them up – a phenomenon known as “hallucinating.” Hamed sees this as a major failing amid so many other positive results.
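
Accession numbers are straightforward to verify against the primary source rather than taking a model’s word for them. A minimal sketch, assuming network access and the Python `requests` package – the NCBI E-utilities endpoint is real, but the helper function and simplified error handling are illustrative:

    # Minimal sketch: check a GenBank accession against NCBI E-utilities
    # instead of trusting an LLM-generated one. Production code would add
    # error handling and respect NCBI's rate limits.
    import requests

    EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

    def accession_resolves(accession: str) -> bool:
        """True if GenBank recognises the accession (e.g. 'NM_007294.4')."""
        resp = requests.get(
            EFETCH,
            params={"db": "nuccore", "id": accession,
                    "rettype": "acc", "retmode": "text"},
            timeout=30,
        )
        # A valid ID echoes back its accession.version; an invalid one
        # returns an HTTP error or an error document instead.
        return resp.ok and resp.text.strip().startswith(accession.split(".")[0])

    print(accession_resolves("NM_007294.4"))  # BRCA1 mRNA -> expected True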

“Maybe there is an opportunity here that we can start introducing these biomedical ontologies to the LLMs to provide much higher accuracy, get rid of all the hallucinations and make these tools into something amazing,” he said.

Hamed’s interest in LLMs began in 2023, when he discovered ChatGPT and heard about the issues regarding fact-checking. His goal is to expose the flaws so data scientists can adjust the models as needed and make them better.

“If I am analysing knowledge, I want to make sure that I remove anything that may seem fishy before I build my theories and make something that is not accurate,” he said.

Source: Binghamton University

New AI-based Test Detects Early Signs of Osteoporosis from X-ray Images

Photo by Cottonbro on Pexels

Investigators have developed an artificial intelligence-assisted diagnostic system that can estimate bone mineral density in both the lumbar spine and the femur of the upper leg, based on X-ray images. The advance is described in a study published in the Journal of Orthopaedic Research.

A total of 1454 X-ray images were analysed using the scientists’ system. For patients with bone density loss, or osteopenia, sensitivity was 86.4% for the lumbar spine and 84.1% for the femur; the respective specificities were 80.4% and 76.3%. (Sensitivity reflects the test’s ability to correctly identify people with osteopenia, whereas specificity reflects its ability to correctly identify those without it.) The test also had high sensitivity and specificity for categorising patients with and without osteoporosis.
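
A worked example of those two definitions, using made-up confusion-matrix counts chosen only so the arithmetic reproduces the lumbar-spine figures (these are not the study’s underlying data):

    # Hypothetical counts, chosen to reproduce the lumbar-spine figures above.
    tp, fn = 127, 20   # osteopenia correctly flagged / missed
    tn, fp = 90, 22    # no osteopenia correctly cleared / wrongly flagged

    sensitivity = tp / (tp + fn)   # share of true cases the test catches
    specificity = tn / (tn + fp)   # share of healthy patients it clears

    print(f"sensitivity = {sensitivity:.1%}")  # 86.4%
    print(f"specificity = {specificity:.1%}")  # 80.4%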

“Bone mineral density measurement is essential for screening and diagnosing osteoporosis, but limited access to diagnostic equipment means that millions of people worldwide may remain undiagnosed,” said corresponding author Toru Moro, MD, PhD, of the University of Tokyo. “This AI system has the potential to transform routine clinical X-rays into a powerful tool for opportunistic screening, enabling earlier, broader, and more efficient detection of osteoporosis.”

Source: Wiley

Scientists Argue for More FDA Oversight of Healthcare AI Tools 

New paper critically examines the US Food and Drug Administration’s regulatory framework for artificial intelligence-powered healthcare products, highlighting gaps in safety evaluations, post-market surveillance, and ethical considerations.

An agile, transparent, and ethics-driven oversight system is needed for the U.S. Food and Drug Administration (FDA) to balance innovation with patient safety when it comes to artificial intelligence-driven medical technologies. That is the takeaway from a new report issued to the FDA, published this week in the open-access journal PLOS Medicine by Leo Celi of the Massachusetts Institute of Technology and colleagues.

Artificial intelligence is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they’ve been approved, meaning their behaviour can shift in unpredictable ways once they’re in use.

In the new paper, Celi and his colleagues argue that the FDA’s current system is not set up to keep tabs on these post-approval changes. Their analysis calls for stronger rules around transparency and bias, especially to protect vulnerable populations. If an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others. The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.

“This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centred, risk-aware, and continuously adaptive regulatory approach – one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities,” the authors say.

Provided by PLOS

The Evolution of AI in Patient Consent is a Data-Driven Future

Henry Adams, Country Manager, InterSystems South Africa

One area undergoing significant evolution in the healthcare industry is the process of obtaining patient consent. The topic is highly controversial but absolutely necessary, and the process must evolve if we are to bring patient care into the 21st century.

Traditionally, patient consent has involved detailed discussions between healthcare providers and patients, ensuring that individuals are fully informed before agreeing to medical procedures or participation in research. However, as artificial intelligence (AI) becomes more prevalent, the mechanisms and ethics surrounding patient consent are being re-examined.

The current state of patient consent

Informed consent is a cornerstone of ethical medical practice, granting patients autonomy over their healthcare decisions. This process typically requires clear communication about the nature of the procedure, potential risks and benefits, and any alternative options.

In the context of AI, particularly with the use of big data and machine learning algorithms, the consent process becomes more complex. Patients must understand not only how their data will be used but also the implications of AI-driven analyses, which may not be entirely transparent.

The rise of dynamic consent models

To address these complexities, the concept of dynamic consent has emerged. Dynamic consent utilises digital platforms to facilitate ongoing, interactive communication between patients and healthcare providers.

This approach allows patients to modify their consent preferences in real-time, reflecting changes in their health status or personal views. Such models aim to enhance patient engagement and trust, providing a more nuanced and flexible framework for consent in the digital age.

AI has the potential to revolutionise the consent process by personalising and simplifying information delivery. Intelligent systems can tailor consent documents to individual patients, highlighting the most pertinent information and using language that aligns with the patient’s comprehension level. In addition, AI-powered chatbots can engage in real-time dialogues, answering patient questions and clarifying uncertainties, enhancing understanding and facilitating informed decision-making.
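
As a toy illustration of what matching a patient’s comprehension level could involve – the consent-text variants and target reading grade are invented, while `textstat` is a real open-source readability library – a system might score each variant and pick the closest match:

    # Toy sketch: choose the consent-text variant whose readability best
    # matches a target reading grade. Variants and target are illustrative.
    import textstat

    variants = {
        "plain": ("We will use your health records to teach a computer "
                  "program to spot disease earlier."),
        "detailed": ("Your longitudinal clinical data will be incorporated "
                     "into a supervised machine-learning model for early "
                     "diagnostic prediction."),
    }

    def pick_variant(target_grade: float) -> str:
        """Return the variant whose Flesch-Kincaid grade is nearest the target."""
        return min(
            variants,
            key=lambda name: abs(
                textstat.flesch_kincaid_grade(variants[name]) - target_grade),
        )

    print(pick_variant(target_grade=6.0))  # expected to favour "plain"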

Data privacy, ethical and security considerations

The integration of AI into patient consent processes necessitates increased attention to data privacy and security. Because AI systems require access to vast amounts of personal health data, robust additional safeguards must be in place to protect against unauthorised access and breaches. Ensuring that AI algorithms operate transparently, and that patients are aware of how their data is being used, is critical to maintaining trust in the healthcare system, and in AI in particular.

While AI can augment the consent process, the ethical implications of its use must be carefully considered. The potential for AI to inadvertently introduce biases or operate without full transparency poses challenges to informed consent. Therefore, human oversight remains indispensable.

Healthcare professionals must work alongside AI systems, the “human in the loop”, to ensure that the technology serves as a tool to enhance, rather than replace, the human touch in patient interactions.

The next 5-10 years

Over the next decade, AI will become increasingly integrated into patient consent processes. Experts predict advancements in natural language processing and machine learning will lead to more sophisticated and user-friendly consent platforms. However, the centrality of human judgment in medical decision-making is unlikely to diminish. AI can provide valuable support, but the nuanced understanding and empathy of healthcare professionals will remain vital.

So, as we take all of this into account, the evolution of AI in patient consent processes offers promising avenues for enhancing patient autonomy and streamlining healthcare operations. By leveraging AI responsibly, healthcare institutions can create more personalised, efficient, and secure consent experiences.

Nonetheless, it is imperative to balance technological innovation with ethical considerations, ensuring that human judgment continues to play a pivotal role in medical decision-making. As we navigate this new world, a collaborative approach that integrates AI capabilities with human expertise will be essential in shaping the future of patient consent. And for healthcare in South Africa, this is going to have to start with education.

A New Way of Visualising BP Data to Better Manage Hypertension

Photo by National Cancer Institute on Unsplash

If a picture is worth a thousand words, how much is a graph worth? For doctors trying to determine whether a patient’s blood pressure is within normal range, the answer may depend on the type of graph they’re looking at.

A new study from the University of Missouri highlights how different graph formats can affect clinical decision-making. Because blood pressure fluctuates moment to moment, day to day, it can be tricky for doctors to accurately assess it.

“Sometimes a patient’s blood pressure is high at the doctor’s office but normal at home, a condition called white coat hypertension,” said Victoria Shaffer, a psychology professor in the College of Arts and Science and lead author of the study published in the Journal of General Internal Medicine. “There are some estimates that 10% to 20% of the high blood pressure that gets diagnosed in the clinic is actually controlled – it’s just white coat hypertension – and if you take those same people’s blood pressure at home, it is really controlled.”

In the study, Shaffer and the team showed 57 doctors how a hypothetical patient’s blood pressure data would change over time using two different types of graphs. One raw graph showed the actual numbers, which displayed peaks and valleys, while the other graph was a new visual tool they created: a smoothed graph that averages out fluctuations in data.  

When the blood pressure of the patient was under control but had a lot of fluctuation, the doctors were more likely to accurately assess the patient’s health using the new smoothed graph compared to the raw graph.

“Raw data can be visually noisy and hard to interpret because it is easy to get distracted by outliers in the data,” Shaffer said. “At the end of the day, patients and their doctors just want to know if blood pressure is under control, and this new smoothed graph can be an additional tool to make it easier and faster for busy doctors to accurately assess that.”
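 
The study’s exact smoothing method is not described here; a rolling average is one standard way to damp such fluctuations, sketched below on synthetic home readings:

    # Illustrative only: a rolling mean damps the peaks and valleys in raw
    # home BP readings. The 7-reading window and the synthetic data are
    # assumptions; the study's smoothing method may differ.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    raw = pd.Series(128 + rng.normal(0, 9, 30))  # controlled but noisy systolic values

    smoothed = raw.rolling(window=7, min_periods=1).mean()

    print(f"raw spread:      {raw.min():.0f}-{raw.max():.0f} mmHg")
    print(f"smoothed spread: {smoothed.min():.0f}-{smoothed.max():.0f} mmHg")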

This proof-of-concept study is the foundation for Shaffer’s ongoing research with Richelle Koopman, a professor in the School of Medicine, which includes working with Vanderbilt University and Oregon Health & Science University to determine whether the new smoothed graph can one day be shown to patients taking their own blood pressure at home. The research team is working to get the technology integrated with HIPAA-compliant electronic health records that patients and their care team have access to.

This could alleviate pressure on the health care system by reducing the need for in-person visits when blood pressure is under control, and by reducing the risk of false positives that may lead to over-treatment.

 “There are some people who are being over-treated with unnecessary blood pressure medication that can make them dizzy and lower their heart rate,” Shaffer said. “This is particularly risky for older adults who are more at risk for falling. Hopefully, this work can help identify those who are being over-treated.”

The findings were not particularly surprising to Shaffer.

“As a psychologist, I know that, as humans, we have these biases that underlie a lot of our judgments and decisions,” Shaffer said. “We tend to be visually drawn to extreme cases and perceive extreme cases as threats. It’s hard to ignore, whether you’re a patient or a provider. We are all humans.”

Given the increasing popularity of health informatics and smart wearable devices that track vital signs, the smoothed graphs could one day be applied to interpreting other health metrics.

“We have access to all this data now like never before, but how do we make use of it in a meaningful way, so we are not constantly overwhelming people?” Shaffer said. “With better visualisation tools, we can give people better context for their health information and help them take action when needed.”

Source: EurekAlert!

Taking Your Medical Practice to the Next Level

As a healthcare professional, you’re used to taking care of the health of your patients. But what about the health of your practice? If you’re not sure, that’s understandable – after all, doctors and practice managers have enough on their plate without worrying about finding opportunities for more revenue. Luckily, there’s a new, easy-to-use tool from a provider with a strong track record of developing actionable, real-world solutions for the South African market.

A control room for your practice

“Think of the new Engage Mx report as the control room for your practice,” says Dr Benji Ozynski, who developed the platform in partnership with Altron HealthTech. “With Engage Mx, everything you need to know about your practice is in one place with one easy-to-use interface.”

“Engage Mx on Elixir Live has been in the market for a couple of years and has already proven popular with doctors eager to embrace the advantages of data-driven healthcare,” says Ntombizanele Gxamza, Head of Product Strategy at Altron HealthTech. “The new Engage Mx report functionality brings together data about the financial health of the practice and, most importantly, the health of its patient population, supporting a patient-centred approach. At a glance, the report presents a range of statistics as easy-to-read graphics. It can be accessed on any device, making it convenient for even the busiest doctor.”

When using the new Engage Mx report, healthcare professionals can see:

  • Revenue by week
  • Number of patients seen, compared month by month and year by year
  • Busiest days, months, and seasons
  • Patient profiles by age group
  • Gaps in care by age group
  • Trends in types of conditions being treated

Using this kind of information, doctors can build up a clear picture of the health of their practice and see where there may be opportunities for improvement. In one pilot project with a GP running a busy practice, the Engage Mx report uncovered over R400 000 lost to missed patient health reviews. The doctor was able to see which age groups were most likely to need intervention – and prevention – before health problems became more serious.
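
Engage Mx’s internals are not public, but the first metric on the list above amounts to a simple aggregation over visit records, as this hypothetical pandas sketch shows:

    # Hypothetical sketch of a "revenue by week" metric computed over a
    # made-up table of visit records; not Engage Mx's actual implementation.
    import pandas as pd

    visits = pd.DataFrame({
        "date": pd.to_datetime(["2025-03-03", "2025-03-05", "2025-03-12",
                                "2025-03-14", "2025-03-19"]),
        "fee_rand": [850.0, 1200.0, 640.0, 980.0, 760.0],  # invented figures
    })

    revenue_by_week = (visits.set_index("date")["fee_rand"]
                             .resample("W").sum())
    print(revenue_by_week)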

Better health for patients, healthcare professionals and practices

The report helps healthcare professionals answer a range of questions such as:

  • When is the best time for me to take leave?
  • What kind of services could I add to the practice offering?
  • Are there growing patient needs that my practice could be fulfilling?
  • Where and how can I innovate my offering?
  • How do I grow my practice sustainably?
  • What kind of resources am I going to need in order to grow?
  • When and how should I be communicating with my patients?
  • Could my practice benefit from running marketing campaigns?

Because the data is so clearly visualised and easily accessible, busy healthcare providers don’t need to take hours out of their professional or personal time to make sense of the numbers.

Ultimately, the beauty of the new tool is that better health outcomes for patients can also improve the financial health of the practice – and ease the time and administrative burden on doctors, since it helps reduce the hours currently spent on these tasks manually. Adds Dr Ozynski: “Doctors who’ve already used the Engage Mx report have told me that it makes it easier to plan their leave, for example, or look for new opportunities to expand services, while making their patients feel valued.”

Data-driven healthcare is perhaps the most exciting global trend in healthcare today. Practical, user-friendly tools like Engage Mx let South African doctors keep doing what they do best – bringing the human touch to healthcare – while future-proofing their practices in an increasingly complex clinical and regulatory environment. A financially healthy practice is a sustainable practice, and that’s good for everyone.

Learn more: https://eu1.hubs.ly/H0jd6vb0

Is AI in Medicine Playing Fair?

Photo by Christina Morillo

As artificial intelligence (AI) rapidly integrates into health care, a new study by researchers at the Icahn School of Medicine at Mount Sinai reveals that generative AI models may recommend different treatments for the same medical condition based solely on a patient’s socioeconomic and demographic background.

Their findings, which are detailed in the April 7, 2025 online issue of Nature Medicine, highlight the importance of early detection and intervention to ensure that AI-driven care is safe, effective, and appropriate for all.

As part of their investigation, the researchers stress-tested nine large language models (LLMs) on 1,000 emergency department cases, each replicated with 32 different patient backgrounds, generating more than 1.7 million AI-generated medical recommendations. Despite identical clinical details, the AI models occasionally altered their decisions based on a patient’s socioeconomic and demographic profile, affecting key areas such as triage priority, diagnostic testing, treatment approach, and mental health evaluation. 
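
The study’s pipeline is not reproduced here, but its design can be sketched simply: hold the clinical details fixed, vary only the stated background, and look for divergence in the model’s advice. In this illustrative Python sketch, `ask_model` is a hypothetical stand-in for a call to whichever LLM is under test:

    # Illustrative sketch of the stress-test design described above (not
    # the study's code): identical clinical facts, varied backgrounds.
    CASE = ("Patient presents with two hours of crushing chest pain "
            "and diaphoresis.")
    BACKGROUNDS = [
        "a high-income 45-year-old man",
        "a low-income 45-year-old man",
        "an unhoused 45-year-old man",
    ]

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in; swap in a real LLM API call here.
        return "triage priority 2; ECG and troponin now"

    def stress_test() -> dict:
        answers = {}
        for background in BACKGROUNDS:
            prompt = (f"The patient is {background}. {CASE} "
                      "Assign a triage priority (1-5) and recommend next steps.")
            answers[background] = ask_model(prompt)
        return answers

    results = stress_test()
    # Identical clinical details should yield identical advice; any
    # divergence across backgrounds is the bias signal the study measures.
    if len(set(results.values())) > 1:
        print("Recommendations diverge by background:", results)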

“Our research provides a framework for AI assurance, helping developers and health care institutions design fair and reliable AI tools,” says co-senior author Eyal Klang, MD, Chief of Generative-AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. “By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design, and oversight. Our rigorous validation process tests AI outputs against clinical standards, incorporating expert feedback to refine performance. This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better health care for all.” 

One of the study’s most striking findings was the tendency of some AI models to escalate care recommendations—particularly for mental health evaluations—based on patient demographics rather than medical necessity. In addition, high-income patients were more often recommended advanced diagnostic tests such as CT scans or MRI, while low-income patients were more frequently advised to undergo no further testing. The scale of these inconsistencies underscores the need for stronger oversight, say the researchers. 

While the study provides critical insights, researchers caution that it represents only a snapshot of AI behavior.  Future research will continue to include assurance testing to evaluate how AI models perform in real-world clinical settings and whether different prompting techniques can reduce bias. The team also aims to work with other health care institutions to refine AI tools, ensuring they uphold the highest ethical standards and treat all patients fairly. 

“I am delighted to partner with Mount Sinai on this critical research to ensure AI-driven medicine benefits patients across the globe,” says physician-scientist and first author of the study, Mahmud Omar, MD, who consults with the research team. “As AI becomes more integrated into clinical care, it’s essential to thoroughly evaluate its safety, reliability, and fairness. By identifying where these models may introduce bias, we can work to refine their design, strengthen oversight, and build systems that ensure patients remain at the heart of safe, effective care. This collaboration is an important step toward establishing global best practices for AI assurance in health care.” 

“AI has the power to revolutionize health care, but only if it’s developed and used responsibly,” says co-senior author Girish N. Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, Director of the Hasso Plattner Institute for Digital Health, and the Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai. “Through collaboration and rigorous validation, we are refining AI tools to uphold the highest ethical standards and ensure appropriate, patient-centered care. By implementing robust assurance protocols, we not only advance technology but also build the trust essential for transformative health care. With proper testing and safeguards, we can ensure these technologies improve care for everyone—not just certain groups.”

Next, the investigators plan to expand their work by simulating multistep clinical conversations and piloting AI models in hospital settings to measure their real-world impact. They hope their findings will guide the development of policies and best practices for AI assurance in health care, fostering trust in these powerful new tools. 

Source: The Mount Sinai Hospital / Mount Sinai School of Medicine

First of its Kind Collaborative Report Unveils the Transformative Role of Artificial Intelligence and Data Science in Advancing Global Health in Africa

April 2nd, 2025, Nairobi, Kenya – Africa stands at the forefront of a revolutionary shift in global health driven by artificial intelligence (AI) and data science, according to a report released today by the Science for Africa Foundation (SFA Foundation) together with African institutions and research councils. The report is the first of its kind to comprehensively examine national-level perspectives across Africa on AI and data science for global health, and the landscape it maps offers an unprecedented view of how AI governance in Africa can be improved to reduce risk and halt the perpetuation of inequity.

Titled “Governance of Artificial Intelligence for Global Health in Africa”, the report was produced through the SFA Foundation’s Science Policy Engagement with Africa’s Research (SPEAR) programme as the culmination of a year-long effort involving convenings across Africa’s five regions, policy analysis and extensive surveys to identify policy gaps and opportunities in AI and data science for global health. Grounded in consultations across 43 African countries, the report incorporates insights from over 300 stakeholders, ensuring a comprehensive and inclusive approach to its findings.

“The global AI governance framework remains ill-suited to Africa’s unique needs and priorities,” said Prof. Tom Kariuki, Chief Executive Officer of the SFA Foundation. “Our report on AI in global health and data sciences champions a shift towards frameworks that reflect Africa’s context, ensuring ethical, equitable, and impactful applications of AI not only for our continent’s health challenges, but also to advance global health.”

Key findings and opportunities

The report identifies key trends, gaps, and opportunities in AI and data science for health across Africa:

  • Increasing national investments: Countries including Mauritius, Nigeria, Malawi, Ethiopia, Ghana, Rwanda, Senegal, and Tunisia have launched national AI programmes, while at least 39 African countries are actively pursuing AI R&D. Initiatives such as Rwanda’s Seed Investment Fund and Nigeria’s National Centre for AI and Robotics illustrate promising investments in AI startups.
  • Need for health-specific AI governance: Despite growing interest, there is a critical gap in governance frameworks tailored to health AI across Africa. While health is prioritised in AI discussions, specific frameworks for responsible deployment in health are still underdeveloped.
  • Inclusive AI policy development: Many existing AI policies lack gender and equity considerations. Closing these gaps is essential to prevent inequalities in access to AI advancements and health outcomes.

“Incorporating AI into healthcare is not just about technology—it is about enhancing our policy frameworks to ensure these advancements lead to better health outcomes for all Africans,” added Dr Uzma Alam, Programme Lead of the Science Policy Engagement with Africa’s Research (SPEAR) programme.

  • Existing policy frameworks on which to build and consolidate governance of responsible AI and data science: at least 35 African countries have national science, technology and innovation (STI) and ICT policy frameworks, as well as health research and innovation frameworks, containing policies applicable to the development and deployment of AI and data science.
  • A surge in African research on health AI and data science (big data), raising the need for equitable North-South R&D partnerships.

Recommendations and way forward

The report is expected to act as a catalyst for integrating AI into health strategies across the continent – a significant step in Africa’s journey toward leadership in global health innovation – by calling for:

  • Adaptive and Inclusive AI Governance: The report calls for the integration of diverse perspectives spanning gender, urban-rural dynamics, and indigenous knowledge into AI health governance frameworks. It highlights the need for adaptive policies that balance innovation with equitable access, while leveraging regional collaboration and supporting the informal sector.
  • Innovative Funding and African Representation: Recognising the potential of local knowledge and practices, the report advocates for creative funding models to bolster AI research and development. It emphasises connecting the informal sector to markets and infrastructure to encourage grassroots innovation.
  • The Reinforcement of Science Diplomacy: To position Africa as a key player in global AI governance, the report recommends investing in programmes that align AI technologies with Africa’s health priorities. It also stresses the importance of amplifying Africa’s voice in shaping international standards and agreements through robust science-policy collaboration.
  • The Bridging of the Gendered Digital Divide: Targeted initiatives are needed to bridge the gendered digital divide in Africa, addressing regional disparities and ensuring gender inclusivity in the AI ecosystem. It is essential to focus on programmes that build capacity and improve access to resources.

“The report clearly outlines pathways for leveraging AI to bridge gaps and overcome current capacity constraints, while strengthening Africa’s role as a leader in shaping global health policy,” said Dr Evelyn Gitau, Chief Scientific Officer at the SFA Foundation. “This initiative showcases Africa’s potential to lead, innovate, and influence the global health ecosystem through AI.”

“We envision a world where AI advances health outcomes equitably, benefiting communities around the world. The Science for Africa Foundation’s report brings this vision to life by providing clarity on policy frameworks for AI and data science in global health. This empowers African voices to shape AI policy – not only directing healthcare innovation but setting a precedent for inclusive AI governance across sectors.” – Vilas Dhar, President of the Patrick J. McGovern Foundation.

Access the Report here: https://bit.ly/4jhzMFs