Investigators trained multiple artificial intelligence models to read images from echocardiograms, a type of ultrasound test that evaluates the structure and function of the heart. Their goal was to determine whether AI could use these images to derive measures, such as inflammation and scarring, that are normally obtained through another, more costly test called cardiac magnetic resonance imaging (CMRI). By examining findings from 1,453 patients who had undergone both tests, they found the AI models could not accomplish this task.
“As compared to echocardiograms, cardiac MRI machines are expensive and not available for many patients, especially those in rural areas, so we had hoped that AI could reduce the need for it,” said Alan Kwan, MD, assistant professor in the Department of Cardiology in the Smidt Heart Institute at Cedars-Sinai and co-senior author of the study. “Our results showed the limited powers of AI in this area.”
HealthTech is transforming healthcare through AI, mobile applications, wearable devices, telemedicine, and big data analytics. While these advances offer enormous potential to improve patient outcomes and operational efficiency, they also raise complex legal and regulatory challenges – spanning intellectual property, data privacy, licensing, corporate governance, funding, taxation, and litigation.
Webber Wentzel’s Navigating HealthTech Legal Solutions highlights the firm’s extensive experience in helping innovators, investors, and healthcare providers across Africa address the legal and regulatory complexities of HealthTech. Mapping out the complexities at play across both the technology and the law, this resource brings together Webber Wentzel’s cross-practice teams to give clients a holistic perspective on opportunities, risks, and emerging trends in healthcare innovation.
“Our clients are leading the way in healthcare innovation, and they need legal partners who understand the sector end-to-end,” says Bernadette Versfeld, head of the Consumer sector. “This resource demonstrates how we help businesses navigate regulatory hurdles, adopt new technologies, structure investments effectively, and manage risk, all while enabling growth and innovation.”
Drawing on extensive experience working with healthcare companies, insurers, tech providers, investors, and regulators across Africa, the report provides insights into medical device licensing, HealthTech investment structuring, protecting personal health data, managing litigation risks, and compliance with South Africa’s National Health Insurance Act.
“As part of our ongoing commitment to supporting Africa’s healthcare sector, Webber Wentzel continues to advise on emerging trends, innovative technologies, and regulatory developments. By combining deep sector knowledge with cross-practice expertise, we help clients not just respond to change but shape it, empowering them to navigate the complex intersection of healthcare and technology,” adds Versfeld.
Doctors who use artificial intelligence at work risk having their colleagues deem them less competent for it, according to a recent Johns Hopkins University study.
While generative AI holds significant promise for advancing health care, a new study finds its use in medical decision-making affects how physicians are perceived by their colleagues. The research shows that doctors who rely primarily on generative AI for decision-making face considerable scepticism from fellow clinicians, who associate AI use with a lack of clinical skill and overall competence, resulting in a diminished perceived quality of patient care.
The research included a diverse group of clinicians from a major hospital system, involving attending physicians, residents, fellows, and advanced practice providers. Results of the study were published in npj Digital Medicine.
Stigma stunts better care
The findings may indicate a social barrier to AI adoption in health care settings, which could slow advances that might improve patient care.
“AI is already unmistakably part of medicine,” says Tinglong Dai, professor of business at the Johns Hopkins Carey Business School and co-corresponding author of the study. “What surprised us is that doctors who use it in making medical decisions can be perceived by their peers as less capable. That kind of stigma, not the technology itself, may be an obstacle to better care.”
The study, conducted by researchers at Johns Hopkins University, involved a randomised experiment in which 276 practicing clinicians evaluated different scenarios: a physician using no AI, one using AI as a primary decision-making tool, and another using it for verification. The research found that the more dependent physicians were on AI, the larger the “competence penalty” they faced, meaning they were viewed more sceptically by their peers than physicians who did not rely on AI.
“In the age of AI, human psychology remains the ultimate variable,” says Haiyang Yang, first author of the study and academic program director of the Master of Science in Management program at the Carey Business School. “The way people perceive AI use can matter just as much as, or even more than, the performance of the technology itself.”
Skipping AI equalled more respect
According to the study, peer perception suffers for doctors who rely on AI. Framing generative AI as a “second opinion” or a verification tool partially improved negative perceptions from peers, but it did not fully eliminate them. Not using GenAI, however, resulted in the most favourable peer perceptions.
The findings align with theories that suggest perceived dependence on an external source like AI can be seen as a weakness by clinicians.
Ironically, while GenAI’s visible use can undermine a physician’s perceived clinical expertise among peers, the study also found that clinicians still generally acknowledge the value of GenAI for improving the accuracy of clinical assessments, and that they view institutionally customised GenAI as even more useful.
The collaborative nature of the study led to thoughtful suggestions for GenAI implementation in health care settings, which must balance innovation with maintaining professional trust and physician reputation, the researchers note.
“Physicians place a high value on clinical expertise, and as AI becomes part of the future of medicine, it’s important to recognise its potential to complement – not replace – clinical judgment, ultimately strengthening decision making and improving patient care,” said Risa Wolf, co-corresponding author of the research and associate professor of pediatric endocrinology at Johns Hopkins School of Medicine with a joint appointment at the Carey Business School.
Study has implications beyond medical education, suggesting other fields could benefit from AI-enhanced training
Artificial intelligence (AI) is becoming a powerful new tool in training and education, including in the field of neurosurgery. Yet a new study suggests that AI tutoring provides better results when paired with human instruction.
Researchers at the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) of McGill University are studying how AI and virtual reality (VR) can improve the training and performance of brain surgeons. They simulate brain surgeries using VR, monitor students’ performance using AI and provide continuous verbal feedback on how students can improve performance and prevent errors. Previous research has shown that an intelligent tutoring system powered by AI developed at the Centre outperformed expert human teachers, but these instructors were not provided with trainee AI performance data.
In their most recent study, published in JAMA Surgery, the researchers recruited 87 medical students from four Quebec medical schools and divided them into three groups: one trained with AI-only verbal feedback, one with expert instructor feedback, and one with expert feedback informed by real-time AI performance data. The team recorded the students’ performance, including how well and how quickly their surgical skills improved while undergoing the different types of training.
They found that students receiving AI-augmented, personalised feedback from a human instructor outperformed both other groups in surgical performance and skill transfer. This group also demonstrated significantly better risk management for bleeding and tissue injury – two critical measures of surgical expertise. The study suggests that while intelligent tutoring systems can provide standardised, data-driven assessments, the integration of human expertise enhances engagement and ensures that feedback is contextualised and adaptive.
“Our findings underscore the importance of human input in AI-driven surgical education,” said lead study author Bianca Giglio. “When expert instructors used AI performance data to deliver tailored, real-time feedback, trainees learned faster and transferred their skills more effectively.”
While this study was specific to neurosurgical training, its findings could carry over to other professions where students must acquire highly technical and complex skills in high-pressure environments.
“AI is not replacing educators – it’s empowering them,” added senior author Dr Rolando Del Maestro, a neurosurgeon and current Director of the Centre. “By merging AI’s analytical power with the critical guidance of experienced instructors, we are moving closer to creating the ‘Intelligent Operating Room’ of the future capable of assessing and training learners while minimising errors during human surgical procedures.”
AI-based medicine will revolutionise care including for Alzheimer’s and diabetes, predicts a technology expert, but it must be accessible to all patients
Healing with Artificial Intelligence, written by technology expert Daniele Caligiore, uses the latest science research to highlight key innovations assisted by AI such as diagnostic imaging and surgical robots.
From exoskeletons that help spinal injury patients walk to algorithms that can predict the onset of dementia years in advance, Caligiore explores what he describes as a ‘revolution’ that will change healthcare forever.
Economically, the market for AI in healthcare is experiencing rapid growth, with forecasts predicting an increase in value from around USD 11 billion in 2021 to nearly USD 188 billion by 2030, reflecting an annual growth rate of 37%. AI is already being used in some countries, for example to search through genetic data for disease markers, or to assist with scheduling and other administrative tasks – and this trend is set to continue.
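The cited figures are internally consistent. As a quick arithmetic sketch (using only the values quoted above), the implied compound annual growth rate can be checked directly:

```python
# Sanity-check the cited forecast: does roughly 37% annual growth take
# the market from about USD 11 billion (2021) to nearly USD 188 billion (2030)?
start_value = 11.0   # USD billions, 2021
end_value = 188.0    # USD billions, 2030
years = 9            # 2021 -> 2030

# Implied compound annual growth rate (CAGR)
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 37%

# Forward projection at 37% per year lands close to the quoted figure
projected = start_value * 1.37 ** years
print(f"Projected 2030 value: about USD {projected:.0f} billion")
```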
However, the author caveats his predictions of progress by warning these technologies may widen existing inequality. Caligiore suggests that AI-based medicine must be available to all people, regardless of where they live or how much they earn, and that people from low-income nations must not be excluded from cutting-edge care which wealthier nations can access.
Other challenges posed by the advancement of AI in healthcare include the question of who takes responsibility for treatment decisions, especially when a procedure goes wrong. This is a particular challenge given widespread concerns around explainable AI, as many advanced AI systems operate as black boxes, making decisions through complex processes that even their creators cannot fully understand or explain.
Caligiore says AI should support doctors and patients, not replace doctors who, says the author, have a ‘unique ability to offer empathy, understanding, and emotional support’.
“AI should be viewed as a tool, not a colleague, and it should always be seen as a support, never a replacement,” writes Caligiore.
“It is important to find the right balance in using AI tools, both for doctors and patients. Patients can use AI to learn more about their health, such as what diseases may be associated with their symptoms or what lifestyle changes may help prevent illness. However, this does not mean AI should replace doctors.”
Despite his warnings, Caligiore is largely optimistic about the impact of AI in healthcare: “Like a microscope detecting damaged cells or a map highlighting brain activity during specific tasks, AI can uncover valuable insights that might go unnoticed, aiding in more accurate and personalized diagnoses and treatments,” he says.
In any case, Caligiore predicts the healthcare landscape will look ‘dramatically different’ in a few years, with technology acting as a ‘magnifying glass for medicine’ to enable doctors to observe the human body with greater precision and detail.
Examples of where AI will make profound impacts in healthcare include regenerative medicine, where gene and stem cell therapies repair damaged cells and organs. Spinal cord injury patients are among those who could benefit.
AI may also enable personalised therapies, suggesting treatments tailored to specific individuals, often based on their unique genetic profile. Studies are underway into targeting different tremor types in Parkinson’s disease and breast cancer subtypes.
The convergence of regenerative medicine, genetically modified organisms (GMOs), and AI is the next frontier in medicine, Caligiore suggests. GMOs, living organisms whose genetic material has been altered through genetic engineering techniques, have already paved the way for personalised gene therapies.
Blending real and virtual worlds may also prove useful to healthcare. Examples include the ‘metaverse’, where patients participate in group therapy through avatars, and ‘digital twins’, AI simulations of a patient’s body and brain that allow doctors to identify underlying causes of disease and simulate the effects of various therapies for a specific patient, helping them make more informed decisions.
These advances and others will reshape the doctor-patient relationship, according to Healing with Artificial Intelligence, but the author suggests the key is for patients and clinicians to keep a critical mindset about AI.
Caligiore warns that the role of physicians will evolve as AI becomes more integrated into healthcare, but the need for human interaction will remain ‘central to patient care’.
“While healthcare professionals must develop technical skills to use AI tools, they should also nurture and enhance qualities that AI cannot replicate – such as soft skills and emotional intelligence. These human traits are essential for introducing an emotional component into work environments,” he explains.
The popular large language model performs better than expected but still has some knowledge gaps – and hallucinations
When people worry that they’re getting sick, they are increasingly turning to generative artificial intelligence like ChatGPT for a diagnosis. But how accurate are the answers that AI gives out?
Research recently published in the journal iScience puts ChatGPT and its large language models to the test, with a few surprising conclusions.
“People talk to ChatGPT all the time these days, and they say: ‘I have these symptoms. Do I have cancer? Do I have cardiac arrest? Should I be getting treatment?’” Hamed said. “It can be a very dangerous business, so we wanted to see what would happen if we asked these questions, what sort of answers we got and how these answers could be verified from the biomedical literature.”
The researchers tested ChatGPT for disease terms and three types of associations: drug names, genetics and symptoms. The AI showed high accuracy in identifying disease terms (88–97%), drug names (90–91%) and genetic information (88–98%). Hamed admitted he thought it would be “at most 25% accuracy.”
“The exciting result was ChatGPT said cancer is a disease, hypertension is a disease, fever is a symptom, Remdesivir is a drug and BRCA is a gene related to breast cancer,” he said. “Incredible, absolutely incredible!”
Symptom identification, however, scored lower (49–61%), and the reason may be how the large language models are trained. Doctors and researchers use biomedical ontologies to define and organise terms and relationships for consistent data representation and knowledge-sharing, but users enter more informal descriptions.
“ChatGPT uses more of a friendly and social language, because it’s supposed to be communicating with average people. In medical literature, people use proper names,” Hamed said. “The LLM is apparently trying to simplify the definition of these symptoms, because there is a lot of traffic asking such questions, so it started to minimize the formalities of medical language to appeal to those users.”
One puzzling result stood out. The National Institutes of Health maintains a database called GenBank, which gives an accession number to every identified DNA sequence. It’s usually a combination of letters and numbers. For example, the designation for the Breast Cancer 1 gene (BRCA1) is NM_007294.4.
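For illustration, the general shape of such an accession number, a letter prefix, an underscore, digits, and a version suffix, can be sketched with a simple pattern. This regex is a simplification for demonstration only, not a complete validator for every GenBank identifier format:

```python
import re

# Simplified pattern matching accessions shaped like the BRCA1 example
# above (NM_007294.4): two letters, underscore, digits, ".version".
ACCESSION = re.compile(r"^[A-Z]{2}_\d{6,9}\.\d+$")

print(bool(ACCESSION.match("NM_007294.4")))   # True
print(bool(ACCESSION.match("made-up-id")))    # False
```

A check of this kind would catch malformed identifiers, but not fabricated ones that happen to follow the format, which is why the hallucinations described below are hard to detect without consulting the database itself.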
When asked for these numbers as part of the genetic information testing, ChatGPT just made them up – a phenomenon known as “hallucinating.” Hamed sees this as a major failing amid so many other positive results.
“Maybe there is an opportunity here that we can start introducing these biomedical ontologies to the LLMs to provide much higher accuracy, get rid of all the hallucinations and make these tools into something amazing,” he said.
Hamed’s interest in LLMs began in 2023, when he discovered ChatGPT and heard about the issues regarding fact-checking. His goal is to expose the flaws so data scientists can adjust the models as needed and make them better.
“If I am analysing knowledge, I want to make sure that I remove anything that may seem fishy before I build my theories and make something that is not accurate,” he said.
New model that combines MRI, biochemical, and clinical information shows potential to enhance care
Illustration highlighting the integration of MRI radiomics and biochemical biomarkers for knee osteoarthritis progression prediction. Created with Biorender.
Image credit: Wang T, et al., 2025, PLOS Medicine, CC-BY 4.0
An artificial intelligence (AI)-assisted model that combines a patient’s MRI, biochemical, and clinical information shows preliminary promise in improving predictions of whether their knee osteoarthritis may soon worsen. Ting Wang of Chongqing Medical University, China, and colleagues presented this model on August 21st in the open-access journal PLOS Medicine.
In knee osteoarthritis, cartilage in the knee joint gradually wears away, causing pain and stiffness. It affects an estimated 303.1 million people worldwide and can lead to the need for total knee replacement. Being able to better predict how a person’s knee osteoarthritis may worsen in the near future could help inform more timely treatment. Prior research suggests that computational models combining multiple types of data – including a patient’s MRI results, clinical assessments, and blood and urine biochemical tests – could enhance such predictions.
The integration of all three types of information in a single predictive model has not been widely reported. To address that gap, Wang and colleagues utilized data from the Foundation of the National Institutes of Health Osteoarthritis Biomarkers Consortium on 594 people with knee osteoarthritis, including their biochemical test results, clinical data, and a total of 1,753 knee MRIs captured over a 2-year timespan.
With the help of AI tools, the researchers used half of the data to develop a predictive model combining all three data types. Then, they used the other half of the data to test the model, which they named the Load-Bearing Tissue Radiomic plus Biochemical biomarker and Clinical variable Model (LBTRBC-M).
In the tests, the LBTRBC-M showed good accuracy in using a patient’s MRI, biochemical and clinical data to predict whether, within the next two years, they would experience worsening pain alone, worsening pain alongside joint space narrowing in the knee (an indicator of structural worsening), joint space narrowing alone, or no worsening at all.
The researchers also had seven resident physicians use the model to assist their own predictions of worsening knee osteoarthritis, finding that it improved their accuracy from 46.9 to 65.4 percent.
These findings suggest that a model like LBTRBC-M could help enhance knee osteoarthritis care. However, further model refinement and validation in additional groups of patients is needed.
The authors add, “Our study shows that combining deep learning with longitudinal MRI radiomics and biochemical biomarkers significantly improves the prediction of knee osteoarthritis progression—potentially enabling earlier, more personalized intervention.”
The authors state, “This work is the result of years of collaboration across multiple disciplines, and we were especially excited to see how non-invasive imaging biomarkers could be leveraged to support individualized patient care.”
Co-author Prof. Changhai Ding notes, “This study marks a step forward in using artificial intelligence to extract meaningful clinical signals from complex datasets in musculoskeletal health.”
Investigators have developed an artificial intelligence-assisted diagnostic system that can estimate bone mineral density in both the lumbar spine and the femur of the upper leg, based on X-ray images. The advance is described in a study published in the Journal of Orthopaedic Research.
A total of 1,454 X-ray images were analysed using the scientists’ system. For patients with bone density loss, or osteopenia, sensitivity was 86.4% for the lumbar spine and 84.1% for the femur; the respective specificities were 80.4% and 76.3%. (Sensitivity reflects the ability of the test to correctly identify people with osteopenia, whereas specificity reflects its ability to correctly identify those without it.) The test also had high sensitivity and specificity for categorising patients with and without osteoporosis.
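The two metrics reported above can be stated precisely. As a minimal sketch with made-up confusion-matrix counts (not the study's data):

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of people WITH the condition correctly identified."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of people WITHOUT the condition correctly identified."""
    return true_neg / (true_neg + false_pos)

# Hypothetical example: 86 of 100 osteopenia cases flagged,
# 80 of 100 unaffected cases correctly cleared.
print(sensitivity(86, 14))  # 0.86
print(specificity(80, 20))  # 0.8
```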
“Bone mineral density measurement is essential for screening and diagnosing osteoporosis, but limited access to diagnostic equipment means that millions of people worldwide may remain undiagnosed,” said corresponding author Toru Moro, MD, PhD, of the University of Tokyo. “This AI system has the potential to transform routine clinical X-rays into a powerful tool for opportunistic screening, enabling earlier, broader, and more efficient detection of osteoporosis.”
New paper critically examines the US Food and Drug Administration’s regulatory framework for artificial intelligence-powered healthcare products, highlighting gaps in safety evaluations, post-market surveillance, and ethical considerations.
An agile, transparent, and ethics-driven oversight system is needed for the U.S. Food and Drug Administration (FDA) to balance innovation with patient safety when it comes to artificial intelligence-driven medical technologies. That is the takeaway from a new report issued to the FDA, published this week in the open-access journal PLOS Medicine by Leo Celi of the Massachusetts Institute of Technology, and colleagues.
Artificial intelligence is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they’ve been approved, meaning their behaviour can shift in unpredictable ways once they’re in use.
In the new paper, Celi and his colleagues argue that the FDA’s current system is not set up to keep tabs on these post-approval changes. Their analysis calls for stronger rules around transparency and bias, especially to protect vulnerable populations. If an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others. The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.
“This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centred, risk-aware, and continuously adaptive regulatory approach – one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities,” the authors say.
Henry Adams, Country Manager, InterSystems South Africa
One area undergoing significant evolution in the healthcare industry is the process of obtaining patient consent. The process is controversial but absolutely necessary, and it must evolve if we are to bring patient care into the 21st century.
Traditionally, patient consent has involved detailed discussions between healthcare providers and patients, ensuring that individuals are fully informed before agreeing to medical procedures or participation in research. However, as artificial intelligence (AI) becomes more prevalent, the mechanisms and ethics surrounding patient consent are being re-examined.
The current state of patient consent
Informed consent is a cornerstone of ethical medical practice, granting patients autonomy over their healthcare decisions. This process typically requires clear communication about the nature of the procedure, potential risks and benefits, and any alternative options.
In the context of AI, particularly with the use of big data and machine learning algorithms, the consent process becomes more complex. Patients must understand not only how their data will be used but also the implications of AI-driven analyses, which may not be entirely transparent.
The rise of dynamic consent models
To address these complexities, the concept of dynamic consent has emerged. Dynamic consent utilises digital platforms to facilitate ongoing, interactive communication between patients and healthcare providers.
This approach allows patients to modify their consent preferences in real-time, reflecting changes in their health status or personal views. Such models aim to enhance patient engagement and trust, providing a more nuanced and flexible framework for consent in the digital age.
AI has the potential to revolutionise the consent process by personalising and simplifying information delivery. Intelligent systems can tailor consent documents to individual patients, highlighting the most pertinent information and using language that aligns with the patient’s comprehension level. In addition, AI-powered chatbots can engage in real-time dialogues, answering patient questions and clarifying uncertainties, enhancing understanding and facilitating informed decision-making.
Data privacy, ethical and security considerations
The integration of AI into patient consent processes necessitates increased attention to data privacy and security. As AI systems require access to vast amounts of personal health data, robust additional safeguards must be in place to protect against unauthorised access and breaches. Ensuring that AI algorithms operate transparently, and that patients are aware of how their data is being used, is critical to maintaining trust in the healthcare system, and in AI in particular.
While AI can augment the consent process, the ethical implications of its use must be carefully considered. The potential for AI to inadvertently introduce biases or operate without full transparency poses challenges to informed consent. Therefore, human oversight remains indispensable.
Healthcare professionals must work alongside AI systems, the “human in the loop”, to ensure that the technology serves as a tool to enhance, rather than replace, the human touch in patient interactions.
The next 5-10 years
Over the next decade, AI will become increasingly integrated into patient consent processes. Experts predict advancements in natural language processing and machine learning will lead to more sophisticated and user-friendly consent platforms. However, the centrality of human judgment in medical decision-making is unlikely to diminish. AI can provide valuable support, but the nuanced understanding and empathy of healthcare professionals will remain vital.
Taking all of this into account, the evolution of AI in patient consent processes offers promising avenues for enhancing patient autonomy and streamlining healthcare operations. By leveraging AI responsibly, healthcare institutions can create more personalised, efficient, and secure consent experiences.
Nonetheless, it is imperative to balance technological innovation with ethical considerations, ensuring that human judgment continues to play a pivotal role in medical decision-making. As we navigate this new world, a collaborative approach that integrates AI capabilities with human expertise will be essential in shaping the future of patient consent. And for healthcare in South Africa, this is going to have to start with education.