Category: IT in Healthcare

Doctors Who Use AI Viewed Negatively by Their Peers, Study Shows

Johns Hopkins researchers find that despite pressure on clinicians to be early adopters of AI, many face scepticism from peers for using it

Photo by Andres Siimon on Unsplash

Doctors who use artificial intelligence at work risk having their colleagues deem them less competent for it, according to a recent Johns Hopkins University study.

While generative AI holds significant promise for advancing health care, a new study finds its use in medical decision-making affects how physicians are perceived by their colleagues. The research shows that doctors who rely primarily on generative AI for decision-making face considerable scepticism from fellow clinicians, who associate AI use with a lack of clinical skill and overall competence, resulting in a diminished perceived quality of patient care.

The research included a diverse group of clinicians from a major hospital system, involving attending physicians, residents, fellows, and advanced practice providers. Results of the study were published in Nature Digital Medicine.

Stigma stunts better care

The findings may indicate a social barrier to AI adoption in health care settings, which could slow advances that might improve patient care.

“AI is already unmistakably part of medicine,” says Tinglong Dai, professor of business at the Johns Hopkins Carey Business School and co-corresponding author of the study. “What surprised us is that doctors who use it in making medical decisions can be perceived by their peers as less capable. That kind of stigma, not the technology itself, may be an obstacle to better care.”

The study, conducted by researchers at Johns Hopkins University, involved a randomised experiment in which 276 practicing clinicians evaluated different scenarios: a physician using no AI, one using AI as a primary decision-making tool, and another using it for verification. The research found that the more physicians depended on AI, the greater the “competence penalty” they faced: they were viewed more sceptically by their peers than physicians who did not rely on AI.

“In the age of AI, human psychology remains the ultimate variable,” says Haiyang Yang, first author of the study and academic program director of the Master of Science in Management program at the Carey Business School. “The way people perceive AI use can matter just as much as, or even more than, the performance of the technology itself.”

Skipping AI equalled more respect

According to the study, peer perception suffers for doctors who rely on AI. Framing generative AI as a “second opinion” or a verification tool partially improved negative perceptions from peers, but it did not fully eliminate them. Not using GenAI, however, resulted in the most favourable peer perceptions.

The findings align with theories that suggest perceived dependence on an external source like AI can be seen as a weakness by clinicians.

Ironically, while visible GenAI use can undermine a physician’s perceived clinical expertise among peers, the study also found that clinicians generally acknowledge the value of GenAI for improving the accuracy of clinical assessments, and that they view institutionally customised GenAI as even more useful.

The study’s collaborative design also yielded practical suggestions for implementing GenAI in health care settings, which the researchers note are crucial for balancing innovation with the maintenance of professional trust and physician reputation.

“Physicians place a high value on clinical expertise, and as AI becomes part of the future of medicine, it’s important to recognise its potential to complement – not replace – clinical judgment, ultimately strengthening decision making and improving patient care,” said Risa Wolf, co-corresponding author of the research and associate professor of pediatric endocrinology at Johns Hopkins School of Medicine with a joint appointment at the Carey Business School.

Source: Johns Hopkins University

Altron HealthTech Set to Pilot South Africa’s First Oncology Companion App

ThriveLink to connect patients, doctors, caregivers, and medical schemes in a seamless digital platform

The last thing someone dealing with a life-threatening disease wants is the pain of endless administrative paperwork and confusion that arises when aspects of their care are not easily coordinated. Altron HealthTech is set to pilot a solution designed to minimise these burdens by integrating various aspects of care management into one solution.   

The company announced today that it will soon begin piloting ThriveLink, South Africa’s first platform to connect patients, doctors, caregivers, and medical schemes in one integrated digital space. The oncology companion app is designed to help cancer patients flourish during a trying time by providing seamless care coordination, access to key information and educational content, and the removal of administrative obstacles.

“We’ve built this tool with the ultimate goal of making life easier for cancer patients and empowering them throughout their treatment journey,” says Altron HealthTech MD Leslie Moodley. “They’ll receive appointment tracking, medication reminders, and secure communication with their care team – all customised for their unique treatment plan in one digital space – so they can focus on what matters most: their health and wellbeing.”

Addressing a growing crisis

The development team was inspired to create ThriveLink after frontline agents logged an alarming increase in cancer diagnoses. Cancer cases in South Africa are projected to nearly double, from 62 000 in 2019 to 121 000 by 2030, according to data compiled by the SA Journal of Oncology, driven by an aging population and increased lifestyle risks.

“We have insight into anonymised and aggregated data, and were shocked at the increase in cancer volumes,” says Moodley. “We realised there was value in developing a tool that could span the entire healthcare value chain and all the various touchpoints, to solve for a very real issue. This insight sparked a critical question: how can we make it easier for oncologists, our key stakeholders, to focus on what matters most – patient care?”

ThriveLink brings together data from specialists, medical aids, pharmacies, and other relevant sources to coordinate care and connect healthcare providers. Beyond appointment tracking and medication reminders, the app offers educational content, emotional support tools, and secure communication channels.

“The solution enables these data points to collaborate in a technical sense to coordinate care,” explains Moodley. “Our response was to build a technology-driven platform that not only streamlines authorisations and treatment protocols but also enables real-time interoperability. This empowers oncologists to coordinate care more efficiently, track treatment pathways, and adapt plans based on patient-specific outcomes. Patients won’t have to worry about burdensome details and will get reminded when it’s time to take their medication or schedule a follow-up.” 

Built on medical expertise and security

The app serves as the vital link in a complex ecosystem, ensuring secure information flow, informed decision-making, and trust at every stage.  

Altron HealthTech consulted widely with oncologists, patients, and other medical professionals before beginning development. A base application was rolled out to specialists about a year ago, and feedback from that pilot informed the expanded platform now ready for patient testing. 

The app has been built on secure, cloud-based software-as-a-service architecture in compliance with the Protection of Personal Information Act and all relevant regulatory requirements. Patients must provide informed consent before signing up. 

Beyond supporting patients directly, ThriveLink is designed to help control healthcare costs. Cancer is among the most expensive diseases to treat, with the Cancer Alliance predicting that it will cost the public sector an additional R50 billion between 2020 and 2030.

“By streamlining processes and integrating claims, authorisations, and clinical data, we remove duplication and costs from the system,” says Moodley. “This can indirectly help keep medical aid premiums down, benefiting all medical scheme patients.” 

Altron HealthTech is in early-stage discussions with medical aid schemes interested in integrating the app into their mobile solutions. 

Virtual Antenatal Care Linked to Poorer Pregnancy Outcomes

Source: Pixabay CC0

Women who receive more virtual antenatal care during their second or third trimesters could experience poorer pregnancy outcomes, including higher risks of preterm birth, Caesarean sections and neonatal intensive care unit admissions, a new study suggests.

Increased virtual antenatal care in later pregnancy was also found to be associated with lower rates of early skin-to-skin contact with the newborn and fewer instances of breastfeeding as the first feed.

Led by King’s College London and published in the American Journal of Obstetrics & Gynecology, the study looked at associations between virtual antenatal care and pregnancy outcomes in more than 34 000 pregnancies from a diverse, South London population, from periods before and during the COVID-19 pandemic.

Women were split into four groups, according to the proportion of virtual antenatal care appointments received during their pregnancy – low and stable virtual antenatal care throughout pregnancy, high first trimester virtual antenatal care, high second trimester virtual antenatal care, and high third trimester virtual antenatal care.

Pregnancy and birth outcome data were obtained from hospital records via the Early Life Cross-Linkage in Research, Born in South London (eLIXIR-BiSL) platform, funded by the UKRI Medical Research Council (MRC).

Analyses of the data revealed that, compared with those who received a low and stable proportion of virtual antenatal care throughout their pregnancy:

  • Women who received a high proportion of virtual antenatal care in their second trimester experienced more premature births (before 37 weeks), labour inductions, breech presentation, and bleeding after birth; and
  • Women who received a high proportion of virtual antenatal care in their third trimester had more premature births (before 37 weeks), elective or emergency Caesarean sections, and neonatal intensive care unit admissions; as well as lower rates of third- or fourth-degree vaginal tears, early skin-to-skin contact with the newborn and fewer instances of breastfeeding as the first feed.

During the COVID-19 pandemic, the use of virtual antenatal care increased, to limit face-to-face contact and prevent spread of the SARS-CoV-2 virus. While research has looked at the experiences of women and healthcare providers receiving and delivering virtual care, fewer studies have focused on the impact of virtual antenatal care on pregnancy outcomes.

Our work adds an important perspective to the growing evidence base on virtual antenatal care, suggesting that the timing of its use during pregnancy may influence pregnancy outcomes.

Dr Katie Dalrymple, Lecturer at King’s and first author of the study

The findings build on an earlier study by the team, which found that virtual maternity care during the COVID-19 pandemic was linked to higher NHS costs – with each 1% increase in virtual antenatal care associated with a £7 increase in maternity costs to the NHS.

In addition to the cost implications of virtual care, the findings from the new study suggest that virtual antenatal care could come with increased risks to mother and baby. The authors conclude that careful consideration may be needed to minimise these risks before using virtual antenatal care in future health system shocks or to replace face-to-face care.

Our study findings suggest the need for careful integration of virtual care in maternity services, to minimise potential risks.

Professor Laura Magee, Professor of Women’s Health at King’s and co-senior author of the paper

Source: King’s College London

Human Instruction with AI Guidance Gives the Best Results in Neurosurgical Training

Study has implications beyond medical education, suggesting other fields could benefit from AI-enhanced training

Artificial intelligence (AI) is becoming a powerful new tool in training and education, including in the field of neurosurgery. Yet a new study suggests that AI tutoring provides better results when paired with human instruction.

Researchers at the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) of McGill University are studying how AI and virtual reality (VR) can improve the training and performance of brain surgeons. They simulate brain surgeries using VR, monitor students’ performance using AI and provide continuous verbal feedback on how students can improve performance and prevent errors. Previous research has shown that an intelligent tutoring system powered by AI developed at the Centre outperformed expert human teachers, but these instructors were not provided with trainee AI performance data.

In their most recent study, published in JAMA Surgery, the researchers recruited 87 medical students from four Quebec medical schools and divided them into three groups: one trained with AI-only verbal feedback, one with expert instructor feedback, and one with expert feedback informed by real-time AI performance data. The team recorded the students’ performance, including how well and how quickly their surgical skills improved while undergoing the different types of training.

They found that students receiving AI-augmented, personalised feedback from a human instructor outperformed both other groups in surgical performance and skill transfer. This group also demonstrated significantly better risk management for bleeding and tissue injury – two critical measures of surgical expertise. The study suggests that while intelligent tutoring systems can provide standardised, data-driven assessments, the integration of human expertise enhances engagement and ensures that feedback is contextualised and adaptive.

“Our findings underscore the importance of human input in AI-driven surgical education,” said lead study author Bianca Giglio. “When expert instructors used AI performance data to deliver tailored, real-time feedback, trainees learned faster and transferred their skills more effectively.”

While this study was specific to neurosurgical training, its findings could carry over to other professions where students must acquire highly technical and complex skills in high-pressure environments.

“AI is not replacing educators – it’s empowering them,” added senior author Dr Rolando Del Maestro, a neurosurgeon and current Director of the Centre. “By merging AI’s analytical power with the critical guidance of experienced instructors, we are moving closer to creating the ‘Intelligent Operating Room’ of the future capable of assessing and training learners while minimising errors during human surgical procedures.”

Source: McGill University

Doctors’ Human Touch Still Needed in the AI Healthcare Revolution

AI-based medicine will revolutionise care including for Alzheimer’s and diabetes, predicts a technology expert, but it must be accessible to all patients

AI image created with Gencraft

Healing with Artificial Intelligence, written by technology expert Daniele Caligiore, uses the latest scientific research to highlight key AI-assisted innovations such as diagnostic imaging and surgical robots.

From exoskeletons that help spinal injury patients walk to algorithms that can predict the onset of dementia years in advance, Caligiore explores what he describes as a ‘revolution’ that will change healthcare forever.

Economically, the market for AI in healthcare is experiencing rapid growth, with forecasts predicting an increase in value from around USD 11 billion in 2021 to nearly USD 188 billion by 2030, reflecting an annual growth rate of 37%. AI is already being used in some countries, for example to search through genetic data for disease markers, or to assist with scheduling and other administrative tasks – and this trend is set to continue.
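The quoted forecast implies a compound annual growth rate that can be sanity-checked in a few lines. The market figures are the article's; treating 2021–2030 as nine compounding years is an assumption about how the forecast counts periods:

```python
# Sanity-check the implied compound annual growth rate (CAGR) of the
# healthcare-AI market forecast quoted above.
start_value = 11e9   # USD, 2021 (figure from the article)
end_value = 188e9    # USD, 2030 (figure from the article)
years = 2030 - 2021  # nine compounding periods (an assumption)

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 37%, matching the article
```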

However, the author caveats his predictions of progress by warning these technologies may widen existing inequality. Caligiore suggests that AI-based medicine must be available to all people, regardless of where they live or how much they earn, and that people from low-income nations must not be excluded from cutting-edge care which wealthier nations can access.

Other challenges posed by the advancement of AI in healthcare include deciding who takes responsibility for treatment decisions, especially when a procedure goes wrong. This is a particular challenge given widespread concerns around explainable AI: many advanced AI systems operate as black boxes, making decisions through complex processes that even their creators cannot fully understand or explain.

Caligiore says AI should support doctors and patients, not replace doctors who, says the author, have a ‘unique ability to offer empathy, understanding, and emotional support’.

“AI should be viewed as a tool, not a colleague, and it should always be seen as a support, never a replacement,” writes Caligiore.

“It is important to find the right balance in using AI tools, both for doctors and patients. Patients can use AI to learn more about their health, such as what diseases may be associated with their symptoms or what lifestyle changes may help prevent illness. However, this does not mean AI should replace doctors.”

Despite his warnings, Caligiore is largely optimistic about the impact of AI in healthcare: “Like a microscope detecting damaged cells or a map highlighting brain activity during specific tasks, AI can uncover valuable insights that might go unnoticed, aiding in more accurate and personalized diagnoses and treatments,” he says.

In any case, Caligiore predicts the healthcare landscape will look ‘dramatically different’ in a few years, with technology acting as a ‘magnifying glass for medicine’ to enable doctors to observe the human body with greater precision and detail.

Examples of where AI will make a profound impact in healthcare include regenerative medicine, where gene and stem cell therapies repair damaged cells and organs. Spinal cord injury patients are among those who could benefit.

AI may also enable personalised therapies, suggesting treatments tailored to specific individuals, often based on their unique genetic profile. Studies are under way into targeting different tremor types in Parkinson’s disease and breast cancer subtypes.

The convergence of regenerative medicine, genetically modified organisms (GMOs), and AI is the next frontier in medicine, Caligiore suggests. GMOs – living organisms whose genetic material has been altered through genetic engineering techniques – have already paved the way for personalised gene therapies.

Blending real and virtual worlds may also prove useful to healthcare: in the ‘metaverse’, patients can take part in group therapy through avatars, while ‘digital twins’ – AI simulations of a patient’s body and brain – let doctors identify underlying causes of disease and simulate the effects of various therapies, helping them make more informed decisions for specific patients.

These advances and others will reshape the doctor-patient relationship, according to Healing with Artificial Intelligence, but the author suggests the key is for patients and clinicians to keep a critical mindset about AI.

Caligiore warns that the role of physicians will evolve as AI becomes more integrated into healthcare, but the need for human interaction will remain ‘central to patient care’.

“While healthcare professionals must develop technical skills to use AI tools, they should also nurture and enhance qualities that AI cannot replicate – such as soft skills and emotional intelligence. These human traits are essential for introducing an emotional component into work environments,” he explains.

Source: Taylor & Francis Group

New Research Finds Surprises in ChatGPT’s Diagnosis of Medical Symptoms

The popular large language model performs better than expected but still has some knowledge gaps – and hallucinations

When people worry that they’re getting sick, they are increasingly turning to generative artificial intelligence like ChatGPT for a diagnosis. But how accurate are the answers that AI gives out?

Research recently published in the journal iScience puts ChatGPT and its large language models to the test, with a few surprising conclusions.

Ahmed Abdeen Hamed – a research fellow for the Thomas J. Watson College of Engineering and Applied Science’s School of Systems Science and Industrial Engineering at Binghamton University – led the study, with collaborators from AGH University of Krakow, Poland; Howard University; and the University of Vermont.

As part of Professor Luis M. Rocha’s Complex Adaptive Systems and Computational Intelligence Lab, Hamed developed a machine-learning algorithm last year that he calls xFakeSci. It can detect up to 94% of bogus scientific papers – nearly twice as successfully as more common data-mining techniques. He sees this new research as the next step to verify the biomedical generative capabilities of large language models.

“People talk to ChatGPT all the time these days, and they say: ‘I have these symptoms. Do I have cancer? Do I have cardiac arrest? Should I be getting treatment?’” Hamed said. “It can be a very dangerous business, so we wanted to see what would happen if we asked these questions, what sort of answers we got and how these answers could be verified from the biomedical literature.”

The researchers tested ChatGPT for disease terms and three types of associations: drug names, genetics and symptoms. The AI showed high accuracy in identifying disease terms (88–97%), drug names (90–91%) and genetic information (88–98%). Hamed admitted he thought it would be “at most 25% accuracy.”

“The exciting result was ChatGPT said cancer is a disease, hypertension is a disease, fever is a symptom, Remdesivir is a drug and BRCA is a gene related to breast cancer,” he said. “Incredible, absolutely incredible!”

Symptom identification, however, scored lower (49–61%), and the reason may be how the large language models are trained. Doctors and researchers use biomedical ontologies to define and organise terms and relationships for consistent data representation and knowledge-sharing, but users enter more informal descriptions.

“ChatGPT uses more of a friendly and social language, because it’s supposed to be communicating with average people. In medical literature, people use proper names,” Hamed said. “The LLM is apparently trying to simplify the definition of these symptoms, because there is a lot of traffic asking such questions, so it started to minimize the formalities of medical language to appeal to those users.”

One puzzling result stood out. The National Institutes of Health maintains a database called GenBank, which gives an accession number to every identified DNA sequence. It’s usually a combination of letters and numbers. For example, the designation for the Breast Cancer 1 gene (BRCA1) is NM_007294.4.

When asked for these numbers as part of the genetic information testing, ChatGPT just made them up – a phenomenon known as “hallucinating.” Hamed sees this as a major failing amid so many other positive results.
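Hallucinated accession numbers of this kind could, in principle, be flagged by a simple format check before any database lookup. Below is a minimal sketch in Python; the regular expression is illustrative only, covering RefSeq-style accessions like the NM_007294.4 example above, and real GenBank/RefSeq formats include many more prefixes and variants:

```python
import re

# Rough syntactic check for RefSeq-style accession numbers such as
# NM_007294.4. This validates only the format -- not whether the
# accession actually exists in the NIH databases -- and the pattern is
# deliberately simplified for illustration.
ACCESSION_RE = re.compile(r"^[A-Z]{2}_\d{6,9}\.\d+$")

def looks_like_refseq_accession(candidate: str) -> bool:
    return ACCESSION_RE.match(candidate) is not None

print(looks_like_refseq_accession("NM_007294.4"))    # True
print(looks_like_refseq_accession("NM_BRCA1-gene"))  # False: malformed
```

A real verification pipeline would follow a passing format check with a lookup against the live database.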

“Maybe there is an opportunity here that we can start introducing these biomedical ontologies to the LLMs to provide much higher accuracy, get rid of all the hallucinations and make these tools into something amazing,” he said.

Hamed’s interest in LLMs began in 2023, when he discovered ChatGPT and heard about the issues regarding fact-checking. His goal is to expose the flaws so data scientists can adjust the models as needed and make them better.

“If I am analysing knowledge, I want to make sure that I remove anything that may seem fishy before I build my theories and make something that is not accurate,” he said.

Source: Binghamton University

New AI–based Test Detects Early Signs of Osteoporosis from X-ray Images

Photo by Cottonbro on Pexels

Investigators have developed an artificial intelligence-assisted diagnostic system that can estimate bone mineral density in both the lumbar spine and the femur of the upper leg, based on X-ray images. The advance is described in a study published in the Journal of Orthopaedic Research.

A total of 1454 X-ray images were analysed using the scientists’ system. For patients with bone density loss, or osteopenia, sensitivity was 86.4% for the lumbar spine and 84.1% for the femur; the respective specificities were 80.4% and 76.3%. (Sensitivity reflects the test’s ability to correctly identify people with osteopenia, whereas specificity reflects its ability to correctly identify those without it.) The test also had high sensitivity and specificity for categorising patients with and without osteoporosis.
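For readers unfamiliar with the two metrics, they fall straight out of confusion-matrix counts. A short illustration follows; the counts are hypothetical, chosen only to reproduce the lumbar-spine percentages reported above:

```python
# Sensitivity and specificity from confusion-matrix counts.
# The tp/fn/tn/fp values below are hypothetical, not from the study.
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: flagged cases among all true cases."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: cleared non-cases among all true non-cases."""
    return tn / (tn + fp)

# E.g. 864 of 1000 osteopenia scans flagged, 804 of 1000 healthy scans cleared:
print(f"sensitivity: {sensitivity(864, 136):.1%}")  # 86.4%
print(f"specificity: {specificity(804, 196):.1%}")  # 80.4%
```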

“Bone mineral density measurement is essential for screening and diagnosing osteoporosis, but limited access to diagnostic equipment means that millions of people worldwide may remain undiagnosed,” said corresponding author Toru Moro, MD, PhD, of the University of Tokyo. “This AI system has the potential to transform routine clinical X-rays into a powerful tool for opportunistic screening, enabling earlier, broader, and more efficient detection of osteoporosis.”

Source: Wiley

Scientists Argue for More FDA Oversight of Healthcare AI Tools 

New paper critically examines the US Food and Drug Administration’s regulatory framework for artificial intelligence-powered healthcare products, highlighting gaps in safety evaluations, post-market surveillance, and ethical considerations.

An agile, transparent, and ethics-driven oversight system is needed for the U.S. Food and Drug Administration (FDA) to balance innovation with patient safety when it comes to artificial intelligence-driven medical technologies. That is the takeaway from a new report issued to the FDA, published this week in the open-access journal PLOS Medicine by Leo Celi of the Massachusetts Institute of Technology, and colleagues.

Artificial intelligence is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they’ve been approved, meaning their behaviour can shift in unpredictable ways once they’re in use.

In the new paper, Celi and his colleagues argue that the FDA’s current system is not set up to keep tabs on these post-approval changes. Their analysis calls for stronger rules around transparency and bias, especially to protect vulnerable populations. If an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others. The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.

“This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centred, risk-aware, and continuously adaptive regulatory approach – one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities,” the authors say.

Provided by PLOS

The Evolution of AI in Patient Consent is a Data-Driven Future

Henry Adams, Country Manager, InterSystems South Africa

One area undergoing significant evolution in the healthcare industry is the process of obtaining patient consent. It is a contentious but absolutely necessary topic, and one that must evolve if we are to bring patient care into the 21st century.

Traditionally, patient consent has involved detailed discussions between healthcare providers and patients, ensuring that individuals are fully informed before agreeing to medical procedures or participation in research. However, as artificial intelligence (AI) becomes more prevalent, the mechanisms and ethics surrounding patient consent are being re-examined.

The current state of patient consent

Informed consent is a cornerstone of ethical medical practice, granting patients autonomy over their healthcare decisions. This process typically requires clear communication about the nature of the procedure, potential risks and benefits, and any alternative options.

In the context of AI, particularly with the use of big data and machine learning algorithms, the consent process becomes more complex. Patients must understand not only how their data will be used but also the implications of AI-driven analyses, which may not be entirely transparent.

The rise of dynamic consent models

To address these complexities, the concept of dynamic consent has emerged. Dynamic consent utilises digital platforms to facilitate ongoing, interactive communication between patients and healthcare providers.

This approach allows patients to modify their consent preferences in real-time, reflecting changes in their health status or personal views. Such models aim to enhance patient engagement and trust, providing a more nuanced and flexible framework for consent in the digital age.

AI has the potential to revolutionise the consent process by personalising and simplifying information delivery. Intelligent systems can tailor consent documents to individual patients, highlighting the most pertinent information and using language that aligns with the patient’s comprehension level. In addition, AI-powered chatbots can engage in real-time dialogues, answering patient questions and clarifying uncertainties, enhancing understanding and facilitating informed decision-making.

Data privacy, ethical and security considerations

The integration of AI into patient consent processes necessitates an increased attention to data privacy and security. As AI systems require access to vast amounts of personal health data, robust additional safeguards must be in place to protect against unauthorized access and breaches. Ensuring that AI algorithms operate transparently and that patients are aware of how their data is being used is critical to maintaining trust in the healthcare system, and AI in particular.

While AI can augment the consent process, the ethical implications of its use must be carefully considered. The potential for AI to inadvertently introduce biases or operate without full transparency poses challenges to informed consent. Therefore, human oversight remains indispensable.

Healthcare professionals must work alongside AI systems, the “human in the loop”, to ensure that the technology serves as a tool to enhance, rather than replace, the human touch in patient interactions.

The next 5-10 years

Over the next decade, AI will become increasingly integrated into patient consent processes. Experts predict advancements in natural language processing and machine learning will lead to more sophisticated and user-friendly consent platforms. However, the centrality of human judgment in medical decision-making is unlikely to diminish. AI can provide valuable support, but the nuanced understanding and empathy of healthcare professionals will remain vital.

Taking all of this into account, the evolution of AI in patient consent processes offers promising avenues for enhancing patient autonomy and streamlining healthcare operations. By leveraging AI responsibly, healthcare institutions can create more personalised, efficient, and secure consent experiences.

Nonetheless, it is imperative to balance technological innovation with ethical considerations, ensuring that human judgment continues to play a pivotal role in medical decision-making. As we navigate this new world, a collaborative approach that integrates AI capabilities with human expertise will be essential in shaping the future of patient consent. And for healthcare in South Africa, this is going to have to start with education.

A New Way of Visualising BP Data to Better Manage Hypertension

Photo by National Cancer Institute on Unsplash

If a picture is worth a thousand words, how much is a graph worth? For doctors trying to determine whether a patient’s blood pressure is within normal range, the answer may depend on the type of graph they’re looking at.

A new study from the University of Missouri highlights how different graph formats can affect clinical decision-making. Because blood pressure fluctuates moment to moment, day to day, it can be tricky for doctors to accurately assess it.

“Sometimes a patient’s blood pressure is high at the doctor’s office but normal at home, a condition called white coat hypertension,” said Victoria Shaffer, a psychology professor in the College of Arts and Science and lead author of the study published in the Journal of General Internal Medicine. “There are some estimates that 10% to 20% of the high blood pressure that gets diagnosed in the clinic is actually controlled – it’s just white coat hypertension – and if you take those same people’s blood pressure at home, it is really controlled.”

In the study, Shaffer and the team showed 57 doctors how a hypothetical patient’s blood pressure data would change over time using two different types of graphs. One, a raw graph, plotted the actual readings, with their peaks and valleys; the other was a new visual tool they created: a smoothed graph that averages out fluctuations in the data.

When the blood pressure of the patient was under control but had a lot of fluctuation, the doctors were more likely to accurately assess the patient’s health using the new smoothed graph compared to the raw graph.

“Raw data can be visually noisy and hard to interpret because it is easy to get distracted by outliers in the data,” Shaffer said. “At the end of the day, patients and their doctors just want to know if blood pressure is under control, and this new smoothed graph can be an additional tool to make it easier and faster for busy doctors to accurately assess that.”
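The study does not specify which smoothing method the tool uses, but a trailing moving average is one common way to damp the peaks and valleys the researchers describe. The sketch below is a hypothetical illustration of that idea, with made-up home systolic readings; the window size and data are assumptions, not details from the paper.

```python
def rolling_mean(readings, window=7):
    """Smooth a series of readings with a trailing moving average.

    Each output point is the mean of the current reading and up to
    `window - 1` readings before it, so early points use shorter windows.
    """
    smoothed = []
    for i in range(len(readings)):
        start = max(0, i - window + 1)
        chunk = readings[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Two weeks of hypothetical home systolic readings (mmHg): noisy day to
# day, but hovering around a controlled level in the high 120s.
systolic = [124, 142, 118, 131, 126, 139, 121,
            133, 125, 144, 119, 128, 135, 122]

smoothed = rolling_mean(systolic, window=7)
print([round(v) for v in smoothed])
```

The smoothed series stays close to the underlying average, so an isolated 144 mmHg spike no longer dominates the picture, which is the effect the researchers credit for the more accurate assessments.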

This proof-of-concept study is the foundation for Shaffer’s ongoing research with Richelle Koopman, a professor in the School of Medicine, which includes working with Vanderbilt University and Oregon Health & Science University to determine whether the new smoothed graph can one day be shown to patients taking their own blood pressure at home. The research team is working to get the technology integrated with HIPAA-compliant electronic health records that patients and their care team have access to.

This could alleviate pressure on the health care system by potentially reducing the need for in-person visits when blood pressure is under control, while reducing the risk of false positives that may lead to over-treatment.

“There are some people who are being over-treated with unnecessary blood pressure medication that can make them dizzy and lower their heart rate,” Shaffer said. “This is particularly risky for older adults, who are more at risk for falling. Hopefully, this work can help identify those who are being over-treated.”

The findings were not particularly surprising to Shaffer.

“As a psychologist, I know that, as humans, we have these biases that underlie a lot of our judgments and decisions,” Shaffer said. “We tend to be visually drawn to extreme cases and perceive extreme cases as threats. It’s hard to ignore, whether you’re a patient or a provider. We are all humans.”

Given the increasing popularity of health informatics and smart wearable devices that track vital signs, the smoothed graphs could one day be applied to interpreting other health metrics.

“We have access to all this data now like never before, but how do we make use of it in a meaningful way, so we are not constantly overwhelming people?” Shaffer said. “With better visualisation tools, we can give people better context for their health information and help them take action when needed.”

Source: EurekAlert!