Tag: artificial intelligence

Admin and Ethics should be the Basis of Your Healthcare AI Strategy

Technology continues to play a strong role in shaping healthcare. In 2023, the focus was on how Artificial Intelligence (AI) became significantly entrenched in patient records, diagnosis and care. Now, in 2024, the focus is on the ethical aspects of AI. Many organisations, including practitioner groups, hospitals and medical associations, are putting together AI Codes of Conduct, and new legislation is planned in countries such as the USA.

The entire patient journey has benefited from the use of AI in tangible ways. Online bookings, the sharing of information with electronic health records, keyword diagnosis, the sharing of visual scans, e-scripts, easy claims, SMSs and billing are all examples of how software systems are incorporated into practices to create a streamlined experience for both patient and doctor. But although 75% of medical professionals agree on the transformative potential of AI, only 6% have implemented an AI strategy.

Strategies need to include ethical considerations

CompuGroup Medical South Africa (CGM SA), a leading international MedTech company that has spent over 20 years designing software solutions for the healthcare industry, has identified one main area that constantly comes up for ethical consideration.

This is the sharing of patient electronic health records, or EHRs. On the one hand, the wealth of information provided in each EHR – from a patient’s medical history, demographics, laboratory test results over time, prescribed medicines and history of medical procedures to X-rays and medical allergies – offers endless opportunities for real-time patient care. On the other hand, there seems to be a basic mistrust of how these records will be shared and stored: no one wants their personal medical information to end up on the internet.

But there’s also the philosophical view that although you might not want your information to be public record, it still has the ability to benefit the care of thousands of people. If we want a learning AI system that adapts as we do, and a decision-making support system that is informed by past experience, then the sharing of data should be viewed as a tool rather than a privacy barrier.

Admin can cause burnout

Based on their interactions with professionals, CGM has informally noted that healthcare practices spend 73% of their time dealing with administrative tasks. This can be broken down into 38% focusing on EHR documentation and review, 19% related to insurance and billing, 11% on tests, medications and other orders and the final 6% on clinical planning and logistics.

Even during the consultation, doctors can spend up to 40% of their time taking clinical notes. Besides the extra burden that this places on healthcare practices, it means less attention is paid to the patient, and it still leaves 1-2 hours of admin in the evenings. Admin is the number one cause of burnout in clinicians, and too much screen time during interactions is the number one complaint from patients.

The solution

The ability for medical practitioners to implement valuable and effective advanced technical software, such as Autoscriber, will assist with time saving, data quality and overall job satisfaction. Autoscriber is an AI engine designed to ease the effort of creating clinical notes by turning the consultation between patient and doctor into a structured summary that includes ICD-10 codes, the standard method of disease classification used by South African medical professionals.

It identifies clinical facts in real time, including medications and symptoms. It then orders and summarises the data in a format ready for import into the EHR, creating a more detailed and standardised report on each patient encounter, allowing for a more holistic patient outcome. In essence, with the introduction of Autoscriber into the South African market, CGM seeks to aid practitioners in swiftly creating precise and efficient clinical records, saving them from extensive after-hours commitments.
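As a rough illustration of the kind of pipeline being described, here is a minimal sketch of turning a consultation transcript into an ICD-10-tagged structured summary. The keyword lookup, function names and sample transcript are invented for illustration and are not Autoscriber’s actual implementation.

```python
# Illustrative sketch only: a toy transcript summariser that tags
# recognised terms with ICD-10 codes. Autoscriber's real engine uses
# speech recognition and large-scale clinical NLP; the lookup table,
# names and logic below are invented for illustration.

# Tiny keyword -> ICD-10 lookup (a real system maps thousands of
# clinical concepts and handles context and negation).
ICD10_LOOKUP = {
    "migraine": "G43.9",        # Migraine, unspecified
    "hypertension": "I10",      # Essential (primary) hypertension
    "type 2 diabetes": "E11.9", # Type 2 diabetes without complications
}

def extract_clinical_facts(transcript: str) -> list[dict]:
    """Return recognised clinical facts with their ICD-10 codes."""
    text = transcript.lower()
    return [{"finding": term, "icd10": code}
            for term, code in ICD10_LOOKUP.items() if term in text]

def build_structured_note(transcript: str) -> dict:
    """Order extracted facts into an EHR-ready structured summary."""
    facts = extract_clinical_facts(transcript)
    return {"assessment": facts,
            "codes": sorted({f["icd10"] for f in facts})}

if __name__ == "__main__":
    consult = ("Patient reports recurring migraine episodes. "
               "Known hypertension, well controlled on medication.")
    print(build_structured_note(consult))
    # {'assessment': [...], 'codes': ['G43.9', 'I10']}
```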

Dilip Naran, VP of Product Architecture at CGM SA explains: “It is clear that AI will not replace healthcare professionals, but it will augment their capabilities to provide superior patient care. Ethical considerations are important but should not override patient care or safety. The Autoscriber solution provides full control to the HCP to use, edit or discard the transcribed note ensuring that these notes are comprehensive, attributable and contemporaneous.”

AI-based App can Help Physicians Diagnose Melanomas

3D structure of a melanoma cell derived by ion abrasion scanning electron microscopy. Credit: Sriram Subramaniam/ National Cancer Institute

A mobile app that uses artificial intelligence (AI) to analyse images of suspected skin lesions can diagnose melanoma with very high precision. This is shown in a study led from Linköping University in Sweden, where the app was tested in primary care. The results have been published in the British Journal of Dermatology.

“Our study is the first in the world to test an AI-based mobile app for melanoma in primary care in this way. A great many studies have been done on previously collected images of skin lesions, and those studies largely agree that AI is good at distinguishing dangerous lesions from harmless ones. We were quite surprised by the fact that no one had done a study on primary care patients,” says Magnus Falk, senior associate professor at the Department of Health, Medicine and Caring Sciences at Linköping University and specialist in general practice at Region Östergötland, who led the current study.

Melanoma can be difficult to differentiate from other skin changes, even for experienced physicians. However, it is important to detect melanoma as early as possible, as it is a serious type of skin cancer.

There is currently no established AI-based support for assessing skin lesions in Swedish healthcare.

“Primary care physicians encounter many skin lesions every day and with limited resources need to make decisions about treatment in cases of suspected skin melanoma. This often results in an abundance of referrals to specialists or the removal of skin lesions, which in the majority of cases turn out to be harmless. We wanted to see if the AI support tool in the app could perform better than primary care physicians when it comes to identifying pigmented skin lesions as dangerous or not, in comparison with the final diagnosis,” says Panos Papachristou, researcher affiliated with Karolinska Institutet and specialist in general practice, main author of the study and co-founder of the company that developed the app.

And the results are promising.

“First of all, the app missed no melanoma. This disease is so dangerous that it’s essential not to miss it. But it’s almost equally important that the AI decision support tool could acquit many suspected skin lesions and determine that they were harmless,” says Magnus Falk.

In the study, primary care physicians followed the usual procedure for diagnosing suspected skin tumours. If the physicians suspected melanoma, they either referred the patient to a dermatologist for diagnosis, or the skin lesion was cut away for tissue analysis and diagnosis.

Only after the physician decided how to handle the suspected melanoma did they use the AI-based app. This involves the physician taking a picture of the skin lesion with a mobile phone equipped with an enlargement lens called a dermatoscope. The app analyses the image and provides guidance on whether or not the skin lesion appears to be melanoma.

To find out how well the AI-based app worked as a decision support tool, the researchers compared the app’s response to the diagnoses made by the regular diagnostic procedure.

Of the more than 250 skin lesions examined, physicians found 11 melanomas and 10 precursors of cancer, known as in situ melanoma. The app found all the melanomas, and missed only one precursor. In cases where the app responded that a suspected lesion was not a melanoma, including in situ melanoma, there was a 99.5% probability that this was correct.
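As a back-of-the-envelope reading of those figures (the article does not give exact denominators, so the totals below are assumptions for illustration), the headline numbers can be reproduced like this:

```python
# Back-of-the-envelope check of the reported figures. The article gives
# 11 melanomas (all found), 10 in situ melanomas (one missed), and
# "more than 250" lesions; an exact total is not stated.
melanomas = 11        # invasive melanomas, all detected by the app
in_situ = 10          # precursors (in situ melanoma); one was missed
missed = 1

# Sensitivity for melanoma including in situ disease:
sensitivity = (melanomas + in_situ - missed) / (melanomas + in_situ)
print(f"Sensitivity: {sensitivity:.1%}")          # 95.2%

# The 99.5% figure is a negative predictive value: of all lesions the
# app called "not melanoma", 99.5% truly were not. With one missed
# case, that implies roughly 1 / (1 - 0.995) = 200 negative calls,
# consistent with a study of just over 250 lesions.
implied_negative_calls = missed / (1 - 0.995)
print(f"Implied negative calls: {implied_negative_calls:.0f}")  # 200
```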

“It seems that this method could be useful. But in this study, physicians weren’t allowed to let their decision be influenced by the app’s response, so we don’t know what happens in practice if you use an AI-based decision support tool. So even if this is a very positive result, there is uncertainty and we need to continue to evaluate the usefulness of this tool with scientific studies,” says Magnus Falk.

The researchers now plan to proceed with a large follow-up primary care study in several countries, where use of the app as an active decision support tool will be compared to not using it at all.

Source: Linköping University

Is AI a Help or Hindrance to Radiologists? It’s Down to the Doctor

New research shows AI isn’t always a help for radiologists

Photo by Anna Shvets

One of the most touted promises of medical artificial intelligence tools is their ability to augment human clinicians’ performance by helping them interpret images such as X-rays and CT scans with greater precision to make more accurate diagnoses.

But the benefits of using AI tools on image interpretation appear to vary from clinician to clinician, according to new research led by investigators at Harvard Medical School, working with colleagues at MIT and Stanford.

The study findings suggest that individual clinician differences shape the interaction between human and machine in critical ways that researchers do not yet fully understand. The analysis, published in Nature Medicine, is based on data from an earlier working paper by the same research group released by the National Bureau of Economic Research.

In some instances, the research showed, use of AI can interfere with a radiologist’s performance and reduce the accuracy of their interpretation.

“We find that different radiologists, indeed, react differently to AI assistance – some are helped while others are hurt by it,” said co-senior author Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.

“What this means is that we should not look at radiologists as a uniform population and consider just the ‘average’ effect of AI on their performance,” he said. “To maximize benefits and minimize harm, we need to personalize assistive AI systems.”

The findings underscore the importance of carefully calibrated implementation of AI into clinical practice, but they should in no way discourage the adoption of AI in radiologists’ offices and clinics, the researchers said.

Instead, the results should signal the need to better understand how humans and AI interact and to design carefully calibrated approaches that boost human performance rather than hurt it.

“Clinicians have different levels of expertise, experience, and decision-making styles, so ensuring that AI reflects this diversity is critical for targeted implementation,” said Feiyang “Kathy” Yu, who conducted the work while at the Rajpurkar lab and who shares co-first authorship on the paper with Alex Moehring of the MIT Sloan School of Management.

“Individual factors and variation would be key in ensuring that AI advances rather than interferes with performance and, ultimately, with diagnosis,” Yu said.

AI tools affected different radiologists differently

While previous research has shown that AI assistants can, indeed, boost radiologists’ diagnostic performance, these studies have looked at radiologists as a whole without accounting for variability from radiologist to radiologist.

In contrast, the new study looks at how individual clinician factors – area of specialty, years of practice, prior use of AI tools – come into play in human-AI collaboration.

The researchers examined how AI tools affected the performance of 140 radiologists on 15 X-ray diagnostic tasks – how reliably the radiologists were able to spot telltale features on an image and make an accurate diagnosis. The analysis involved 324 patient cases with 15 pathologies: abnormal conditions captured on X-rays of the chest.

To determine how AI affected doctors’ ability to spot and correctly identify problems, the researchers used advanced computational methods that captured the magnitude of change in performance when using AI and when not using it.
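The study’s methods are more sophisticated, but the core quantity – each individual radiologist’s change in performance with and without AI – can be sketched as follows (the reader IDs and accuracy numbers are invented for illustration):

```python
# Minimal sketch of the per-reader idea: compare each radiologist's
# accuracy with and without AI assistance instead of only the average.
# Reader IDs and numbers are invented; the study used richer metrics.
from statistics import mean

# reader -> (accuracy without AI, accuracy with AI)
readings = {
    "r01": (0.78, 0.84),  # helped by AI
    "r02": (0.81, 0.74),  # hurt by AI
    "r03": (0.69, 0.70),  # essentially unchanged
}

deltas = {r: with_ai - without
          for r, (without, with_ai) in readings.items()}

print(f"Average effect: {mean(deltas.values()):+.3f}")
for reader, delta in sorted(deltas.items(), key=lambda kv: kv[1]):
    print(f"{reader}: {delta:+.2f}")
# The average hides the spread: some readers gain while others lose,
# which is precisely the study's point.
```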

The effect of AI assistance was inconsistent and varied across radiologists, with some radiologists’ performance improving with AI and others’ worsening.

AI tools influenced human performance unpredictably

AI’s effects on human radiologists’ performance varied in often surprising ways.

For instance, contrary to what the researchers expected, factors such as how many years of experience a radiologist had, whether they specialised in thoracic (chest) radiology, and whether they had used AI readers before did not reliably predict how an AI tool would affect a doctor’s performance.

Another finding that challenged the prevailing wisdom: Clinicians who had low performance at baseline did not benefit consistently from AI assistance. Some benefited more, some less, and some none at all. Overall, however, lower-performing radiologists at baseline had lower performance with or without AI. The same was true among radiologists who performed better at baseline. They performed consistently well, overall, with or without AI.

Then came a not-so-surprising finding: More accurate AI tools boosted radiologists’ performance, while poorly performing AI tools diminished the diagnostic accuracy of human clinicians.

While the analysis was not done in a way that allowed researchers to determine why this happened, the finding points to the importance of testing and validating AI tool performance before clinical deployment, the researchers said. Such pre-testing could ensure that inferior AI doesn’t interfere with human clinicians’ performance and, therefore, patient care.

What do these findings mean for the future of AI in the clinic?

The researchers cautioned that their findings do not provide an explanation for why and how AI tools seem to affect performance across human clinicians differently, but note that understanding why would be critical to ensuring that AI radiology tools augment human performance rather than hurt it.

To that end, the team noted, AI developers should work with physicians who use their tools to understand and define the precise factors that come into play in the human-AI interaction.

And, the researchers added, the radiologist-AI interaction should be tested in experimental settings that mimic real-world scenarios and reflect the actual patient population for which the tools are designed.

Apart from improving the accuracy of the AI tools, it’s also important to train radiologists to detect inaccurate AI predictions and to question an AI tool’s diagnostic call, the research team said. To achieve that, AI developers should ensure that they design AI models that can “explain” their decisions.

“Our research reveals the nuanced and complex nature of machine-human interaction,” said study co-senior author Nikhil Agarwal, professor of economics at MIT. “It highlights the need to understand the multitude of factors involved in this interplay and how they influence the ultimate diagnosis and care of patients.”

Source: Harvard Medical School

Getting the Most from AI in MedTech Takes Data Know-How

As a leader in medical technology innovation and a pioneer in healthcare data platform development, InterSystems has drawn pivotal insights from its extensive experience in digital health solutions. That experience points to the need to give AI a strong foundation.

We understand the importance of leveraging AI to drive transformative change in healthcare. Our latest white paper, “Getting the Most from AI in MedTech Takes Data Know-How,” dives into the challenges and opportunities facing MedTech companies venturing into the realm of AI. From data cleanliness to privacy and security considerations, we address key issues that MedTech companies must navigate to succeed in today’s rapidly evolving healthcare landscape.

AI in MedTech Takes Data Know-How

The promise of AI in revolutionising MedTech is undeniable. AI in varying forms and degrees is forecast to save hundreds of thousands of lives and billions of dollars a year. But here’s the catch: AI models are only as good as the data they’re built on. An AI application can sift through large amounts of data from various Electronic Health Record (EHR) environments and legacy systems and identify patterns within the scope of its model, but it can’t identify data that exists outside of those boundaries.

If one asks, “What risk factors does the patient have for stroke?”, AI can only answer based on the information that’s there. Sometimes things get lost in translation, and that’s why interoperability – the ability to exchange information in a way that ensures the sender and receiver understand the data the same way – is crucial.

InterSystems: Your Data Sherpa

Ever wondered why some AI models in MedTech fall short? It’s all about the data. MedTech companies can’t just lean on the standard they currently use; they should consider all the standards in which relevant data is captured in the market, or build on a platform that does.

With InterSystems by your side, you gain access to a treasure trove of healthcare data expertise. One of the benefits of our business is that it’s much broader than a single EHR. This means providing software solutions built on standards like HL7® FHIR® (Fast Healthcare Interoperability Resources), offering a comprehensive view of patient data, accelerating development timelines, and delivering tangible results that showcase the value of your innovations.
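For readers unfamiliar with FHIR, the standard exposes clinical data as uniform REST resources, so the same client code works against any conforming server. A minimal sketch of reading one patient record might look like this (the server URL and patient ID are placeholders, and this is generic FHIR usage rather than InterSystems-specific code):

```python
# Minimal FHIR read: fetch one Patient resource as JSON over REST.
# The base URL and patient ID are placeholders; any FHIR R4 server
# exposes the same /Patient/{id} shape, which is the point of the
# standard. This is generic FHIR usage, not InterSystems-specific.
import requests

BASE_URL = "https://fhir.example.org/r4"  # placeholder server
PATIENT_ID = "12345"                      # placeholder ID

resp = requests.get(
    f"{BASE_URL}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()

# Every conforming server returns the same resource structure, so
# downstream code does not care which EHR produced the record.
print(patient["resourceType"])            # "Patient"
print(patient.get("birthDate"))
name = patient.get("name", [{}])[0]
print(name.get("family"), name.get("given"))
```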

Clean Data Is a Must

Data cleanliness is key in the world of AI. Pulling data from various sources presents its own set of challenges, from ensuring data cleanliness to reconciling discrepancies and omissions. Raw data is often messy, inconsistent, and filled with gaps like missing labels. If the data fed into an AI model is incomplete and error-ridden, the conclusions drawn from its analysis will be similarly flawed and suspect. Thus, maintaining high standards of data quality is essential to ensure the accuracy and effectiveness of AI-driven insights.
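To make the point concrete, a minimal preprocessing pass over messy tabular health data might look like the sketch below; the column names, unit rules and example rows are invented for illustration:

```python
# Illustrative preprocessing pass over messy tabular health data:
# drop exact duplicates, normalise units, and flag missing labels
# before any model sees the data. Columns and rules are invented.
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "weight": [72.0, 72.0, None, 180.0],
    "weight_unit": ["kg", "kg", "kg", "lb"],  # mixed units
    "diagnosis": ["I10", "I10", None, "E11.9"],
})

df = df.drop_duplicates()                      # remove exact repeats
lb = df["weight_unit"] == "lb"
df.loc[lb, "weight"] = df.loc[lb, "weight"] * 0.4536
df["weight_unit"] = "kg"                       # one canonical unit
df["label_missing"] = df["diagnosis"].isna()   # flag, don't guess

print(df)
# Rows flagged label_missing go to human review rather than training.
```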

Henry Adams, Country Manager, InterSystems South Africa, says: “InterSystems advocates for robust preprocessing, cleaning, and labelling techniques to ensure data quality and integrity. Our platform keeps track of data lineage, simplifies labelling, and aggregates health data into a single, patient-centric model ready for analysis”.

Privacy, Security, and Reliability: The Sweet Success!

Privacy and security are essential across industries, but they are even more critical for MedTech product developers. Handling sensitive patient data necessitates strict adherence to regulations like HIPAA and GDPR to safeguard patient confidentiality and comply with legal requirements. Beyond regulatory compliance, ensuring privacy and security is crucial for maintaining patient safety, preserving reputation and trust, and fostering collaboration within the industry.

To help MedTech companies comply with regulations and safeguard patient data, InterSystems’ platform meets needs across major deployments, such as a nonprofit health data network, and uses techniques like redundant processing and queues built into the connective tissue of its software. Reliable connectivity solutions ensure seamless data exchange, even in the most demanding healthcare environments.

Charting the Course Forward

Are you a MedTech company still struggling to make sense of siloed healthcare data for your AI initiatives? We have the answer: collaboration with the right partner is essential for integrating AI into medical practices. An ideal partner understands the needs around data acquisition, aggregation, cleaning, privacy, and security regulations. “With InterSystems as your partner and by your side, you can navigate the complexities of AI integration and drive transformative innovation in healthcare, making MedTech excellence easier to attain,” concludes Adams.

You can learn more about our support for MedTech innovation at InterSystems.com/MedTech.

When it Comes to Personalised Cancer Treatments, AI is no Match for Human Doctors

Cancer treatment is growing more complex, but so too are the possibilities. After all, the better a tumour’s biology and genetic features are understood, the more treatment approaches there are. To be able to offer patients personalised therapies tailored to their disease, laborious and time-consuming analysis and interpretation of various data is required. In one of many artificial intelligence (AI) projects at Charité – Universitätsmedizin Berlin and Humboldt-Universität zu Berlin, researchers studied whether generative AI tools such as ChatGPT can help with this step.

The crucial factor in the phenomenon of tumour growth is an imbalance of growth-inducing and growth-inhibiting factors, which can result, for example, from changes in oncogenes.

Precision oncology, a specialised field of personalised medicine, leverages this knowledge by using specific treatments such as low-molecular weight inhibitors and antibodies to target and disable hyperactive oncogenes.

The first step in identifying which genetic mutations are potential targets for treatment is to analyse the genetic makeup of the tumour tissue. The molecular variants of the tumour DNA that are necessary for precision diagnosis and treatment are determined. Then the doctors use this information to craft individual treatment recommendations. In especially complex cases, this requires knowledge from various fields of medicine.

At Charité, this is when the “molecular tumour board” (MTB) meets: Experts from the fields of pathology, molecular pathology, oncology, human genetics, and bioinformatics work together to analyse which treatments seem most promising based on the latest studies.

It is a very involved process, ultimately culminating in a personalised treatment recommendation.

Can artificial intelligence help with treatment decisions?

Dr Damian Rieke, a doctor at Charité, and his colleagues wondered whether AI might be able to help at this juncture.

In a study just recently published in the journal JAMA Network Open, they worked with other researchers to examine the possibilities and limitations of large language models such as ChatGPT in automatically scanning scientific literature with an eye to selecting personalised treatments.

AI ‘not even close’

“We prompted the models to identify personalised treatment options for fictitious cancer patients and then compared the results with the recommendations made by experts,” Rieke explains.

His conclusion: “AI models were able to identify personalised treatment options in principle – but they weren’t even close to the abilities of human experts.”

The team created ten molecular tumour profiles of fictitious patients for the experiment.

A human physician specialist and four large language models were then tasked with identifying a personalised treatment option.

These results were presented to the members of the MTB for assessment, without them knowing where each recommendation came from.

Improved AI models hold promise for future uses

Dr Manuela Benary, a bioinformatics specialist, reported: “There were some surprisingly good treatment options identified by AI in isolated cases. But large language models perform much worse than human experts.”

Beyond that, data protection, privacy, and reproducibility pose particular challenges in relation to the use of artificial intelligence with real-world patients, she notes.

Still, Rieke is fundamentally optimistic about the potential uses of AI in medicine: “In the study, we also showed that the performance of AI models is continuing to improve as the models advance. This could mean that AI can provide more support for even complex diagnostic and treatment processes in the future – as long as humans are the ones to check the results generated by AI and have the final say about treatment.”

Source: Charité – Universitätsmedizin Berlin

AI-based CT Scans of the Brain can Nearly Match MRI

Photo by Mart Production on Pexels

A new artificial intelligence (AI)-based method can extract as much information on subtle neurodegenerative changes in the brain from computed tomography (CT) images as is normally obtained with magnetic resonance imaging (MRI). The method, reported in the journal Alzheimer’s & Dementia, could enhance diagnostic support, particularly in primary care, for conditions such as dementia and other brain disorders.

Compared to MRI, which requires powerful superconducting magnets and their associated cryogenic cooling, computed tomography (CT) is a relatively inexpensive and widely available imaging technology. CT is, however, considered inferior to MRI when it comes to reproducing subtle structural changes in the brain or flow changes in the ventricular system. Certain imaging must therefore currently be carried out by specialist departments at larger hospitals equipped with MRI.

AI trained on MRI images

Created with deep learning, a form of AI, the software has been trained to transfer interpretations from MRI images to CT images of the same brains. The new software can provide diagnostic support for radiologists and other professionals who interpret CT images.

“Our method generates diagnostically useful data from routine CT scans that, in some cases, is as good as an MRI scan performed in specialist healthcare,” says Michael Schöll, a professor at Sahlgrenska Academy who led the work involved in the study, carried out in collaboration with researchers at Karolinska Institutet, the National University of Singapore, and Lund University.

“The point is that this simple, quick method can provide much more information from examinations that are already carried out on a routine basis within primary care, but also in certain specialist healthcare investigations. In its initial stage, the method can support dementia diagnosis, however, it is also likely to have other applications within neuroradiology.”

Reliable decision-making support

This is a well-validated clinical application of AI-based algorithms, and has the potential to become a fast and reliable form of decision-making support that effectively reduces the number of false negatives. The researchers believe that this solution can improve diagnostics in primary care, optimising patient flow to specialist care.

“This is a major step forward for imaging diagnosis,” says Meera Srikrishna, a postdoctoral researcher at the University of Gothenburg and lead author of the study.

“It is now possible to measure the size of different structures or regions of the brain in a similar way to advanced analysis of MRI images. The software makes it possible to segment the brain’s constituent parts in the image and to measure its volume, even though the image quality is not as high with CT.”
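At its core, the volumetry described here reduces to counting labelled voxels and multiplying by the volume of a single voxel. A minimal sketch (with an invented label map and voxel spacing) shows the idea:

```python
# Core idea behind the volumetry: once a scan is segmented into
# labelled regions, a region's volume is its voxel count times the
# volume of one voxel. The label map and spacing here are invented.
import numpy as np

# Toy segmentation: 0 = background, 1 = hippocampus, 2 = ventricles
labels = np.zeros((120, 120, 90), dtype=np.uint8)
labels[50:60, 50:62, 40:48] = 1
labels[55:80, 30:45, 30:60] = 2

voxel_mm = (1.0, 1.0, 1.2)                     # voxel spacing in mm
voxel_volume_ml = np.prod(voxel_mm) / 1000.0   # mm^3 -> millilitres

for value, name in [(1, "hippocampus"), (2, "ventricles")]:
    n_voxels = int((labels == value).sum())
    print(f"{name}: {n_voxels * voxel_volume_ml:.2f} ml")
```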

Applications for other brain diseases

The software was trained on images of 1117 people, all of whom underwent both CT and MRI imaging. The current study mainly involved healthy older individuals and patients with various forms of dementia. Another application that the team is now investigating is for normal pressure hydrocephalus (NPH).

With NPH, the team has obtained new results indicating that the method can be used both during diagnosis and to monitor the effects of treatment. NPH is a condition that occurs particularly in older people, whereby fluid builds up in the cerebral ventricular system and results in neurological symptoms. About two percent of all people over the age of 65 are affected. Because diagnosis can be complicated and the condition risks being confused with other diseases, many cases are likely to be missed.

“NPH is difficult to diagnose, and it can also be hard to safely evaluate the effect of shunt surgery to drain the fluid in the brain,” continues Michael. “We therefore believe that our method can make a big difference when caring for these patients.”

The software has been developed over the course of several years, and development is now continuing in cooperation with clinics in Sweden, the UK, and the US together with a company, which is a requirement for the innovation to be approved and transferred to healthcare.

Source: University of Gothenburg

Clinical Researchers Beware – ChatGPT is not a Reliable Aid

Photo by National Cancer Institute on Unsplash

Clinicians are all too familiar with the ‘Google patient’ who finds every scary, worst-case or outright false diagnosis online on whatever is ailing them. During COVID, misinformation spread like wildfire, eroding the public’s trust in vaccines and the healthcare profession. But now, AI models like ChatGPT can be whispering misleading information to the clinical researchers trying to produce real research.

Researchers from CHU Sainte-Justine and the Montreal Children’s Hospital recently posed 20 medical questions to ChatGPT. The chatbot provided answers of limited quality, including factual errors and fabricated references, according to the results of the study published in Mayo Clinic Proceedings: Digital Health.

“These results are alarming, given that trust is a pillar of scientific communication. ChatGPT users should pay particular attention to the references provided before integrating them into medical manuscripts,” says Dr Jocelyn Gravel, lead author of the study and emergency physician at CHU Sainte-Justine.

Questionable quality, fabricated references

The researchers drew their questions from existing studies and asked ChatGPT to support its answers with references. They then asked the authors of the articles from which the questions were taken to rate the software’s answers on a scale from 0 to 100%.

Out of 20 authors, 17 agreed to review the answers of ChatGPT. They judged them to be of questionable quality (median score of 60%). They also found major (five) and minor (seven) factual errors. For example, the software suggested administering an anti-inflammatory drug by injection, when it should be swallowed. ChatGPT also overestimated the global burden of mortality associated with Shigella infections by a factor of ten.

Of the references provided, 69% were fabricated, yet looked real. Most of the false citations (95%) used the names of authors who had already published articles on a related subject, or came from recognised organisations such as the Food and Drug Administration. The references all bore a title related to the subject of the question and used the names of known journals or websites. Even some of the real references contained errors (eight out of 18).

ChatGPT explains

When asked about the accuracy of the references provided, ChatGPT gave varying answers. In one case, it claimed, “References are available in Pubmed,” and provided a web link. This link referred to other publications unrelated to the question. At another point, the software replied, “I strive to provide the most accurate and up-to-date information available to me, but errors or inaccuracies can occur.”

Despite even the most ‘truthful’ of these responses, ChatGPT poses hidden risks to academia, the researchers say.

“The importance of proper referencing in science is undeniable. The quality and breadth of the references provided in authentic studies demonstrate that the researchers have performed a complete literature review and are knowledgeable about the topic. This process enables the integration of findings in the context of previous work, a fundamental aspect of medical research advancement. Failing to provide references is one thing but creating fake references would be considered fraudulent for researchers,” says Dr Esli Osmanlliu, emergency physician at the Montreal Children’s Hospital and scientist with the Child Health and Human Development Program at the Research Institute of the McGill University Health Centre.

“Researchers using ChatGPT may be misled by false information because clear, seemingly coherent and stylistically appealing references can conceal poor content quality,” adds Dr Osmanlliu.

This is the first study to assess the quality and accuracy of references provided by ChatGPT, the researchers point out.

Source: McGill University Health Centre

Would it be Ethical to Entrust Human Patients to Robotic Nurses?

Photo by Alex Knight on Unsplash

Advancements in AI have resulted in typically human characteristics like creativity, communication, critical thinking, and learning being replicated by machines for complex tasks like driving vehicles and creating art. With further development, these human-like attributes may develop enough to one day make it possible for robots and AI to be entrusted with nursing, a very ‘human’ practice. But… would it be ethical to entrust the care of humans to machines?

In a step toward answering this question, Japanese researchers recently explored the ethics of such a situation in the journal Nursing Ethics.

The study was conducted by Associate Professor Tomohide Ibuki from Tokyo University of Science, in collaboration with medical ethics researcher Dr Eisuke Nakazawa from The University of Tokyo and nursing researcher Dr Ai Ibuki from Kyoritsu Women’s University.

“This study in applied ethics examines whether robotics, human engineering, and human intelligence technologies can and should replace humans in nursing tasks,” says Dr Ibuki.

Nurses show empathy and establish meaningful connections with their patients, a human touch which is essential in fostering a sense of understanding, trust, and emotional support. The researchers examined whether the current advancements in robotics and AI can implement these human qualities by replicating the ethical concepts attributed to human nurses, including advocacy, accountability, cooperation, and caring.

Advocacy in nursing involves speaking on behalf of patients to ensure that they receive the best possible medical care. This encompasses safeguarding patients from medical errors, providing treatment information, acknowledging the preferences of a patient, and acting as mediators between the hospital and the patient. In this regard, the researchers noted that while AI can inform patients about medical errors and present treatment options, they questioned its ability to truly understand and empathise with patients’ values and to effectively navigate human relationships as mediators.

The researchers also expressed concerns about holding robots accountable for their actions. They suggested the development of explainable AI, which would provide insights into the decision-making process of AI systems, improving accountability.

The study further highlights that nurses are required to collaborate effectively with their colleagues and other healthcare professionals to ensure the best possible care for patients. As humans rely on visual cues to build trust and establish relationships, unfamiliarity with robots might lead to suboptimal interactions. Recognising this issue, the researchers emphasised the importance of conducting further investigations to determine the appropriate appearance of robots for facilitating efficient cooperation with human medical staff.

Lastly, while robots and AI have the potential to understand a patient’s emotions and provide appropriate care, the patient must also be willing to accept robots as care providers.

Having considered the above four ethical concepts in nursing, the researchers acknowledge that while robots may not fully replace human nurses anytime soon, they do not dismiss the possibility. While robots and AI can potentially reduce the shortage of nurses and improve treatment outcomes for patients, their deployment requires careful weighing of the ethical implications and impact on nursing practice.

“While the present analysis does not preclude the possibility of implementing the ethical concepts of nursing in robots and AI in the future, it points out that there are several ethical questions. Further research could not only help solve them but also lead to new discoveries in ethics,” concludes Dr Ibuki.

Source: Tokyo University of Science

Dr Robot Will See You Now: Medical Chatbots Need to be Regulated

Photo by Alex Knight on Unsplash

The Large Language Models (LLMs) used in chatbots may appear to offer reliable, persuasive advice in a format which mimics conversation, but in fact they can offer potentially harmful information when prompted with medical questions. Therefore, any LLM-chatbot in a medical setting would require approval as a medical device, argue experts in a paper published in Nature Medicine.

The mistake often made with LLM-chatbots is to assume that they are a true “artificial intelligence”, when in fact they are more closely related to the predictive text function on a smartphone. They are mostly built from conversations and text scraped from the internet, and use algorithms to associate words and sentences in a manner that appears meaningful.
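To see why the predictive-text analogy is apt, consider a toy bigram model: it continues text purely from word-to-word co-occurrence statistics, with no notion of truth. This deliberately tiny sketch (with a made-up corpus) captures the mechanism in miniature:

```python
# Toy bigram "language model": it picks each next word purely from
# co-occurrence counts in its training text. It has no concept of
# truth, only of which word tends to follow which: the predictive
# text analogy in miniature. The corpus is invented for illustration.
import random
from collections import Counter, defaultdict

corpus = ("the patient has a fever . the patient has a cough . "
          "the doctor has a plan .").split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        followers = bigrams[word]
        if not followers:
            break
        # Sample the next word in proportion to how often it followed.
        word = random.choices(list(followers),
                              weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

random.seed(0)
print(generate("the"))  # fluent-looking but truth-free continuation
```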

“Large Language Models are neural network language models with remarkable conversational skills. They generate human-like responses and engage in interactive conversations. However, they often generate highly convincing statements that are verifiably wrong or provide inappropriate responses. Today there is no way to be certain about the quality, evidence level, or consistency of clinical information or supporting evidence for any response. These chatbots are unsafe tools when it comes to medical advice and it is necessary to develop new frameworks that ensure patient safety,” said Prof Stephen Gilbert at TU Dresden.

Challenges in the regulatory approval of LLMs

Most people research their symptoms online before seeking medical advice, and search engines play a role in the decision-making process. The forthcoming integration of LLM-chatbots into search engines may increase users’ confidence in the answers given by a chatbot that mimics conversation. It has been demonstrated that LLMs can provide profoundly dangerous information when prompted with medical questions.

LLMs have no underlying medical “ground truth,” which is inherently dangerous. Chat-interfaced LLMs have already provided harmful medical responses and have already been used unethically in ‘experiments’ on patients without consent. Almost every medical LLM use case requires regulatory control in the EU and US. In the US, their lack of explainability disqualifies them from being ‘non-devices’. LLMs with explainability, low bias, predictability, correctness, and verifiable outputs do not currently exist, and they are not exempted from current (or future) governance approaches.

The authors describe in their paper the limited scenarios in which LLMs could find application under current frameworks. They also describe how developers can seek to create LLM-based tools that could be approved as medical devices, and they explore the development of new frameworks that preserve patient safety. “Current LLM-chatbots do not meet key principles for AI in healthcare, like bias control, explainability, systems of oversight, validation and transparency. To earn their place in medical armamentarium, chatbots must be designed for better accuracy, with safety and clinical efficacy demonstrated and approved by regulators,” concludes Prof Gilbert.

Source: Technische Universität Dresden

In the ICU, Artificial Intelligence Beats Humans

Image created using an AI art program, Craiyon, with the prompt “An AI monitoring a patient in an ICU ward”.

In the future, artificial intelligence will play an important role in medicine. In diagnostics, successful tests have already been performed with AI, such as accurately categorising images according to whether they show pathological changes or not. But training an AI to examine the time-varying condition of patients in an ICU in real time and to calculate treatment suggestions has remained a challenge. Now, researchers at TU Wien report in the Journal of Clinical Medicine that they have accomplished such a feat.

With the help of extensive data from the ICUs of various hospitals, an AI was developed that provides suggestions for the treatment of people who require intensive care due to sepsis. Analyses show that this AI already surpasses the quality of human decisions, which makes it important to also discuss the legal aspects of such methods.

Making optimal use of existing data

“In an intensive care unit, a lot of different data is collected around the clock. The patients are constantly monitored medically. We wanted to investigate whether these data could be used even better than before,” says Prof Clemens Heitzinger from the Institute for Analysis and Scientific Computing at TU Wien (Vienna).

Medical staff make their decisions on the basis of well-founded rules. Most of the time, they know very well which parameters they have to take into account in order to provide the best care. But a computer can easily take many more parameters into account than a human – and in some cases this leads to even better decisions.

The computer as planning agent

“In our project, we used a form of machine learning called reinforcement learning,” says Clemens Heitzinger. “This is not just about simple categorisation – for example, separating a large number of images into those that show a tumour and those that do not – but about a temporally changing progression, about the development that a certain patient is likely to go through. Mathematically, this is something quite different. There has been little research in this regard in the medical field.”

The computer becomes an agent that makes its own decisions: if the patient is well, the computer is “rewarded”. If the condition deteriorates or death occurs, the computer is “punished”. The computer programme has the task of maximising its virtual “reward” by taking actions. In this way, extensive medical data can be used to automatically determine a strategy which achieves a particularly high probability of success.
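The reward-and-punishment loop described above is the essence of reinforcement learning. The deliberately tiny tabular Q-learning sketch below (with a made-up two-state ‘patient’ environment, not the study’s actual model) shows the mechanics:

```python
# Tiny tabular Q-learning loop illustrating the reward/punishment idea.
# The two-state "patient" environment and rewards are invented for
# illustration; the study used far richer ICU data and models.
import random

STATES = ["stable", "deteriorating"]
ACTIONS = ["wait", "treat"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Made-up dynamics: treating a deteriorating patient usually helps."""
    if state == "deteriorating":
        if action == "treat":
            return ("stable", +1.0) if random.random() < 0.8 else (state, -1.0)
        return (state, -1.0)       # waiting on deterioration: "punished"
    if action == "treat":
        return (state, -0.2)       # unnecessary treatment: small penalty
    return (state, +0.1)           # stable and waiting: small "reward"

random.seed(1)
state = "deteriorating"
for _ in range(5000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    # Occasionally reset so the agent keeps seeing the risky state.
    state = next_state if random.random() < 0.9 else "deteriorating"

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
# Expected learned policy: deteriorating -> treat, stable -> wait.
```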

Already better than a human

“Sepsis is one of the most common causes of death in intensive care medicine and poses an enormous challenge for doctors and hospitals, as early detection and treatment is crucial for patient survival,” says Prof Oliver Kimberger from the Medical University of Vienna. “So far, there have been few medical breakthroughs in this field, which makes the search for new treatments and approaches all the more urgent. For this reason, it is particularly interesting to investigate the extent to which artificial intelligence can contribute to improve medical care here. Using machine learning models and other AI technologies are an opportunity to improve the diagnosis and treatment of sepsis, ultimately increasing the chances of patient survival.”

Analysis shows that AI capabilities are already outperforming humans: “Cure rates are now higher with an AI strategy than with purely human decisions. In one of our studies, the cure rate in terms of 90-day mortality was increased by about 3% to about 88%,” says Clemens Heitzinger.

Of course, this does not mean that one should leave medical decisions in an ICU to the computer alone. But the artificial intelligence may run along as an additional device at the bedside – and the medical staff can consult it and compare their own assessment with the AI’s suggestions. Such AIs can also be highly useful in education.

Discussion about legal issues is necessary

“However, this raises important questions, especially legal ones,” says Clemens Heitzinger. “One probably first thinks of the question of who will be held liable for any mistakes made by the artificial intelligence. But there is also the converse problem: what if the artificial intelligence had made the right decision, but the human chose a different treatment option and the patient suffered harm as a result?” Does the doctor then face the accusation that it would have been better to trust the artificial intelligence because it comes with a huge wealth of experience? Or should it be the human’s right to ignore the computer’s advice at all times?

“The research project shows: artificial intelligence can already be used successfully in clinical practice with today’s technology – but a discussion about the social framework and clear legal rules are still urgently needed,” Clemens Heitzinger is convinced.

Source: EurekAlert!