Category: Lab Tests and Imaging

New Blood Test for Ischaemic Stroke is a ‘Game-changer’

Ischaemic and haemorrhagic stroke. Credit: Scientific Animations CC4.0

A new study led by investigators from Brigham and Women’s Hospital has developed a test that combines blood-based biomarkers with a clinical score to identify patients experiencing large vessel occlusion (LVO) stroke with high accuracy. The results are published in the journal Stroke: Vascular and Interventional Neurology.

“We have developed a game-changing, accessible tool that could help ensure that more people suffering from stroke are in the right place at the right time to receive critical, life-restoring care,” said senior author Joshua Bernstock, MD, PhD, MPH, a clinical fellow in the Department of Neurosurgery at Brigham and Women’s Hospital.

Most strokes are ischaemic, in which blood flow to the brain is obstructed. LVO strokes are an aggressive type of ischaemic stroke caused by an obstruction in a major artery of the brain, which makes brain cells rapidly die off from lack of oxygen. Major medical emergencies, LVO strokes require swift treatment with mechanical thrombectomy, a surgical procedure that removes the blockage.

“Mechanical thrombectomy has allowed people who otherwise would have died or become significantly disabled to be completely restored, as if their stroke never happened,” said Bernstock. “The earlier this intervention is enacted, the better the patient’s outcome is going to be. This exciting new technology has the potential to allow more people globally to get this treatment faster.”

The research team previously targeted two specific proteins found in capillary blood, one called glial fibrillary acidic protein (GFAP), which is also associated with brain bleeds and traumatic brain injury, and one called D-dimer. In this study, they demonstrated that the levels of these blood-based biomarkers combined with Field Assessment Stroke Triage for Emergency Destination (FAST-ED) scores could identify LVO ischaemic strokes while ruling out other conditions such as bleeding in the brain. Brain bleeds cause similar symptoms to LVO stroke, making them hard to distinguish from one another in the field, yet treatment for each is vastly different.

In this prospective, observational diagnostic accuracy study, the researchers looked at data from a cohort of 323 patients coded for stroke in Florida between May 2021 and August 2022. They found that combining the levels of the biomarkers GFAP and D-dimer with FAST-ED data less than six hours from symptom onset allowed the test to detect LVO strokes with 93% specificity and 81% sensitivity. The test also ruled out all patients with brain bleeds, suggesting that it may eventually be used to detect intracerebral haemorrhage in the field as well.
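For readers less familiar with the reported accuracy figures, the sketch below shows how sensitivity and specificity are computed from a confusion matrix. The counts are hypothetical, chosen only to reproduce figures close to the study’s 81% sensitivity and 93% specificity; they are not the study’s actual data.

```python
def sensitivity(tp, fn):
    """Fraction of true LVO strokes the test correctly flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-LVO cases the test correctly rules out."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts for a 323-patient cohort
tp, fn = 81, 19   # LVO strokes detected / missed
tn, fp = 207, 16  # non-LVO correctly ruled out / falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.81
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.93
```

High specificity matters here because a false positive could send a patient without LVO past standard imaging toward an unnecessary intervention.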

Bernstock’s team also sees promising potential future use of this accessible diagnostic tool in low- and middle-income countries, where advanced imaging is not always available. It might also be useful in assessing patients with traumatic brain injuries. Next, they are carrying out another prospective trial to measure the test’s performance when used in an ambulance. They have also designed an interventional trial that leverages the technology to expedite the triage of stroke patients by having them bypass standard imaging and move directly to intervention.

“In stroke care, time is brain,” Bernstock said. “The sooner a patient is put on the right care pathway, the better they are going to do. Whether that means ruling out bleeds or ruling in something that needs an intervention, being able to do this in a prehospital setting with the technology that we built is going to be truly transformative.”

Source: Brigham and Women’s Hospital

Flexible Microdisplay Enables Real-time Visualisation in Neurosurgery

The device represents a huge leap forward, guiding neurosurgeons with visualised brain activity

The device’s LEDs can light up in several colours, allowing surgeons to see which areas they need to operate on and to track brain states during surgery, including the onset of epileptic seizures. Credit: UCSF

A thin film that combines an electrode grid and LEDs can both track and produce a visual representation of the brain’s activity in real time during surgery, a huge improvement over the current state of the art. The device is designed to provide neurosurgeons with visual information about a patient’s brain so they can monitor brain states during surgical interventions to remove brain lesions, including tumours and epileptic tissue.

The team behind the device describes their work in the journal Science Translational Medicine.

Each LED in the device represents the activity of a few thousand neurons. In a series of proof-of-concept experiments in rodents and large non-primate mammals, researchers showed that the device can effectively track and display neural activity in the brain corresponding to different areas of the body. In these experiments, the LEDs developed by the team light up red in the areas that need to be removed by the surgeon, while surrounding areas that control critical functions and should be avoided show up in green.

The study also showed that the device can visualise the onset and map the propagation of an epileptic seizure on the surface of the brain. This would allow physicians to isolate the ‘nodes’ of the brain that are involved in epilepsy. It also would allow physicians to deliver necessary treatment by removing tissue or by using electrical pulses to stimulate the brain.

“Neurosurgeons could see and stop a seizure before it spreads, view what brain areas are involved in different cognitive processes, and visualise the functional extent of tumour spread. This work will provide a powerful tool for the difficult task of removing a tumour from the most sensitive brain areas,” said Daniel Cleary, one of the study’s coauthors, a neurosurgeon and assistant professor at Oregon Health and Science University.

The device was conceived and developed by a team of engineers and physicians from University of California San Diego and Massachusetts General Hospital (MGH) and was led by Shadi Dayeh, the paper’s corresponding author and a professor in the Department of Electrical and Computer Engineering at UC San Diego.

Protecting critical brain functions

During brain surgery, physicians need to map brain function to define which areas of the organ control critical functions and can’t be removed. Currently, neurosurgeons work with a team of electrophysiologists during the procedure. But that team and their monitoring equipment are located in a different part of the operating room.

Brain areas that need to be protected and those that need to be operated on are either marked by electrophysiologists on a paper that is brought to the surgeon or communicated verbally to the surgeon, who then places sterile papers on the brain surface to mark these regions.

“Both are inefficient ways of communicating critical information during a procedure, and could impact its outcomes,” said Dr Angelique Paulk of MGH, who is a co-author and co-inventor of the technology.

In addition, the electrodes currently used to monitor brain activity during surgery do not produce detailed, fine-grained data. Surgeons therefore need to keep a buffer zone, known as the resection margin, of 5 to 7mm around the area they are removing inside the brain.

This means that they might leave some harmful tissue behind. The new device provides a level of detail that would shrink this buffer zone to less than 1mm.

“We invented the brain microdisplay to display with precision critical cortical boundaries and to guide neurosurgery in a cost-effective device that simplifies and reduces the time of brain mapping procedures,” said Dayeh.

Researchers installed the LEDs on top of another innovation from the Dayeh lab, the platinum nanorod electrode grid (PtNRGrid). Using the PtNRGrids since 2019, Dayeh’s team pioneered human brain and spinal cord mapping with thousands of channels to monitor brain neural activity.

They reported early safety and effectiveness results in tens of human subjects in a series of articles published in Science Translational Medicine in 2022, ahead of Neuralink and other companies in this space.

The PtNRGrid also includes perforations, which enable physicians to insert probes to stimulate the brain with electrical signals, both for mapping and for therapy.

How it’s made

The display uses gallium nitride-based micro-LEDs, bright enough to be seen under surgical lights. The two models built measure 5mm or 32mm on a side, with 1024 or 2048 LEDs, and capture brain activity at 20 000 samples per second.

“This enables precise and real-time displays of cortical dynamics during critical surgical interventions,” said Youngbin Tchoe, the first author and co-inventor, formerly a postdoc in the Dayeh group at UC San Diego and now an assistant professor at Ulsan National Institute of Science and Technology.

In addition to the LEDs, the device includes acquisition and control electronics as well as software drivers to analyse and project cortical activity directly from the surface of the brain.

“The brain iEEG-microdisplay can impressively both record the activity of the brain to a very fine degree and display this activity for a neurosurgeon to use in the course of surgery. We hope that this device will ultimately lead to better clinical outcomes for patients with its ability to both reveal and communicate the detailed activity of the underlying brain during surgery,” said study coauthor Jimmy Yang, a neurosurgeon and assistant professor at The Ohio State University.

Next steps

Dayeh’s team is working to build a microdisplay that will include 100 000 LEDs, with a resolution equivalent to that of a smartphone screen – for a fraction of the cost of a high-end smartphone. Each LED in those displays would reflect the activity of a few hundred neurons.

These brain microdisplays would also include a foldable portion. This would allow surgeons to operate within the foldable portion and monitor the impact of the procedure as the other, unfolded portion of the microdisplay shows the status of the brain in real time.

Researchers are also working on one limitation of the study: the close proximity of the LED sensors and the PtNRGrids introduced slight interference and noise in the data.

The team plans to build customised hardware to change the frequency of the pulses that turn on the LEDs to make it easier to screen out that signal, which is not relevant to the brain’s electrical activity.

Source: University of California San Francisco

Could Diamond Dust Replace Gadolinium in MRI?

Photo by Mart Production on Pexels

An unexpected discovery surprised a scientist at the Max Planck Institute for Intelligent Systems in Stuttgart: nanometre-sized diamond particles, which were intended for a completely different purpose, shone brightly in a magnetic resonance imaging experiment – outshining the actual contrast agent, the heavy metal gadolinium.

The researchers, publishing their serendipitous discovery in Advanced Materials, believe that diamond nanoparticles, in addition to their use in drug delivery to treat tumour cells, might one day become a novel MRI contrast agent.

While the discovery of diamond dust’s potential as a future MRI contrast agent may never be considered a turning point in science history, its signal-enhancing properties are nevertheless an unexpected finding that may open up new possibilities: diamond dust glows brightly even days after being injected.

Perhaps it could replace gadolinium, which has been used in clinics to enhance the brightness of tissues to detect tumours, inflammation, or vascular abnormalities for more than 30 years. But when injected into a patient’s bloodstream, gadolinium travels not only to tumour tissue but also to surrounding healthy tissue. It is retained in the brain and kidneys, persisting months to years after the last administration, and its long-term effects are not yet known. Gadolinium also causes a number of other side effects, and the search for an alternative has been going on for years.

Serendipity often advances science

Could diamond dust, a carbon-based material, become a well-tolerated alternative, thanks to an unexpected discovery made in a laboratory at the Max Planck Institute for Intelligent Systems in Stuttgart?

Dr Jelena Lazovic Zinnanti was working on an experiment using nanometre-sized diamond particles for an entirely different purpose. The research scientist, who heads the Central Scientific Facility Medical Systems at MPI-IS, was surprised when she put the 3–5nm particles into tiny drug-delivery capsules made of gelatin. She wanted these capsules to rupture when exposed to heat. She assumed that diamond dust, with its high heat capacity, could help.

“I had intended to use the dust only to heat up the drug carrying capsules,” Jelena recollects.

“I used gadolinium to track the dust particles’ position. I intended to learn if the capsules with diamonds inside would heat up better. While performing preliminary tests, I got frustrated, because gadolinium would leak out of the gelatin – just as it leaks out of the bloodstream into the tissue of a patient. I decided to leave gadolinium out. When I took MRI images a few days later, to my surprise, the capsules were still bright. Wow, this is interesting, I thought! The diamond dust seemed to have better signal enhancing properties than gadolinium. I hadn’t expected that.”

Jelena took these findings further by injecting the diamond dust into live chicken embryos. She discovered that while gadolinium diffuses everywhere, the diamond nanoparticles stayed in the blood vessels, didn’t leak out and later shone brightly in the MRI, just as they had done in the gelatin capsules.

While other scientists had published papers showing how they used diamond particles attached to gadolinium for magnetic resonance imaging, no one had ever shown that diamond dust itself could be a contrast agent. Two years later, Jelena became the lead author of a paper now published in Advanced Materials.

“Why the diamond dust shines bright in our MRI still remains a mystery to us,” says Jelena.

She can only assume the reason is the dust’s magnetic properties: “I think the tiny particles have carbons that are slightly paramagnetic. The particles may have a defect in their crystal lattice, making them slightly magnetic. That’s why they behave like a T1 contrast agent such as gadolinium. Additionally, we don’t know whether diamond dust could potentially be toxic, something that needs to be carefully examined in the future.”

Source: Max Planck Institute for Intelligent Systems

Researchers Demonstrate the Effect of Neurochemicals on fMRI Readings

Photo by Fakurian Design on Unsplash

The brain is an incredibly complex and active organ that uses electricity and chemicals to transmit and receive signals between its sub-regions. Researchers have explored various technologies to directly or indirectly measure these signals to learn more about the brain. Functional magnetic resonance imaging (fMRI), for example, allows them to detect brain activity via changes related to blood flow.

Yen-Yu Ian Shih, PhD, professor of neurology and associate director of UNC’s Biomedical Research Imaging Center, and his fellow lab members have long been curious about how neurochemicals in the brain regulate and influence neural activity, blood flow, and subsequently, fMRI measurement in the brain.

A new study by the lab has confirmed their suspicions that fMRI interpretation is not as straightforward as it seems.

“Neurochemical signalling to blood vessels is less frequently considered when interpreting fMRI data,” said Shih, who also leads the Center for Animal MRI. “In our study on rodent models, we showed that neurochemicals, aside from their well-known signalling actions to typical brain cells, also signal to blood vessels, and this could have significant contributions to fMRI measurements.”

Their findings, published in Nature Communications, stem from the installation and upgrade of two 9.4-Tesla animal MRI systems and a 7-Tesla human MRI system at the Biomedical Research Imaging Center.

When activity in neurons increases in a specific brain region, blood flow and oxygen levels increase in the area, usually proportionate to the strength of neural activity. Researchers decided to use this phenomenon to their advantage and eventually developed fMRI techniques to detect these changes in the brain.

For years, this method has helped researchers better understand brain function and influenced their knowledge about human cognition and behaviour. The new study from Shih’s lab, however, demonstrates that this well-established neuro-vascular relationship does not apply across the entire brain because cell types and neurochemicals vary across brain areas.

Shih’s team focused on the striatum, a region deep in the brain involved in cognition, motivation, reward, and sensorimotor function, to identify the ways in which certain neurochemicals and cell types in the brain region may be influencing fMRI signals.

For their study, Shih’s lab controlled neural activity in rodent brains using a light-based technique, while measuring electrical, optical, chemical, and vascular signals to help interpret fMRI data. The researchers then manipulated the brain’s chemical signalling by injecting different drugs into the brain and evaluated how the drugs influenced the fMRI responses.

They found that in some cases, neural activity in the striatum went up but the blood vessels constricted, causing negative fMRI signals. This is related to internal opioid signalling in the striatum. Conversely, when another neurochemical, dopamine, predominated signalling in the striatum, the fMRI signals were positive.

“We identified several instances where fMRI signals in the striatum can look quite different from expected,” said Shih. “It’s important to be mindful of underlying neurochemical signaling that can influence blood vessels or perivascular cells in parallel, potentially overshadowing the fMRI signal changes triggered by neural activity.”

Members of Shih’s lab, including first- and co-authors Dominic Cerri, PhD, and Lindsey Walton, PhD, travelled to the University of Sussex in the United Kingdom, where they were able to perform experiments and further demonstrate the opioid’s vascular effects.

They also collected human fMRI data at UNC’s 7-Tesla MRI system and collaborated with researchers at Stanford University to explore possible findings using transcranial magnetic stimulation, a procedure that uses magnetic fields to stimulate the human brain.

By better understanding fMRI signalling, basic science researchers and physician scientists will be able to provide more precise insights into neural activity changes in healthy brains, as well as in cases of neurological and neuropsychiatric disorders.

Source: UNC School of Medicine

Is AI a Help or Hindrance to Radiologists? It’s Down to the Doctor

New research shows AI isn’t always a help for radiologists

Photo by Anna Shvets

One of the most touted promises of medical artificial intelligence tools is their ability to augment human clinicians’ performance by helping them interpret images such as X-rays and CT scans with greater precision to make more accurate diagnoses.

But the benefits of using AI tools on image interpretation appear to vary from clinician to clinician, according to new research led by investigators at Harvard Medical School, working with colleagues at MIT and Stanford.

The study findings suggest that individual clinician differences shape the interaction between human and machine in critical ways that researchers do not yet fully understand. The analysis, published in Nature Medicine, is based on data from an earlier working paper by the same research group released by the National Bureau of Economic Research.

In some instances, the research showed, use of AI can interfere with a radiologist’s performance and reduce the accuracy of their interpretation.

“We find that different radiologists, indeed, react differently to AI assistance – some are helped while others are hurt by it,” said co-senior author Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.

“What this means is that we should not look at radiologists as a uniform population and consider just the ‘average’ effect of AI on their performance,” he said. “To maximize benefits and minimize harm, we need to personalize assistive AI systems.”

The findings underscore the importance of carefully calibrated implementation of AI into clinical practice, but they should in no way discourage the adoption of AI in radiologists’ offices and clinics, the researchers said.

Instead, the results should signal the need to better understand how humans and AI interact and to design carefully calibrated approaches that boost human performance rather than hurt it.

“Clinicians have different levels of expertise, experience, and decision-making styles, so ensuring that AI reflects this diversity is critical for targeted implementation,” said Feiyang “Kathy” Yu, who conducted the work while at the Rajpurkar lab and was co-first author on the paper with Alex Moehring of the MIT Sloan School of Management.

“Individual factors and variation would be key in ensuring that AI advances rather than interferes with performance and, ultimately, with diagnosis,” Yu said.

AI tools affected different radiologists differently

While previous research has shown that AI assistants can, indeed, boost radiologists’ diagnostic performance, these studies have looked at radiologists as a whole without accounting for variability from radiologist to radiologist.

In contrast, the new study looks at how individual clinician factors – area of specialty, years of practice, prior use of AI tools – come into play in human-AI collaboration.

The researchers examined how AI tools affected the performance of 140 radiologists on 15 X-ray diagnostic tasks – how reliably the radiologists were able to spot telltale features on an image and make an accurate diagnosis. The analysis involved 324 patient cases with 15 pathologies: abnormal conditions captured on X-rays of the chest.

To determine how AI affected doctors’ ability to spot and correctly identify problems, the researchers used advanced computational methods that captured the magnitude of change in performance when using AI and when not using it.

The effect of AI assistance was inconsistent and varied across radiologists: the performance of some radiologists improved with AI, while that of others worsened.
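The core quantity in this kind of analysis is the per-reader change in performance with and without AI assistance. The toy calculation below illustrates the idea; the reader names and accuracy values are invented for illustration and are not study data.

```python
# Hypothetical per-radiologist diagnostic accuracies, unaided vs AI-assisted.
# All names and numbers are made up for illustration.
readers = {
    "reader_A": {"unaided": 0.78, "ai_assisted": 0.84},
    "reader_B": {"unaided": 0.81, "ai_assisted": 0.79},
    "reader_C": {"unaided": 0.74, "ai_assisted": 0.74},
}

# Per-reader change in accuracy when using AI: positive = helped, negative = hurt.
deltas = {name: acc["ai_assisted"] - acc["unaided"] for name, acc in readers.items()}

for name, delta in sorted(deltas.items()):
    direction = "helped" if delta > 0 else "hurt" if delta < 0 else "unchanged"
    print(f"{name}: {delta:+.2f} ({direction})")
```

Averaging these deltas over all readers would hide exactly the heterogeneity the study reports: a near-zero mean effect can mask readers who are substantially helped and others who are substantially hurt.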

AI tools influenced human performance unpredictably

AI’s effects on human radiologists’ performance varied in often surprising ways.

For instance, contrary to what the researchers expected, factors such as how many years of experience a radiologist had, whether they specialised in thoracic, or chest, radiology, and whether they’d used AI readers before did not reliably predict how an AI tool would affect a doctor’s performance.

Another finding that challenged the prevailing wisdom: Clinicians who had low performance at baseline did not benefit consistently from AI assistance. Some benefited more, some less, and some none at all. Overall, however, lower-performing radiologists at baseline had lower performance with or without AI. The same was true among radiologists who performed better at baseline. They performed consistently well, overall, with or without AI.

Then came a not-so-surprising finding: More accurate AI tools boosted radiologists’ performance, while poorly performing AI tools diminished the diagnostic accuracy of human clinicians.

While the analysis was not done in a way that allowed researchers to determine why this happened, the finding points to the importance of testing and validating AI tool performance before clinical deployment, the researchers said. Such pre-testing could ensure that inferior AI doesn’t interfere with human clinicians’ performance and, therefore, patient care.

What do these findings mean for the future of AI in the clinic?

The researchers cautioned that their findings do not provide an explanation for why and how AI tools seem to affect performance across human clinicians differently, but note that understanding why would be critical to ensuring that AI radiology tools augment human performance rather than hurt it.

To that end, the team noted, AI developers should work with physicians who use their tools to understand and define the precise factors that come into play in the human-AI interaction.

And, the researchers added, the radiologist-AI interaction should be tested in experimental settings that mimic real-world scenarios and reflect the actual patient population for which the tools are designed.

Apart from improving the accuracy of the AI tools, it’s also important to train radiologists to detect inaccurate AI predictions and to question an AI tool’s diagnostic call, the research team said. To achieve that, AI developers should ensure that they design AI models that can “explain” their decisions.

“Our research reveals the nuanced and complex nature of machine-human interaction,” said study co-senior author Nikhil Agarwal, professor of economics at MIT. “It highlights the need to understand the multitude of factors involved in this interplay and how they influence the ultimate diagnosis and care of patients.”

Source: Harvard Medical School

A Better View of Atherosclerotic Plaques with New Imaging Technique

Source: Wikimedia CC0

Researchers have developed a new catheter-based device that combines two powerful optical techniques to image atherosclerotic plaques that can build up inside the heart’s coronary arteries. By providing new details about plaque, the device could help clinicians and researchers improve treatments for preventing heart attacks and strokes.

“Atherosclerosis, leading to heart attacks and strokes, is the number one cause of death in Western societies – exceeding all combined cancer types – and, therefore, a major public health issue,” said research team leader Laura Marcu from the University of California, Davis. “Better clinical management made possible by advanced intravascular imaging tools will benefit patients by providing more accurate information to help cardiologists tailor treatment or by supporting the development of new therapies.”

In the Optica Publishing Group journal Biomedical Optics Express, researchers describe their new flexible device, which combines fluorescence lifetime imaging (FLIM) and polarisation-sensitive optical coherence tomography (PSOCT) to capture rich information about the composition, morphology and microstructure of atherosclerotic plaques. The work was a collaborative project with Brett Bouma and Martin Villiger, experts in OCT from the Wellman Center for Photomedicine at Massachusetts General Hospital.

“With further testing and development, our device could be used for longitudinal studies where intravascular imaging is obtained from the same patients at different timepoints, providing a picture of plaque evolution or response to therapeutic interventions,” said Julien Bec, first author of the paper. “This will be very valuable to better understand disease evolution, evaluate the efficacy of new drugs and treatments and guide stenting procedures used to restore normal blood flow.”

Gaining an unprecedented view

Most of what scientists know about how atherosclerosis forms and develops over time comes from histopathology studies of postmortem coronary specimens. Although the development of imaging systems such as intravascular ultrasound and intravascular OCT has made it possible to study plaques in living patients, there is still a need for improved methods and tools to investigate and characterise atherosclerosis.

To address this need, the researchers embarked on a multi-year research project to develop and validate multispectral FLIM as an intravascular imaging modality. FLIM can provide insights into features such as the composition of the extracellular matrix, the presence of inflammation and the degree of calcification inside an artery. In earlier work, they combined FLIM with intravascular ultrasound, and in this new work they combined it with PSOCT. PSOCT provides high-resolution morphological information along with birefringence and depolarisation measurements. When used together, FLIM and PSOCT provide an unprecedented amount of information on plaque morphology, microstructure and biochemical composition.

“Birefringence provides information about the plaque collagen, a key structural protein that helps with lesion stabilization, and depolarisation is related to lipid content that contributes to plaque destabilization,” said Bec. “Holistically, this hybrid approach can provide the most detailed picture of plaque characteristics of all intravascular imaging modalities reported to date.”

Getting two imaging modalities into one device

The development of multimodal intravascular imaging systems compatible with coronary catheterisation is technologically challenging. It requires flexible catheters less than 1mm in diameter that can operate in vessels with sharp twists and turns. A high imaging speed of around 100 frames per second is also necessary to limit cardiac motion artefacts and ensure proper imaging inside an artery.

To integrate FLIM and PSOCT into a single device without compromising the performance of either imaging modality, the researchers used optical components previously developed by Marcu’s lab and other research groups. Key to achieving high PSOCT performance was a newly designed rotary collimator with high light throughput and a high return loss, ie the ratio of incident power to the power reflected back toward the light source; a higher return loss means less unwanted back-reflection. The catheter system they developed has dimensions and flexibility similar to those of the intravascular imaging devices currently in clinical use.
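As a quick illustration of the return-loss figure of merit, the sketch below applies the standard decibel formula from optics; it is generic, not specific to this device, and the power values are made up.

```python
import math

def return_loss_db(p_incident, p_reflected):
    """Return loss in decibels: 10 * log10(incident / reflected power).

    A higher value means less light is reflected back toward the source.
    """
    return 10 * math.log10(p_incident / p_reflected)

# Example: a collimator reflecting 1% of incident light back toward the
# source has a return loss of about 20 dB.
print(f"{return_loss_db(1.0, 0.01):.1f} dB")
```

In an OCT system, light reflected back from the collimator itself would swamp the faint signal returning from tissue, which is why a high return loss matters here.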

After testing the new system with artificial tissue to demonstrate basic functionality on well-characterised samples, the researchers also showed that it could be used to measure properties of a healthy coronary artery removed from a pig. Finally, in vivo testing in swine hearts demonstrated that the hybrid catheter system’s performance was sufficient to support work toward clinical validation. These tests all showed that the FLIM-PSOCT catheter system could simultaneously acquire co-registered FLIM data over four distinct spectral bands and PSOCT backscattered intensity, birefringence and depolarisation information.

Next, the researchers plan to use the intravascular imaging system to image plaques in ex vivo human coronary arteries. By comparing the optical signals acquired using the system with plaque characteristics identified by expert pathologists, they can better understand which features can be identified by FLIM-PSOCT and use this to develop prediction models. They also plan to move forward with testing in support of clinical validation of the system in patients.

Source: Optica

New, More Accurate Approach to Blood Tests for Determining Diabetes Risks

Photo by National Cancer Institute on Unsplash

A new approach to blood tests could potentially be used to estimate a patient’s risk of type 2 diabetes, according to a new study appearing in BMC’s Journal of Translational Medicine. Currently, the most commonly used inflammatory biomarker for predicting the risk of type 2 diabetes is high-sensitivity C-reactive protein (CRP). But new research has suggested that jointly assessing biomarkers, rather than assessing each individually, would improve the chances of predicting diabetes risk and diabetic complications.

A study by Edith Cowan University (ECU) researcher Dan Wu investigated the connection between systemic inflammation, assessed jointly by cumulative high-sensitivity CRP and another biomarker called the monocyte to high-density lipoprotein ratio (MHR), and incident type 2 diabetes.

The study followed more than 40 800 non-diabetic participants over a near ten-year period, with more than 4800 of the participants developing diabetes over this period.

Wu said that among the patients who developed type 2 diabetes, a significant interaction between MHR and CRP was observed.

“Specifically, increases in the MHR in each CRP stratum increased the risk of type 2 diabetes; concomitant increases in MHR and CRP presented significantly higher incidence rates and risks of diabetes.

“Furthermore, the association between chronic inflammation (reflected by the joint cumulative MHR and CRP exposure) and incident diabetes was highly age- and sex-specific and influenced by hypertension, high cholesterol, or prediabetes. The addition of the MHR and CRP to the clinical risk model significantly improved the prediction of incident diabetes,” said Wu.
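The joint stratification described, in which risk rises when both biomarkers are elevated together, can be illustrated with a toy classification rule. The cut-off values below are hypothetical placeholders, not thresholds from the study:

```python
def joint_inflammation_stratum(crp_mg_l: float, mhr: float,
                               crp_cut: float = 3.0, mhr_cut: float = 0.5) -> str:
    """Classify joint CRP/MHR exposure (hypothetical cut-offs, not study values)."""
    high_crp = crp_mg_l >= crp_cut
    high_mhr = mhr >= mhr_cut
    if high_crp and high_mhr:
        return "both elevated: highest diabetes risk"
    if high_crp or high_mhr:
        return "one elevated: intermediate risk"
    return "neither elevated: reference risk"

print(joint_inflammation_stratum(4.2, 0.7))  # both elevated: highest diabetes risk
```

The point of the joint rule is that the "both elevated" stratum carries a higher risk than either marker alone would suggest, which is what assessing each biomarker individually would miss.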

Biological sex a risk factor

The study found that females had a greater risk of type 2 diabetes conferred by joint increases in CRP and MHR, with Wu stating that sex hormones could account for these differences.

Wu said that the research findings corroborated the involvement of chronic inflammation in causing early-onset diabetes and merited specific attention.

“Epidemiological evidence indicates a consistent increase in early-onset diabetes, especially in developing countries. Leveraging this age-specific association between chronic inflammation and type 2 diabetes may be a promising method for achieving early identification of at-risk young adults and developing personalised interventions,” she added.

Wu noted that the chronic progressive nature of diabetes and the enormous burden of subsequent comorbidities further highlighted the urgent need to address this critical health issue.

Although ageing and genetics are non-modifiable risk factors, other risk factors could be modified through lifestyle changes.

Inflammation is strongly influenced by life activities and metabolic conditions such as diet, sleep disruptions, chronic stress, and glucose and cholesterol dysregulation, thereby indicating the potential benefits of monitoring risk-related metabolic conditions.

Wu said that the dual advantages of cost-effectiveness and the wide availability of cumulative MHR and CRP measurements in current clinical settings could support the widespread use of these measures as a convenient tool for predicting the risk of diabetes.

Source: Edith Cowan University

Terahertz Biosensor can Accurately Detect Skin Cancer

3D structure of a melanoma cell derived by ion abrasion scanning electron microscopy. Credit: Sriram Subramaniam/ National Cancer Institute

Researchers have developed a revolutionary biosensor using terahertz (THz) waves that can detect skin cancer with exceptional sensitivity, potentially paving the way for earlier and easier diagnoses. Published in the journal IEEE Transactions on Biomedical Engineering, the study presents a significant advancement in early cancer detection, thanks to a multidisciplinary collaboration of teams from Queen Mary University of London and the University of Glasgow.

“Traditional methods for detecting skin cancer often involve expensive, time-consuming CT and PET scans, and invasive higher-frequency technologies,” explains Dr Shohreh Nourinovin, Postdoctoral Research Associate at Queen Mary’s School of Electronic Engineering and Computer Science, and the study’s first author.

“Our biosensor offers a non-invasive and highly efficient solution, leveraging the unique properties of THz waves – a type of radiation with lower energy than X-rays, thus safe for humans – to detect subtle changes in cell characteristics.”

The key innovation lies in the biosensor’s design. Featuring tiny, asymmetric resonators on a flexible substrate, it can detect subtle changes in the properties of cells.

Unlike traditional methods that rely solely on refractive index, this device analyses a combination of parameters, including resonance frequency, transmission magnitude, and a value called “Full Width at Half Maximum” (FWHM). This comprehensive approach provides a richer picture of the tissue, allowing for more accurate differentiation between healthy and cancerous cells and for measurement of the tissue’s degree of malignancy.
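The FWHM describes how wide the resonance dip in the transmission spectrum is at half its depth; together with the dip's position and magnitude it summarises the resonance in three numbers. A rough sketch of extracting these parameters from a sampled spectrum, using synthetic Lorentzian data rather than real sensor output:

```python
import numpy as np

def resonance_features(freq_thz, transmission):
    """Extract resonance frequency, transmission magnitude at resonance,
    and FWHM from a transmission spectrum with a single resonance dip."""
    i_min = int(np.argmin(transmission))
    f_res = freq_thz[i_min]                       # resonance frequency
    t_min = transmission[i_min]                   # transmission magnitude at the dip
    half = (transmission.max() + t_min) / 2       # half-depth level
    below = np.where(transmission <= half)[0]     # samples inside the dip
    fwhm = freq_thz[below[-1]] - freq_thz[below[0]]
    return f_res, t_min, fwhm

# Synthetic Lorentzian dip centred at 1.0 THz with a 0.05 THz FWHM
f = np.linspace(0.5, 1.5, 2001)
gamma = 0.025  # half-width at half-maximum
t = 1 - 0.8 * gamma**2 / ((f - 1.0)**2 + gamma**2)
f_res, t_min, fwhm = resonance_features(f, t)
```

A shift in any of the three extracted values between a reference spectrum and a tissue measurement is the kind of change such a sensor reads out.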

In tests, the biosensor successfully differentiated between normal skin cells and basal cell carcinoma (BCC) cells, even at different concentrations. This ability to detect early-stage cancer holds immense potential for improving patient outcomes.

“The implications of this study extend far beyond skin cancer detection,” says Dr Nourinovin.

“This technology could be used for early detection of various cancers and other diseases, like Alzheimer’s, with potential applications in resource-limited settings due to its portability and affordability.”

Dr Nourinovin’s research journey wasn’t without its challenges.

Initially focusing on THz spectroscopy for cancer analysis, her project was temporarily halted due to the COVID pandemic. However, this setback led her to explore the potential of THz metasurfaces, a novel approach that sparked a new chapter in her research.

Source: Queen Mary University of London

Experimental Model Identifies New Drug–drug Interactions

Photo by Myriam Zilles on Unsplash

When a patient takes an oral drug, transporter proteins found on cells that line the gastrointestinal tract facilitate its entry into the bloodstream. But for many drugs, it is not known which of those transporters they use to exit the digestive tract.

Identifying the transporters used by specific drugs could help to improve patient treatment because if two drugs rely on the same transporter, they can interfere with each other and should not be prescribed together.

Researchers at MIT, Brigham and Women’s Hospital, and Duke University have developed a multipronged strategy to identify the transporters used by different drugs, which appears in Nature Biomedical Engineering. Their approach, which makes use of both tissue models and machine-learning algorithms, has already revealed that a commonly prescribed antibiotic and a blood thinner can interfere with each other.

“One of the challenges in modelling absorption is that drugs are subject to different transporters. This study is all about how we can model those interactions, which could help us make drugs safer and more efficacious, and predict potential toxicities that may have been difficult to predict until now,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

Learning more about which transporters help drugs pass through the digestive tract could also help drug developers improve the absorbability of new drugs by adding excipients that enhance their interactions with transporters.

Former MIT postdocs Yunhua Shi and Daniel Reker are the lead authors of the study.

Drug transport

Previous studies have identified several transporters in the GI tract that help drugs pass through the intestinal lining. Three of the most commonly used, which were the focus of the new study, are BCRP, MRP2, and PgP.

For this study, Traverso and his colleagues adapted a tissue model they had developed in 2020 to measure a given drug’s absorbability. This experimental setup, based on pig intestinal tissue grown in the laboratory, can be used to systematically expose tissue to different drug formulations and measure how well they are absorbed.

To study the role of individual transporters within the tissue, the researchers used short strands of RNA called siRNA to knock down the expression of each transporter. In each section of tissue, they knocked down different combinations of transporters, which enabled them to study how each transporter interacts with many different drugs.

“There are a few roads that drugs can take through tissue, but you don’t know which road. We can close the roads separately to figure out, if we close this road, does the drug still go through? If the answer is yes, then it’s not using that road,” Traverso says.
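The "closed roads" logic amounts to inference by elimination: if a set of transporters is knocked down and the drug still gets through, none of those transporters is strictly required. A toy deduction over hypothetical knockdown results (not data from the study):

```python
def transporters_used(knockdown_results):
    """Infer which transporters a drug strictly requires.

    knockdown_results maps a frozenset of knocked-down transporters to
    whether the drug was still absorbed in that tissue section.
    """
    candidates = {"BCRP", "MRP2", "PgP"}
    ruled_out = set()
    for knocked_down, absorbed in knockdown_results.items():
        if absorbed:
            # The drug crossed without these transporters ("the road was
            # closed but the drug still went through"), so none of them
            # is strictly required.
            ruled_out |= knocked_down
    return candidates - ruled_out

# Hypothetical results: absorption persists unless PgP is silenced
results = {
    frozenset({"BCRP"}): True,
    frozenset({"MRP2"}): True,
    frozenset({"PgP"}): False,
}
print(transporters_used(results))  # {'PgP'}
```

This sketch only identifies transporters whose loss blocks absorption; a drug that can use several transporters redundantly would need knockdown combinations, as in the study, to tease apart.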

The researchers tested 23 commonly used drugs using this system, allowing them to identify transporters used by each of those drugs. Then, they trained a machine-learning model on that data, as well as data from several drug databases. The model learned to make predictions of which drugs would interact with which transporters, based on similarities between the chemical structures of the drugs.
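The structural-similarity idea can be illustrated with a nearest-neighbour sketch: represent each drug as a substructure fingerprint, compare fingerprints with Tanimoto similarity, and borrow the transporter labels of the most similar known drug. The fingerprints, names, and labels below are entirely made up, and the study's actual model is more sophisticated than this:

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity between two substructure fingerprints."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def predict_transporters(query_fp, known_drugs, threshold=0.5):
    """Predict transporters for a new drug from its most similar
    known drug, if the similarity clears a threshold."""
    best_name, best_sim = max(
        ((name, tanimoto(query_fp, fp)) for name, (fp, _) in known_drugs.items()),
        key=lambda pair: pair[1],
    )
    return known_drugs[best_name][1] if best_sim >= threshold else set()

# Toy fingerprints (sets of substructure IDs) with transporter labels
known = {
    "drug_a": ({1, 2, 3, 4}, {"PgP"}),
    "drug_b": ({7, 8, 9}, {"BCRP", "MRP2"}),
}
print(predict_transporters({1, 2, 3, 5}, known))  # {'PgP'}
```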

Using this model, the researchers analysed a new set of 28 currently used drugs, as well as 1595 experimental drugs. This screen yielded nearly 2 million predictions of potential drug interactions. Among them was the prediction that doxycycline, an antibiotic, could interact with warfarin, a commonly prescribed blood-thinner. Doxycycline was also predicted to interact with digoxin, which is used to treat heart failure, levetiracetam, an antiseizure medication, and tacrolimus, an immunosuppressant.

Identifying interactions

To test those predictions, the researchers looked at data from about 50 patients who had been taking one of those drugs when they were prescribed doxycycline. These data, which came from a patient database at Massachusetts General Hospital and Brigham and Women’s Hospital, showed that when doxycycline was given to patients already taking warfarin, the level of warfarin in the patients’ bloodstream went up, then went back down again after they stopped taking doxycycline.

That data also confirmed the model’s predictions that the absorption of doxycycline is affected by digoxin, levetiracetam, and tacrolimus. Only one of those drugs, tacrolimus, had been previously suspected to interact with doxycycline.

“These are drugs that are commonly used, and we are the first to predict this interaction using this accelerated in silico and in vitro model,” Traverso says. “This kind of approach gives you the ability to understand the potential safety implications of giving these drugs together.”

Source: Massachusetts Institute of Technology

“Movies” with Colour and Music Visualise Brain Activity Data in Beautiful Detail

Novel toolkit translates neuroimaging data into audiovisual formats to aid interpretation

Simple audiovisualisation of wide field neural activity. Adapted from Thibodeaux et al., 2024, PLOS ONE, CC-BY 4.0

Complex neuroimaging data can be explored through translation into an audiovisual format – a video with accompanying musical soundtrack – to help interpret what happens in the brain when performing certain behaviours. David Thibodeaux and colleagues at Columbia University, US, present this technique in the open-access journal PLOS ONE on February 21, 2024. Examples of these beautiful “brain movies” are included below.

Recent technological advances have made it possible for multiple components of activity in the awake brain to be recorded in real time. Scientists can now observe, for instance, what happens in a mouse’s brain when it performs specific behaviours or receives a certain stimulus. However, such research produces large quantities of data that can be difficult to intuitively explore to gain insights into the biological mechanisms behind brain activity patterns.

Prior research has shown that some brain imaging data can be translated into audible representations. Building on such approaches, Thibodeaux and colleagues developed a flexible toolkit that enables translation of different types of brain imaging data – and accompanying video recordings of lab animal behaviour – into audiovisual representations.

The researchers then demonstrated the new technique in three different experimental settings, showing how audiovisual representations can be prepared with data from various brain imaging approaches, including 2D wide-field optical mapping (WFOM) and 3D swept confocally aligned planar excitation (SCAPE) microscopy.

The toolkit was applied to previously collected WFOM data that detected both neural activity and brain blood flow changes in mice engaging in different behaviours, such as running or grooming. Neuronal data were represented by piano notes struck in time with spikes in brain activity, with the volume of each note indicating the magnitude of activity and its pitch indicating the location in the brain where the activity occurred. Meanwhile, blood flow data were represented by violin sounds. The piano and violin sounds, played in real time, demonstrate the coupled relationship between neuronal activity and blood flow. Viewed alongside a video of the mouse, a viewer can discern which patterns of brain activity correspond to different behaviours.

The authors note that their toolkit is not a substitute for quantitative analysis of neuroimaging data. Nonetheless, it could help scientists screen large datasets for patterns that might otherwise have gone unnoticed and are worth further analysis.

The authors add: “Listening to and seeing representations of [brain activity] data is an immersive experience that can tap into this capacity of ours to recognise and interpret patterns (consider the online security feature that asks you to “select traffic lights in this image” – a challenge beyond most computers, but trivial for our brains)…[It] is almost impossible to watch and focus on both the time-varying [brain activity] data and the behavior video at the same time, our eyes will need to flick back and forth to see things that happen together. You generally need to continually replay clips over and over to be able to figure out what happened at a particular moment. Having an auditory representation of the data makes it much simpler to see (and hear) when things happen at the exact same time.”

  1. Audiovisualisation of neural activity from the dorsal surface of the thinned skull cortex of the awake mouse.
  2. Audiovisualisation of neural activity from the dorsal surface of the thinned skull cortex of the ketamine/xylazine anaesthetised mouse.
  3. Audiovisualisation of SCAPE microscopy data capturing calcium activity in apical dendrites in the awake mouse brain.
  4. Audiovisualisation of neural activity and blood flow from the dorsal surface of the thinned skull cortex of the awake mouse.

Video Credits: Thibodeaux et al., 2024, PLOS ONE, CC-BY 4.0