Many survivors of cardiogenic shock showed evidence of new cognitive impairment after leaving the hospital, according to a study led by UT Southwestern Medical Center researchers. The findings, published in the Journal of the American College of Cardiology, highlight a need to screen survivors and provide referrals to neuropsychology experts, the authors said.
“Our study demonstrated that nearly two-thirds of cardiogenic shock survivors experienced cognitive impairment within three months of hospital discharge, underscoring a critical but overlooked aspect of recovery,” said senior investigator James de Lemos, MD, Professor of Internal Medicine and Chief of the Division of Cardiology at UT Southwestern. “The findings are important for developing interventions that focus not only on improving survival but also on preventing or mitigating the functional consequences of cardiogenic shock, including cognitive decline.”
Cardiogenic shock results from heart failure, heart attack, or complications following cardiac surgery, and is characterised by a sudden drop in heart pumping ability. It results in acute hypoperfusion and hypoxia of the organs and has historically resulted in high mortality.
With advances in treatment during the past two decades, up to 70% of patients suffering from cardiogenic shock can now survive. But there is limited understanding of survivors’ recovery and quality of life after they leave the hospital.
“Our study is the first to systematically examine the cognitive outcomes of cardiogenic shock survivors, evaluating how cognition impacts patients’ ability to return to daily activities,” said Eric Hall, MD, a clinical fellow in the Division of Cardiology who was the study leader and first author. “We found that cardiogenic shock is associated with cognitive impairment, which is an under-recognized consequence strongly linked to patients’ overall quality of life.”
UTSW researchers enrolled 141 patients who had survived cardiogenic shock, recruiting them before they were discharged. To establish a baseline, family members completed a questionnaire, the AD8 survey, about the patients’ cognitive function before hospitalisation.
Before discharge, each patient completed the Montreal Cognitive Assessment-Blind (bMoCA) to screen for signs of cognitive impairment. Three months after discharge, patients repeated the assessments, allowing researchers to track changes in thinking ability and daily functioning over time.
Among patients with no sign of cognitive impairment before admission, 65% were found to have new impairment at discharge, and 53% continued to show impairment at their three-month follow-up. UTSW researchers emphasized that these findings should inform the development of comprehensive survivorship programs including screening protocols to identify impairments patients face and rehabilitation programs to help them recover from those challenges.
“We hope to use this study as a foundation to develop targeted rehabilitation strategies that connect patients with neuropsychology experts and improve long-term recovery in cardiogenic shock survivors,” Dr de Lemos said.
Healing spinal cord injuries with the help of electricity: Researchers have developed an ultra-thin implant that can be placed directly on the spinal cord, where it delivers a carefully controlled electrical current across the injured area. In a recent study, researchers observed how the electrical field treatment led to improved recovery in rats with spinal cord injuries, with the animals regaining movement and sensation. Please note that the image shows a newer model of the implant than the one used in the study. Photo and illustration: University of Auckland
Researchers at Chalmers University of Technology in Sweden and the University of Auckland in New Zealand have developed a groundbreaking bioelectric implant that restores movement in rats after injuries to the spinal cord.
This breakthrough, published in Nature Communications, offers new hope for an effective treatment for humans suffering from loss of sensation and function due to spinal cord injury.
Electricity stimulated nerve fibres to reconnect
Before birth, and to a lesser extent afterwards, naturally occurring electric fields play a vital role in early nervous system development, encouraging and guiding the growth of nerve fibres along the spinal cord. Scientists are now harnessing this same electrical guidance system in the lab.
“We developed an ultra-thin implant designed to sit directly on the spinal cord, precisely positioned over the injury site in rats,” says Bruce Harland, senior research fellow, University of Auckland, and one of the lead researchers of the study.
The device delivers a carefully controlled electrical current across the injury site.
“The aim is to stimulate healing so people can recover functions lost through spinal cord injury,” says Maria Asplund, Professor of Bioelectronics at Chalmers University of Technology. She is, together with Professor Darren Svirskis of the University of Auckland, one of the lead researchers of the study.
In the study, researchers observed how electrical field treatment improved the recovery of locomotion and sensation in rats with spinal cord injury. The findings offer renewed hope for individuals experiencing loss of function and sensation due to spinal cord injuries.
“Long-term, the goal is to transform this technology into a medical device that could benefit people living with life-changing spinal-cord injuries,” says Maria Asplund.
The study presents the first use of a thin implant that delivers stimulation in direct contact with the spinal cord, marking a groundbreaking advancement in the precision of spinal cord stimulation.
“This study offers an exciting proof of concept showing that electric field treatment can support recovery after spinal cord injury,” says doctoral student Lukas Matter, Chalmers University of Technology, the other lead researcher alongside Harland.
Improved mobility after four weeks
Rats have a greater capacity for spontaneous recovery after spinal cord injury than humans, which allowed researchers to compare natural healing with healing supported by electrical stimulation.
After four weeks, animals that received daily electric field treatment showed improved movement compared with those that did not. Throughout the 12-week study, they also responded more quickly to gentle touch.
“This indicates that the treatment supported recovery of both movement and sensation,” Harland says.
“Just as importantly, our analysis confirmed that the treatment did not cause inflammation or other damage to the spinal cord, demonstrating that it was not only effective but also safe,” Svirskis says.
The next step is to explore how different doses – including the strength, frequency, and duration of the treatment – affect recovery, in order to find the most effective recipe for spinal-cord repair.
Ischaemic and haemorrhagic stroke. Credit: Scientific Animations CC4.0
Patients with atrial fibrillation who have experienced a stroke would benefit greatly from earlier treatment than is currently recommended in UK guidelines, finds a new study led by UCL researchers.
The results of the CATALYST study, published in The Lancet, included data from four randomised trials with a total of 5,441 patients across the UK, Switzerland, Sweden, and the United States, all of whom had experienced a recent stroke (between 2017 and 2024) due to a blocked artery and atrial fibrillation (irregular heartbeat).
Patients had either started medication early (within four days of their stroke) or later (after five days or more).
The researchers found that starting direct oral anticoagulants (DOACs, which thin the blood to prevent it from clotting as quickly) within four days of having a stroke was safe, with no increase in bleeding into the brain. Additionally, early initiation of treatment significantly reduced the risk of another stroke, whether caused by bleeding or artery blockage, by 30% compared with later initiation.
People with atrial fibrillation who have had a stroke have an increased risk of having another stroke, but this risk can be reduced by taking anticoagulants.
Anticoagulants come with the rare but dangerous side effect of bleeding into the brain, and there is a lack of evidence about when is best to start taking them after a stroke. Current UK guidelines are varied, suggesting that those who have had a moderate or severe stroke should wait at least five days before starting blood-thinning treatments.
To tackle this question, the researchers investigated the impact of early compared to delayed anticoagulant treatment.
Chief Investigator, Professor David Werring (UCL Queen Square Institute of Neurology) said: “Our new study supports the early initiation of DOACs in clinical practice, offering better protection against further strokes for a wide range of patients.”
The researchers now hope that their findings will influence clinical guidelines and improve outcomes for stroke patients worldwide.
First author and main statistician, Dr Hakim-Moulay Dehbi (UCL Comprehensive Clinical Trials Unit), said: “By systematically combining the data from four clinical trials, we have identified with increased confidence, compared to the individual trials, that early DOAC initiation is effective.”
The CATALYST study builds on findings from the British Heart Foundation funded OPTIMAS study – where the UCL-led research team analysed 3621 patients with atrial fibrillation who had had a stroke between 2019 and 2024, across 100 UK hospitals.
Half of the participants began anticoagulant treatment within four days of their stroke (early), and the other half started treatment seven to 14 days after having a stroke (delayed). Patients were followed up after 90 days to assess several outcomes including whether they went on to have another stroke and whether they experienced bleeding in the brain.
Both the early and late groups experienced a similar number of recurrent strokes. Early treatment was found to be effective and did not increase the risk of a bleed into the brain.
Professor Nick Freemantle, Senior Investigator and Director of the UCL Comprehensive Clinical Trials Unit (CCTU) that co-ordinated the OPTIMAS trial, said: “The benefits of early initiation of blood-thinning treatment are clear: patients receive the definitive and effective long-term stroke prevention therapy promptly, rather than waiting. This approach ensures that crucial treatments are not delayed or missed, particularly for patients who are discharged from the hospital.”
Study limitations
The timing for starting blood-thinning medication was based on previous trial designs (such as OPTIMAS), which may not cover all possible scenarios. Additionally, not all participants were randomised to the same timing groups, so some data was excluded. Lastly, the study didn’t include many patients with very severe strokes, so the findings might not apply to those cases.
A diabetes medication that lowers brain fluid pressure has cut monthly migraine days by more than half, according to a new study presented at the European Academy of Neurology (EAN) Congress 2025.1
Researchers at the Headache Centre of the University of Naples “Federico II” gave the glucagon-like peptide-1 (GLP-1) receptor agonist liraglutide to 26 adults with obesity and chronic migraine (defined as ≥ 15 headache days per month). Patients reported an average of 11 fewer headache days per month, while disability scores on the Migraine Disability Assessment Test dropped by 35 points, indicating a clinically meaningful improvement in work, study, and social functioning.
GLP-1 agonists have gained recent widespread attention, reshaping treatment approaches for several diseases, including diabetes and cardiovascular disease.2 In the treatment of type 2 diabetes, liraglutide helps lower blood sugar levels and reduce body weight by suppressing appetite and reducing energy intake.3,4,5
Importantly, while participants’ body-mass index declined slightly (from 34.01 to 33.65), this change was not statistically significant. An analysis of covariance confirmed that BMI reduction had no effect on headache frequency, strengthening the hypothesis that pressure modulation, not weight loss, drives the benefit.
“Most patients felt better within the first two weeks and reported quality of life improved significantly”, said lead researcher Dr Simone Braca. “The benefit lasted for the full three-month observation period, even though weight loss was modest and statistically non-significant.”
Patients were screened to exclude papilledema (optic disc swelling resulting from increased intracranial pressure) and sixth nerve palsy, ruling out idiopathic intracranial hypertension (IIH) as a confounding factor. Growing evidence closely links subtle increases in intracranial pressure to migraine attacks.6 GLP-1-receptor agonists such as liraglutide reduce cerebrospinal fluid secretion and have already proved effective in treating IIH.7 Therefore, building on these observations, Dr Braca and colleagues hypothesised that exploiting the same mechanism of action might ultimately dampen cortical and trigeminal sensitisation that underlie migraine.
“We think that, by modulating cerebrospinal fluid pressure and reducing compression of the intracranial venous sinuses, these drugs produce a decrease in the release of calcitonin gene-related peptide (CGRP), a key migraine-promoting peptide”, Dr Braca explained. “That would pose intracranial pressure control as a brand-new, pharmacologically targetable pathway.”
Mild gastrointestinal side effects (mainly nausea and constipation) occurred in 38% of participants but did not lead to treatment discontinuation.
Following this exploratory 12-week pilot study, a randomised, double-blind trial with direct or indirect intracranial pressure measurement is now being planned by the same research team in Naples, led by Professor Roberto De Simone. “We also want to determine whether other GLP-1 drugs can deliver the same relief, possibly with even fewer gastrointestinal side effects”, Dr Braca noted.
If confirmed, GLP-1-receptor agonists could offer a new treatment option for the estimated one in seven people worldwide who live with migraine,8 particularly those who do not respond to current preventives. Given liraglutide’s established use in type 2 diabetes and obesity, it may represent a promising case of drug repurposing in neurology.
References
1. Braca S., Russo C., et al. GLP-1R agonists for the treatment of migraine: a pilot prospective observational study. Abstract A-25-13975. Presented at the 11th EAN Congress (Helsinki, Finland).
2. Zheng, Z., Zong, Y., Ma, Y., et al. Glucagon-like peptide-1 receptor: mechanisms and advances in therapy. Sig. Transduct. Target. Ther. 9, 234 (2024).
3. Lin, C. H., et al. An evaluation of liraglutide including its efficacy and safety for the treatment of obesity. Expert Opin. Pharmacother. 21, 275–285 (2020).
4. Moon, S., et al. Efficacy and safety of the new appetite suppressant, liraglutide: a meta-analysis of randomized controlled trials. Endocrinol. Metab. (Seoul) 36, 647–660 (2021).
5. Jacobsen, L. V., Flint, A., Olsen, A. K., & Ingwersen, S. H. Liraglutide in type 2 diabetes mellitus: clinical pharmacokinetics and pharmacodynamics. Clin. Pharmacokinet. 55, 657–672 (2016).
6. De Simone, R., Sansone, M., Russo, C., Miele, A., Stornaiuolo, A., & Braca, S. The putative role of the trigemino-vascular system in brain perfusion homeostasis and the significance of the migraine attack. Neurol. Sci. 43(9), 5665–5672 (2022).
7. Mitchell, J. L., Lyons, H. S., Walker, J. K., et al. The effect of GLP-1RA exenatide on idiopathic intracranial hypertension: a randomised clinical trial. Brain 146(5), 1821–1830 (2023).
8. Steiner, T. J., Stovner, L. J., Jensen, R., et al. Migraine remains second among the world’s causes of disability. J. Headache Pain 21, 137 (2020).
A medical team at Erasmus University Medical Center in the Netherlands uses the new imaging probe with a Quest camera to get a better view of cancerous tumors during non-brain cancer surgery. Photo courtesy of Erasmus University Medical Center
In a significant leap forward for successful cancer surgery, researchers at the University of Missouri and collaborators have developed a new imaging probe to help surgeons more accurately identify and remove aggressive tumours during operations.
The tool is expected to be a critical advancement in the fight against glioblastoma, one of the most difficult-to-treat brain cancers. In the future, it is intended to be expanded for image-guided surgery of various other solid tumours.
Described in a new study in Nature Publishing Group Imaging, the innovation works by pairing a fluorescent dye with a fatty acid molecule that cancer cells readily absorb. When introduced into the body, the compound is taken up by tumour cells, causing them to glow under near-infrared light, revealing cancer that might otherwise remain hidden.
Glioblastoma is considered surgically incurable because the tumour doesn’t stay in one place – it spreads and invades healthy brain tissue in a diffuse, microscopic way. This makes it impossible to remove completely without risking serious damage to brain function.
“Surgery remains one of the primary treatments for many cancers,” Elena Goun, associate professor of chemistry in the College of Arts and Science and one of the lead authors of the study, said. “In breast or prostate cancer, surgeons can often remove the tumour along with surrounding tissue. In brain cancer, that’s simply not possible. You must preserve healthy brain tissue. But if even a few cancer cells are left behind, the disease will return.”
That dilemma is especially acute with glioblastoma, which doesn’t form a neatly contained mass. Instead, it sends out microscopic extensions — finger-like projections that blend into healthy brain tissue and are invisible to the naked eye.
Because of this, surgeons must walk a fine line: removing as much tumour as possible while avoiding harm to vital brain areas. The more thoroughly the tumour is removed, the more effective follow-up treatments like radiation and chemotherapy tend to be.
The new small-molecule probe, known as FA-ICG, is engineered to solve that problem. It links a natural long-chain fatty acid (FA) to indocyanine green (ICG), an FDA-approved near-infrared dye widely used in surgical imaging. This fatty acid-based approach means the probe is highly selective: glioblastoma cells, which thrive on fatty acids, absorb it more than normal brain cells. That makes the cancer stand out more clearly.
The result is a tool that takes advantage of cancer’s altered metabolism to highlight tumour cells from within.
“Surgeons would view a monitor during surgery showing where the probe is lighting up,” Goun explained. “If they still see fluorescent signals, it means cancer is still present and more tissue needs to be removed. When the light disappears, they would know they’ve cleared the area.”
In the operating room, surgeons already use a variety of tools to guide tumour removal – including microscopes, ultrasound and fluorescent dyes. Of those, fluorescent dyes are particularly useful because they make otherwise invisible tumour cells light up under special lighting.
Right now, the only approved imaging dye for glioblastoma surgery is 5-ALA, which fluoresces under blue light. But 5-ALA comes with major limitations: The operating room must be darkened in order to see it, tissue penetration is shallow and the fluorescent signal is often weak and non-specific.
It also comes with side effects, including photosensitivity, meaning patients must avoid bright light exposure after surgery due to the risk of skin and eye damage.
That’s where the FA-ICG probe shines – both literally and functionally.
Compared to 5-ALA, FA-ICG is brighter, works under normal surgical lighting, and offers real-time visualisation under the microscope – no need to turn the lights off mid-surgery. This saves time and makes procedures more efficient. The signal-to-background ratio is also higher, meaning it’s easier to distinguish tumour tissue from healthy brain.
The FA-ICG probe is not only easier to see, it’s also easier to use. Its longer half-life allows more flexibility in scheduling surgeries, and the logistics of administration are simpler than with current probes.
“The upside of fluorescence-guided surgery is that you can make little remnants much more visible using the light emitting properties of these tumour cells when you give them a dye,” said Rutger Balvers, a neurosurgeon at Erasmus University Medical Center in the Netherlands, who is expected to lead human clinical trials of the probe. “And we think that the upside of FA-ICG compared to what we have now is that it’s more select in targeting tumour cells. The visual properties of the probe are better than what we’ve used before.”
Michael Chicoine is a neurosurgeon at MU Health Care and chair of Mizzou’s School of Medicine’s Department of Neurosurgery. While he’s not directly involved in the research, Chicoine understands the potential benefits firsthand.
Currently, he said, MRIs are the gold standard for imaging tumours; however, they’re expensive and time-consuming, especially when required during an operation.
“This fluorescent metabolically linked tool gives you real-time imaging,” he said. “We could merge techniques, using the probe during surgery and saving the MRI for a sort of final exam. It’s definitely an exciting advancement.”
Researchers are also excited about other uses for the probe, including for other types of cancers and for use during follow-up treatments.
“After radiation or chemotherapy, it becomes very difficult to distinguish between scar tissue and active tumor,” Chicoine said. “This probe could give us a definitive answer – helping doctors know whether to continue treatment or adjust it, or consider another surgery. Eliminating the current uncertainty would be really helpful.”
Another promising use of the probe could be in photodynamic therapy either during or after surgery. Since the dye also has light-activated properties that can kill cancer cells, researchers are exploring whether it could double as a treatment tool, not just a diagnostic one.
Clinical trials for use in glioblastoma cases are expected to start in Europe, with strong interest already growing among neurosurgical teams.
The upcoming Phase 1 trial will focus on how patients tolerate the probe, whether there are any side effects at an effective dose and how its performance compares to existing tools. Ultimately, the goal is to make brain tumour surgery safer, helping surgeons remove all cancerous tissues while preserving as much healthy brain tissue as possible.
If results are positive, future studies could expand the use of FA-ICG beyond brain tumours to other cancers with high fatty acid metabolism, such as pancreatic cancer, according to fellow corresponding author Laura Mezzanotte of Erasmus’ Department of Radiology and Nuclear Medicine.
While scientists have long known that different senses activate different parts of the brain, a new Yale-led study indicates that multiple senses all stimulate a critical region deep in the brain that controls consciousness.
The study, published in the journal NeuroImage, sheds new light on how sensory perception works in the brain and may fuel the development of therapies to treat disorders involving attention, arousal, and consciousness.
In the study, a research team led by Yale’s Aya Khalaf focused on the workings of subcortical arousal systems, brain structure networks that play a crucial role in regulating sleep-wake states. Previous studies on patients with disorders of consciousness, such as coma or epilepsy, have confirmed the influence of these systems on states of consciousness.
But prior research has been largely limited to tracking individual senses. For the new study, researchers asked if stimuli from multiple senses share the same subcortical arousal networks. They also looked at how shifts in a subject’s attention might affect these networks.
For the study, researchers analysed fMRI (functional magnetic resonance imaging) datasets collected from 1,561 healthy adult participants as they performed 11 different tasks using four senses: vision, audition, taste, and touch.
They made two important discoveries: that sensory input does make use of shared subcortical systems and, more surprisingly, that all input, regardless of which sense delivered the signal, stimulates activity in two deep brain regions, the midbrain reticular formation and the central thalamus, when a subject is sharply focused on the senses.
The key to stimulating the critical central brain regions, they found, were the sudden shifts in attention demanded by the tasks.
“We were expecting to find activity on shared networks, but when we saw all the senses light up the same central brain regions while a test subject was focusing, it was really astonishing,” said Khalaf, a postdoctoral associate in neurology at Yale School of Medicine and lead author of the study.
The discovery highlighted how key these central brain regions are in regulating not only disorders of consciousness, but also conditions that impact attention and focus, such as attention deficit hyperactivity disorder. This finding could lead to better targeted medications and brain stimulation techniques for patients.
“This has also given us insights into how things work normally in the brain,” said senior author Hal Blumenfeld, the Mark Loughridge and Michele Williams Professor of Neurology who is also a professor in neuroscience and neurosurgery and director of the Yale Clinical Neuroscience Imaging Center. “It’s really a step forward in our understanding of awareness and consciousness.”
This is the first time researchers have seen a result like this across multiple senses, said Khalaf, who is also part of Blumenfeld’s lab.
“It tells us how important this brain region is and what it could mean in efforts to restore consciousness,” she said.
A new, highly efficient process for performing this conversion could make it easier to develop therapies for spinal cord injuries or diseases like ALS.
Anne Trafton | MIT News
Researchers at MIT have devised a simplified process to convert a skin cell directly into a neuron. This image shows converted neurons (green) that have integrated with neurons in the brain’s striatum after implantation.
Credit: Image courtesy of the researchers
Converting one type of cell to another – for example, a skin cell to a neuron – can be done through a process that requires the skin cell to be induced into a “pluripotent” stem cell, then differentiated into a neuron. Researchers at MIT have now devised a simplified process that bypasses the stem cell stage, converting a skin cell directly into a neuron.
Working with mouse cells, the researchers developed a conversion method that is highly efficient and can produce more than 10 neurons from a single skin cell. If replicated in human cells, this approach could enable the generation of large quantities of motor neurons, which could potentially be used to treat patients with spinal cord injuries or diseases that impair mobility.
“We were able to get to yields where we could ask questions about whether these cells can be viable candidates for the cell replacement therapies, which we hope they could be. That’s where these types of reprogramming technologies can take us,” says Katie Galloway, the W. M. Keck Career Development Professor in Biomedical Engineering and Chemical Engineering.
As a first step toward developing these cells as a therapy, the researchers showed that they could generate motor neurons and engraft them into the brains of mice, where they integrated with host tissue.
Galloway is the senior author of two papers describing the new method, which appear today in Cell Systems. MIT graduate student Nathan Wang is the lead author of both papers.
From skin to neurons
Nearly 20 years ago, scientists in Japan showed that by delivering four transcription factors to skin cells, they could coax them to become induced pluripotent stem cells (iPSCs). Similar to embryonic stem cells, iPSCs can be differentiated into many other cell types. This technique works well, but it takes several weeks, and many of the cells don’t end up fully transitioning to mature cell types.
“Oftentimes, one of the challenges in reprogramming is that cells can get stuck in intermediate states,” Galloway says. “So, we’re using direct conversion, where instead of going through an iPSC intermediate, we’re going directly from a somatic cell to a motor neuron.”
Galloway’s research group and others have demonstrated this type of direct conversion before, but with very low yields – fewer than 1 percent. In Galloway’s previous work, she used a combination of six transcription factors plus two other proteins that stimulate cell proliferation. Each of those eight genes was delivered using a separate viral vector, making it difficult to ensure that each was expressed at the correct level in each cell.
In the first of the new Cell Systems papers, Galloway and her students reported a way to streamline the process so that skin cells can be converted to motor neurons using just three transcription factors, plus the two genes that drive cells into a highly proliferative state.
Using mouse cells, the researchers started with the original six transcription factors and experimented with dropping them out, one at a time, until they reached a combination of three – NGN2, ISL1, and LHX3 – that could successfully complete the conversion to neurons.
Once the number of genes was down to three, the researchers could use a single modified virus to deliver all three of them, allowing them to ensure that each cell expresses each gene at the correct levels.
Using a separate virus, the researchers also delivered genes encoding p53DD and a mutated version of HRAS. These genes drive the skin cells to divide many times before they start converting to neurons, allowing for a much higher yield of neurons, about 1100 percent.
“If you were to express the transcription factors at really high levels in nonproliferative cells, the reprogramming rates would be really low, but hyperproliferative cells are more receptive. It’s like they’ve been potentiated for conversion, and then they become much more receptive to the levels of the transcription factors,” Galloway says.
The researchers also developed a slightly different combination of transcription factors that allowed them to perform the same direct conversion using human cells, but with a lower efficiency rate – between 10 and 30 percent, the researchers estimate. This process takes about five weeks, which is slightly faster than converting the cells to iPSCs first and then turning them into neurons.
Implanting cells
Once the researchers identified the optimal combination of genes to deliver, they began working on the best ways to deliver them, which was the focus of the second Cell Systems paper.
They tried out three different delivery viruses and found that a retrovirus achieved the most efficient rate of conversion. Reducing the density of cells grown in the dish also helped to improve the overall yield of motor neurons. This optimised process, which takes about two weeks in mouse cells, achieved a yield of more than 1000 percent.
Working with colleagues at Boston University, the researchers then tested whether these motor neurons could be successfully engrafted into mice. They delivered the cells to a part of the brain known as the striatum, which is involved in motor control and other functions.
After two weeks, the researchers found that many of the neurons had survived and seemed to be forming connections with other brain cells. When grown in a dish, these cells showed measurable electrical activity and calcium signaling, suggesting the ability to communicate with other neurons. The researchers now hope to explore the possibility of implanting these neurons into the spinal cord.
The MIT team also hopes to increase the efficiency of this process for human cell conversion, which could allow for the generation of large quantities of neurons that could be used to treat spinal cord injuries or diseases that affect motor control, such as ALS. Clinical trials using neurons derived from iPSCs to treat ALS are now underway, but expanding the number of cells available for such treatments could make it easier to test and develop them for more widespread use in humans, Galloway says.
The research was funded by the National Institute of General Medical Sciences and the National Science Foundation Graduate Research Fellowship Program.
A breakthrough study from the Hebrew University of Jerusalem, published this week in the prestigious journal PNAS (Proceedings of the National Academy of Sciences USA), reveals a previously unknown peripheral mechanism by which paracetamol relieves pain.
The study was led by Prof Alexander Binshtok from the Hebrew University’s Faculty of Medicine and Center for Brain Sciences (ELSC) and Prof Avi Priel from its School of Pharmacy. Together, they uncovered a surprising new way that paracetamol, one of the world’s most common painkillers, actually works.
For decades, scientists believed that paracetamol relieved pain by working only in the brain and spinal cord. But this new research shows that the drug also works outside the brain, in the nerves that first detect pain.
Their discovery centres on a substance called AM404, which the body makes after taking paracetamol. The team found that AM404 is produced right in the pain-sensing nerve endings – and that it works by shutting off specific channels (called sodium channels) that help transmit pain signals. By blocking these channels, AM404 stops the pain message before it even starts.
“This is the first time we’ve shown that AM404 works directly on the nerves outside the brain,” said Prof Binshtok. “It changes our entire understanding of how paracetamol fights pain.”
This breakthrough could also lead to new types of painkillers. Because AM404 targets only the nerves that carry pain, it may avoid the numbness, muscle weakness, and side effects that come with traditional local anaesthetics.
“If we can develop new drugs based on AM404, we might finally have pain treatments that are highly effective but also safer and more precise,” added Prof Priel.
A new study led by Keck Medicine of USC researchers may have uncovered an effective combination therapy for glioblastoma, a brain tumour diagnosis with few available effective treatments. According to the National Brain Tumor Society, the average survival for patients diagnosed with glioblastoma is eight months.
The study, published in the journal Med, finds that Tumour Treating Fields (TTFields) therapy, which delivers targeted waves of electric fields directly into tumours to stop their growth and signal the body’s immune system to attack cancerous tumour cells, may extend survival among patients with glioblastoma when combined with immunotherapy (pembrolizumab) and chemotherapy (temozolomide).
TTFields disrupt tumour growth using low-intensity, alternating electric fields that push and pull key structures inside tumour cells in continually shifting directions, making it difficult for the cells to multiply. Preventing tumour growth gives patients a better chance of successfully fighting the cancer. When used to treat glioblastoma, TTFields are delivered through a set of mesh electrodes that are strategically positioned on the scalp, generating fields at a precise frequency and intensity focused on the tumour. Patients wear the electrodes for approximately 18 hours a day.
Researchers observed that TTFields attract more tumour-fighting T cells, which are white blood cells that identify and attack cancer cells, into and around the glioblastoma. When followed by immunotherapy, these T cells stay active longer and are replaced by even stronger, more effective tumour-fighting T cells.
“By using TTFields with immunotherapy, we prime the body to mount an attack on the cancer, which enables the immunotherapy to have a meaningful effect in ways that it could not before,” said David Tran, MD, PhD, chief of neuro-oncology with Keck Medicine, co-director of the USC Brain Tumor Center and corresponding author of the study. “Our findings suggest that TTFields may be the key to unlocking the value of immunotherapy in treating glioblastoma.”
TTFields are often combined with chemotherapy in cancer treatment. However, even with aggressive treatment, the prognosis for glioblastoma remains poor. Immunotherapy, while successful in many other cancer types, has also not proved effective for glioblastoma when used on its own.
However, in this study, adding immunotherapy to TTFields and chemotherapy was associated with a 70% increase in overall survival. Notably, patients with larger, unresected (not surgically removed) tumours showed an even stronger immune response to TTFields and lived even longer. This suggests that, when it comes to kick-starting the body’s immune response against the cancer, having a larger tumour may provide more targets for the therapy to work against.
Using alternating electric fields to unlock immunotherapy
Pembrolizumab, the immunotherapy used in this study, is an immune checkpoint inhibitor (ICI), which enhances the body’s natural ability to fight cancers by improving T cells’ ability to identify and attack cancer cells.
However, there are typically few T cells in and around glioblastomas because these tumours originate in the brain and are shielded from the body’s natural immune response by the blood-brain barrier. This barrier safeguards the brain by tightly regulating which cells and substances enter from the bloodstream. Sometimes, this barrier even blocks T cells and other therapies that could help kill brain tumours.
This immunosuppressive environment inside and around the glioblastoma is what makes common cancer therapies like pembrolizumab and chemotherapy significantly less effective in treating it. Tran theorised the best way to get around this issue was to start an immune reaction directly inside the tumour itself, an approach known as in situ immunisation, using TTFields.
This study demonstrates that combining TTFields with immunotherapy triggers a potent immune response within the tumour – one that ICIs can then amplify to bolster the body’s own defence against cancer.
“Think of it like a team sport – immunotherapy sends players in to attack the tumour (the offence), while TTFields weaken the tumour’s ability to fight back (the defence). And just like in team sports, the best defence is a good offence,” said Tran, who is also a member of the USC Norris Comprehensive Cancer Center.
Study methodology and results
The study analysed data from 2-THE-TOP, a Phase 2 clinical trial, which enrolled 31 newly diagnosed glioblastoma patients who had completed chemoradiation therapy. Of those, 26 received TTFields combined with both chemotherapy and immunotherapy. Seven of these 26 patients had inoperable tumours due to their locations – an especially high-risk subgroup with the worst prognosis and few treatment options.
Patients in the trial were given six to 12 monthly treatments of chemotherapy alongside TTFields for up to 24 months. The number and duration of treatments were determined by patients’ response to treatment. The immunotherapy was given every three weeks, starting with the second dose of chemotherapy, for up to 24 months.
Patients who used the device alongside chemotherapy and immunotherapy lived approximately 10 months longer than patients who had used the device with chemotherapy alone in the past. Moreover, those with large, inoperable tumours lived approximately 13 months longer and showed much stronger immune activation compared to patients who underwent surgical removal of their tumours.
“Further studies are needed to determine the optimal role of surgery in this setting, but these findings may offer hope, particularly for glioblastoma patients who do not have surgery as an option,” said Tran.
The researchers are now moving ahead to a Phase 3 trial.
Diffusion tensor imaging shows corpus callosum fibre tracts in two adolescents: one with traumatic brain injury (TBI; G and H) and one with an orthopaedic injury (E and F). At 3 months post-injury (E, G), early degeneration and loss of fibre tracts are visible, especially in the TBI case. At 18 months (F, H), some recovery or reorganisation occurs, but persistent loss and thinning of tracts remain, particularly in the frontal regions, indicating lasting white matter damage after TBI.
By Kathy Malherbe
A silent but devastating brain disease is casting a shadow over contact and collision sports, particularly rugby. Traumatic brain injuries (TBIs), caused by an impact to the head, disrupt the normal function of the brain. Repeated TBIs are linked to an increased risk of neurodegenerative diseases; early-onset dementia is the most prevalent and most concerning, but others include Parkinson’s disease, Alzheimer’s disease and Chronic Traumatic Encephalopathy, better known as CTE.
How head injuries happen
Dr Hofmeyr Viljoen, radiologist at SCP Radiology, says that there are several types of head injuries common in rugby. ‘The most frequent being TBIs which occur when the impact and sudden movement results in the brain shifting rotationally, sideways or backwards and forwards within the skull. This stretching and elongation causes damage to nerve fibres as well as blood vessels. Surprisingly, a direct blow isn’t always necessary. Rapid acceleration and deceleration, such as during a tackle or fall, can also result in an injury. More severe head injuries may include skull fractures, bruising or bleeding around the brain, all of which require urgent diagnosis and intervention.’
Riaan van Tonder, a sports physician with a special interest in sports-related concussion and radiology registrar at Stellenbosch University, explains that concussions and, even more so, repetitive sub-concussive impacts, result in a cascade of changes at a cellular level, gradually damaging the nervous system.
Although rugby is notorious for heavy tackles and collisions, it took a lawsuit to prompt more widespread awareness. A class-action suit filed in the High Court in London by former union and league players accuses World Rugby of failing to implement adequate rules to assess, diagnose and manage concussions. The early-onset dementia of Steve Thompson, the legendary English hooker, has been one of the sport’s biggest talking points. He was diagnosed in 2020 with this neurodegenerative disease, purportedly as a result of repeated trauma to the brain. The claimants argue that the governing bodies were negligent and that their neurological problems stem from years of unmanaged head injuries. The outcome of the case, to be heard in 2025, could significantly reshape the legal and medical responsibilities of sports organisations globally.
What is Chronic Traumatic Encephalopathy (CTE)
CTE is a progressive neurodegenerative condition strongly linked to repeated head impacts. It has been implicated in memory loss, mood disturbances, psychosis and, in many cases, premature death. It can only be diagnosed after death at autopsy, where researchers examine brain tissue for abnormal protein deposits and signs of widespread degeneration. Despite this limitation, mounting evidence is forcing sports organisations, including rugby authorities, to confront uncomfortable truths about how repeated head trauma can alter lives permanently.
Uncovering the extent of the problem
In 2023, the Boston University CTE Centre released updated autopsy findings from its brain bank. Of 376 former NFL players’ brains studied post-mortem, 345 had been diagnosed with CTE, a staggering 91.7%. While brain banks are inherently subject to selection bias, the results remain alarming. For comparison, a 2018 study of 164 randomly selected brains revealed just one case of CTE.
This brain disease isn’t new. Its earliest descriptions date back to Dr Harrison Martland in 1928, who studied post-mortem findings in boxers and coined the term ‘punch drunk’ to describe their confusion, tremors and cognitive decline. What was once confined to boxing is now known to affect athletes in rugby, football, ice hockey and even military personnel exposed to repeated blast injuries.
Radiology’s role in determining head injuries
Although Computed Tomography (CT) scans are not designed to specifically diagnose concussions, they are crucial to imaging patients with severe concussion or atypical symptoms. ‘CT scans rapidly detect serious issues like fractures, brain swelling and bleeding, providing crucial information for urgent treatment decisions,’ explains Dr Viljoen.
‘Magnetic Resonance Imaging (MRI) is used particularly when concussion symptoms persist or worsen. It excels in identifying subtle injuries, such as microbleeds and brain swelling that may have been missed by CT scans,’ he says.
‘CTE is challenging because currently, it can only be definitively diagnosed after death,’ he explains. ‘However, ongoing research aims to develop methods to detect CTE in living patients, potentially using advanced imaging techniques like Positron Emission Tomography (PET).’ Most research is focused on advancing non-invasive methods to see what is happening inside the brain of a living person and to track it over time.
Advanced imaging methods
Emerging imaging techniques, such as Diffusion Tensor Imaging (DTI), show promise for better understanding and management of head injuries, especially the subtle effects of concussions. ‘DTI helps identify damage to the brain’s white matter, potentially guiding return-to-play decisions and treatment strategies,’ notes Dr Viljoen.
The biomechanics of brain trauma
Former NFL player and biomechanical engineer, David Camarillo, explains in a TED talk that helmets, although effective at preventing skull fractures, do little to stop biomechanical forces from affecting the brain inside the skull.
Camarillo highlights that concussions and the stretching of nerve fibres are more likely to affect the middle of the brain, the corpus callosum, the thick band that facilitates communication between the left and right brain hemispheres. ‘It’s not just bruising,’ he says, ‘we’re talking about dying brain tissue.’
Smart mouthguard technology in rugby
‘Presently,’ says Van Tonder, ‘smart mouthguards are mandatory at elite level. These custom-fitted mouthguards contain accelerometers and gyroscopes that detect straight and rotational forces on the head. Data is transmitted live to medical teams at a rate of 1 000 samples per second.
‘If a threshold is exceeded, an alert is triggered, prompting an immediate Head Injury Assessment (HIA1). Crucially, the system can identify dangerous impacts, even when no symptoms or video evidence is apparent. This is an essential shift in concussion management,’ says Van Tonder. ‘It allows proactive assessments rather than waiting for visible signs.’ World Rugby has committed €2 million to assist teams in adopting this technology and integrating it into HIA1.
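The threshold-and-alert logic Van Tonder describes can be sketched in a few lines. The threshold values and data fields below are hypothetical, chosen only to illustrate the idea; they are not World Rugby’s actual HIA trigger criteria.

```python
from dataclasses import dataclass

# Hypothetical alert thresholds, for illustration only -- not World
# Rugby's actual HIA trigger values.
LINEAR_G_THRESHOLD = 70.0       # peak linear acceleration, in g
ROTATIONAL_THRESHOLD = 4000.0   # peak rotational acceleration, in rad/s^2


@dataclass
class HeadImpactSample:
    """One reading from the mouthguard's accelerometer and gyroscope."""
    linear_g: float           # linear acceleration magnitude (g)
    rotational_rad_s2: float  # rotational acceleration magnitude (rad/s^2)


def impact_triggers_alert(samples: list[HeadImpactSample]) -> bool:
    """Flag an impact for HIA1 review if any sample exceeds a threshold.

    The real system streams roughly 1,000 samples per second to the
    medical team; here we simply scan a captured window of samples.
    """
    return any(
        s.linear_g >= LINEAR_G_THRESHOLD
        or s.rotational_rad_s2 >= ROTATIONAL_THRESHOLD
        for s in samples
    )


# A heavy tackle: one sample spikes past the linear threshold.
window = [HeadImpactSample(12.0, 800.0), HeadImpactSample(85.0, 3100.0)]
print(impact_triggers_alert(window))  # True
```

The key design point is that the alert fires on measured forces alone, which is why the system can flag dangerous impacts even when no symptoms or video evidence are apparent.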
Brain Health Service
The really good news is that in March this year, World Rugby and SA Rugby launched a new Brain Health Service to support former elite South African players. It’s the first service of its kind for South African players, and South Africa is the fourth nation to establish such a system, which supports players in understanding how they can optimise management of their long-term brain health. It includes an awareness and education component, an online questionnaire and a tele-health cognitive assessment with a trained brain health practitioner. The service assesses players for any brain health warning signs, provides a baseline result and advice on managing risk factors, and signposts anyone in need of specialist care.
Super Rugby and smart mouthguards
Super Rugby has revised its smart mouthguard policy, no longer requiring players to leave the field immediately for a HIA when an alert is triggered. The change follows criticism from players and coaches, including Crusaders captain Scott Barrett, who argued the rule could unfairly affect match outcomes. Players must still wear the devices but on-field doctors will assess them first; full HIAs will be conducted at half-time or full-time, if necessary. Further trials are planned to improve the system before reinstating immediate alerts.
Where to from here?
Researchers continue to explore ways to reduce brain movement inside the skull during collisions. One innovative idea includes an airbag neck collar for cyclists, which inflates around the head upon impact. It’s closer to the goal of reducing the brain’s movement – and therefore the risk of concussion. However, regulatory hesitation remains a barrier, with no formal cycling helmet approval process currently in place.
The evidence linking repetitive head impacts to long-term brain degeneration is too compelling to ignore. Rugby, like other contact sports, must continue evolving its protocols, technology and player education to protect athletes at all levels … starting at schools.
While innovations such as smart mouthguards mark significant progress, much remains to be done: From regulatory reform to changing the sporting culture that once downplayed the severity of concussion. Van Tonder notes, ‘We’re behind, but it’s not too late to catch up.’
In rugby, the HIA protocol now consists of three stages:
HIA1: Immediate, sideline assessment during the match.
HIA2: Same-day evaluation within three hours post-match.
HIA3: A more detailed follow-up, typically done 36-48 hours later.