Category: Neurology

Small Molecule Could Restore Sight Lost to Optic Nerve Injury

Eye
Source: Daniil Kuzelev on Unsplash

Central nervous system (CNS) injuries often result in a catastrophic loss of sensory, motor and visual functions, posing one of the most difficult medical challenges today. Neuroscientists report in PNAS that they recently identified a small molecule that can effectively stimulate nerve regeneration and restore visual function after optic nerve injury.

“There is currently no effective treatment available for traumatic injuries to the CNS, so there is an immediate need for potential drugs to promote CNS repair and ultimately achieve full functional recovery, such as visual function, in patients,” said research leader Dr Eddie Ma Chi-him at City University of Hong Kong.

Enhancing mitochondrial dynamics and motility is key for successful axon regeneration

Axons are responsible for transmitting signals between neurons and from the brain to muscles and glands. The first step in successful axon regeneration is the formation of active growth cones and the activation of a regrowth programme, involving the synthesis and transport of materials to regrow axons. These are all energy-demanding processes, which require the active transport of mitochondria (the powerhouses of the cell) to the distal ends of injured axons.

Injured neurons therefore face a special challenge: mitochondria must be transported over long distances from the soma to distal regenerating axons, since axonal mitochondria in adults are mostly stationary and local energy supply is critical for axon regeneration.

A research team led by Dr Ma identified a therapeutic small molecule, M1, which can increase the fusion and motility of mitochondria, resulting in sustained, long-distance axon regeneration. Regenerated axons elicited neural activities in target brain regions and restored visual functions within four to six weeks after optic nerve injury in M1-treated mice.

Small molecule M1 promotes mitochondrial dynamics and sustains long-distance axon regeneration

“Photoreceptors in the eyes [retina] forward visual information to neurons in the retina. To facilitate the recovery of visual function after injury, the axons of the neurons must regenerate through the optic nerve and relay nerve impulses to visual targets in the brain via the optic nerve for image processing and formation,” explained Dr Ma.

To investigate whether M1 could promote long-distance axon regeneration after CNS injuries, the research team assessed the extent of axon regeneration in M1-treated mice four weeks after injury. Strikingly, most of the regenerating axons of M1-treated mice reached 4mm distal to the crush site (ie near the optic chiasm), while no regenerating axons were found in vehicle-treated control mice. In M1-treated mice, the survival of retinal ganglion cells (RGCs, the neurons that transmit visual stimuli from the eye to the brain) was significantly increased, from 19% to 33%, four weeks after optic nerve injury.

“This indicates that the M1 treatment sustains long-distance axon regeneration from the optic chiasm, i.e. midway between the eyes and target brain region, to multiple subcortical visual targets in the brain. Regenerated axons elicit neural activities in target brain regions and restore visual functions after M1 treatment,” Dr Ma added.

M1 treatment restores visual function

To further explore whether M1 treatment can restore visual function, the research team gave the M1-treated mice a pupillary light reflex test six weeks after the optic nerve injury. They found that the lesioned eyes of M1-treated mice recovered the pupil constriction response upon blue light illumination to a level similar to that of non-lesioned eyes, suggesting that M1 treatment restores this reflex after optic nerve injury.

In addition, the research team assessed the response of the mice to a looming stimulus — a visually induced innate defensive response to avoid predators. The mice were placed into an open chamber with a triangular prism-shaped shelter and a rapidly expanding black circle overhead as a looming stimulus, and their freeze and escape behaviours were observed. Half of the M1-treated mice responded to the stimulus by hiding in the shelter, showing that M1 induced robust axon regeneration to reinnervate subcortical visual target brain regions for recovery of visual function.

Potential clinical application of M1 for repairing nervous system injury

The seven-year-long study highlights the potential of a readily available, non-viral therapy for CNS repair, which builds on the team’s previous research on peripheral nerve regeneration using gene therapy.

“This time we used the small molecule, M1, to repair the CNS simply by intravitreal injection into the eyes, which is an established medical procedure for patients, eg for macular degeneration treatment. Successful restoration of visual functions, such as pupillary light reflex and response to looming visual stimuli, was observed in M1-treated mice four to six weeks after the optic nerve had been damaged,” said Dr Au Ngan-pan, Research Associate in the Department of Neuroscience.

The team is also developing animal models to test M1 against glaucoma-related vision loss, and potentially against other common eye diseases and vision impairments such as diabetes-related retinopathy, macular degeneration and traumatic optic neuropathy. Further investigation is warranted to evaluate the potential clinical application of M1. “This research breakthrough heralds a new approach that could address unmet medical needs in accelerating functional recovery within a limited therapeutic time window after CNS injuries,” said Dr Ma.

Source: City University of Hong Kong 

Neuroimaging Can’t Identify Psychiatric Disorders – Yet

MRI images of the brain
Photo by Anna Shvets on Pexels

Neuroimaging technologies hold great promise in helping clinicians link specific symptoms of mental health disorders to abnormal patterns of brain activity. But a new study published in the American Journal of Psychiatry shows there are still kinks to be ironed out before doctors can translate images of the brain to psychiatric disorders such as post-traumatic stress disorder (PTSD).

Several years ago, the National Institute of Mental Health launched a multi-billion-dollar research effort to locate biomarkers of brain activity that point to the biological roots of a host of mental health disorders, which today are typically identified by clinical evaluation of a constellation of often overlapping symptoms reported by patients.

“The idea is to forget classification of disease by symptoms and find underlying biological causes,” said Yale’s Ilan Harpaz-Rotem, professor of psychiatry and psychology and senior author of the study.

For the new study, the Yale-led team attempted to replicate the findings of an earlier nationwide neuroimaging study, in which scientists linked clusters of brain activity to a variety of outcomes among patients who had arrived at US emergency departments following traumatic events. Specifically, when researchers measured patients’ brain activity during the performance of simple tasks such as mapping responses to threats and rewards, they detected a cluster of brain activity that showed high reactivity to both threat and reward signals and seemed to predict more severe symptoms of PTSD later on.

However, when Yale researchers analysed similar neuroimaging data collected from recent trauma survivors in Israel, they were not able to replicate these findings. While they did identify the different clusters of brain activity observed in the earlier study, they found no association with prospective PTSD symptoms.

“That is not to say one set of data is right and the other is wrong, just that there is a lot of fundamental work that needs to be done to develop reliable models that could generalise across different studies,” said Yale’s Ziv Ben-Zion, a postdoctoral associate at Yale School of Medicine and the corresponding author of the study.

In fact, Yale researchers are currently working with the investigators of the original study to merge datasets “to search for common underlying patterns of brain activity associated with different responses to trauma,” Ben-Zion said.

“It took about 100 years to come up with current classifications of mental illness, but we’ve only been exploring refining psychiatric diagnoses using biomarkers for the last 10 years,” said Harpaz-Rotem. “We still have a long way to go.”

Source: Yale University

MRI Scans Reveal How Horror Movies Terrify Us

Photo by Daniel Jensen on Unsplash

Finnish researchers at the University of Turku mapped the brain activity of (un)lucky participants who watched two of the highest rated horror movies of the last 100 years.

Humans are fascinated by things that scare them, such as death-defying stunts and true crime documentaries, provided these sources of fear are kept at a safe distance. Horror movies are no different, providing a relentless villain, such as Jason in Friday the 13th, or a supernatural threat.

For their study into cinematic terror, published in the journal NeuroImage, the researchers first established the 100 best and scariest horror movies of the past century, and how they made people feel.

Unseen threats are the scariest

Firstly, 72% of people report watching at least one horror movie every six months, and the primary reason for doing so, besides the feelings of fear and anxiety, was excitement. Watching horror movies was also an excuse to socialise, with many people preferring to watch horror movies with others rather than on their own.

People found horror that was psychological in nature and based on real events the scariest, and were far more scared by things that were unseen or implied rather than what they could actually see.

“This latter distinction reflects two types of fear that people experience. The creeping foreboding dread that occurs when one feels that something isn’t quite right, and the instinctive response we have to the sudden appearance of a monster that makes us jump out of our skin,” says principal investigator, Professor Lauri Nummenmaa from Turku PET Centre.

MRI reveals different types of fear

Researchers wanted to know how the brain copes with fear in response to this complicated and ever changing environment. The group had people watch two horror movies (The Conjuring 2, 2016, and Insidious, 2010; both directed by James Wan) whilst measuring neural activity in a magnetic resonance imaging scanner.

During those times when anxiety is slowly increasing, regions of the brain involved in visual and auditory perception become more active, as the need to attend to cues of threat in the environment becomes more important. After a sudden shock, brain activity is more evident in regions involved in emotion processing, threat evaluation, and decision making, enabling a rapid response.

However, these regions are in continuous talk-back with sensory regions throughout the movie, as if the sensory regions were preparing response networks as a scary event was becoming increasingly likely.

“Therefore, our brains are continuously anticipating and preparing us for action in response to threat, and horror movies exploit this expertly to enhance our excitement,” explains Researcher Matthew Hudson.

Source: University of Turku

Recognising a Voice is Easier with a Face

To recognise a famous voice, human brains use the same centre that is activated when the speaker’s face is presented, according to the results of an innovative neuroscience study which asked participants to identify US presidents.

The new study, published in the Journal of Neurophysiology, suggests that voice and face recognition are linked even more intimately than previously thought. It offers an intriguing possibility that visual and auditory information relevant to identifying someone feeds into a common brain centre, allowing for more robust, well-rounded recognition by integrating separate modes of sensation.

“From behavioural research, we know that people can identify a familiar voice faster and more accurately when they can associate it with the speaker’s face, but we never had a good explanation of why that happens,” said senior author Taylor Abel, MD, associate professor of neurological surgery at the University of Pittsburgh School of Medicine. “In the visual cortex, specifically in the part that typically processes faces, we also see electrical activity in response to famous people’s voices, highlighting how deeply the two systems are interlinked.”

Even though the interplay between the auditory and the visual brain processing systems has been widely acknowledged and investigated by various teams of neuroscientists all over the world, those systems were traditionally thought to be structurally and spatially distinct.

Few studies have attempted to directly measure activity from the brain centre – which primarily consolidates and processes visual information – to determine whether this centre is also engaged when participants are exposed to famous voice stimuli.

Researchers recruited epilepsy patients who had been implanted with electrodes measuring brain activity to determine the source of their seizures.

Abel and his team showed five participants photographs of three US presidents – Bill Clinton, George W. Bush and Barack Obama – or played short recordings of their voices, and asked participants to identify them.

Recordings of the electrical activity from the region of the brain responsible for processing visual cues (the fusiform gyri) showed that the same region became active when participants heard familiar voices, though that response was lower in magnitude and slightly delayed.

“This is important because it shows that auditory and visual areas interact very early when we identify people, and that they don’t work in isolation,” said Abel. “In addition to enriching our understanding of the basic functioning of the brain, our study explains the mechanisms behind disorders where voice or face recognition is compromised, such as in some dementias or related disorders.”

Source: University of Pittsburgh

Newly Discovered Subarachnoidal Layer Protects the Brain

Advances in neuro-imaging and molecular biology have unearthed a subtle, previously unknown layer in the brain. As described in the journal Science, the newly discovered layer forms a previously unknown component of brain anatomy that acts as both a protective barrier and platform from which immune cells monitor the brain for infection and inflammation.

“The discovery of a new anatomic structure that segregates and helps control the flow of cerebrospinal fluid (CSF) in and around the brain now provides us much greater appreciation of the sophisticated role that CSF plays not only in transporting and removing waste from the brain, but also in supporting its immune defenses,” said Maiken Nedergaard, co-director of the Center for Translational Neuromedicine at University of Rochester and the University of Copenhagen. Nedergaard and her colleagues have made significant findings in the field of neuroscience, including detailing the many critical functions of previously overlooked cells in the brain called glia and the brain’s unique process of waste removal, which the lab named the glymphatic system.

The study focuses on the series of membranes that encase the brain, creating a barrier from the rest of the body and keeping the brain bathed in CSF. The traditional understanding of what is collectively called the meningeal layer identifies the three individual layers as dura, arachnoid, and pia mater.

This new layer discovered by the international research team further divides the space between the arachnoid and pia layers, the subarachnoid space, into two compartments, separated by the newly described layer, which the researchers named SLYM (Subarachnoidal LYmphatic-like Membrane). While the paper mostly describes the function of SLYM in mice, it also reports its presence in the adult human brain.

SLYM is a type of membrane called mesothelium, which lines other organs in the body, including the lungs and heart. These membranes typically surround and protect organs, and harbour immune cells.

The new membrane is very thin and delicate, only a few cells thick. Yet SLYM is a tight barrier, allowing only very small molecules to transit, and it also seems to separate “clean” and “dirty” CSF. This last observation hints at the likely role played by SLYM in the glymphatic system, which requires a controlled flow and exchange of CSF, allowing the influx of fresh CSF while flushing the toxic proteins associated with Alzheimer’s and other neurological diseases from the central nervous system. This discovery will help researchers more precisely understand the mechanics of the glymphatic system.

Central nervous system immune cells (indicated here expressing CD45) use SLYM as a platform close to the brain’s surface to monitor cerebrospinal fluid for signs of infection and inflammation.

The SLYM also appears important to the brain’s defences.  The central nervous system has its own native population of immune cells, and the membrane’s integrity prevents outside immune cells from entering.  In addition, the membrane appears to host its own population of central nervous system immune cells that use SLYM as an observation point close to the surface of the brain from which to scan passing CSF for signs of infection or inflammation. 

Discovery of the SLYM opens the door for further study of its role in brain disease.  For example, the researchers note that larger and more diverse concentrations of immune cells congregate on the membrane during inflammation and aging.  Furthermore, when the membrane was ruptured during traumatic brain injury, the resulting disruption in the flow of CSF impaired the glymphatic system and allowed non-central nervous system immune cells to enter the brain. 

These and similar observations suggest that diseases as diverse as multiple sclerosis, central nervous system infections, and Alzheimer’s might be triggered or worsened by abnormalities in SLYM function. They also suggest that the delivery of drugs and gene therapeutics to the brain may be impacted by SLYM, which will need to be considered as new generations of biologic therapies are being developed.

Source: University of Rochester Medical Center

Memory Loss and Confusion More Common among Middle-aged Smokers

Photo by Elsa Olofsson on Unsplash

Middle-aged smokers are much more likely to report having memory loss and confusion than nonsmokers, and the likelihood of cognitive decline is lower for those who have quit, even recently, according to a new study appearing in the Journal of Alzheimer’s Disease.

The study is the first to examine the relationship between smoking and cognitive decline using a one-question self-assessment asking people if they’ve experienced worsening or more frequent memory loss and/or confusion.

The findings build on previous research that established relationships between smoking and Alzheimer’s Disease and other forms of dementia, and could point to an opportunity to identify signs of trouble earlier in life, said Jenna Rajczyk, lead author of the study.

It’s also one more piece of evidence that quitting smoking is good not just for respiratory and cardiovascular reasons, but to preserve neurological health, said Rajczyk, a PhD student in Ohio State’s College of Public Health, and senior author Jeffrey Wing, assistant professor of epidemiology.

“The association we saw was most significant in the 45–59 age group, suggesting that quitting at that stage of life may have a benefit for cognitive health,” Wing said. A similar difference wasn’t found in the oldest group in the study, which could mean that quitting earlier affords people greater benefits, he said.

Researchers used data from the 2019 Behavioral Risk Factor Surveillance System Survey to compare subjective cognitive decline (SCD) measures for current smokers, recent former smokers, and those who had quit years earlier. The analysis included 136 018 people aged 45 and older, about 11% of whom reported SCD.

The prevalence of SCD among smokers in the study was almost 1.9 times that of nonsmokers. The prevalence among those who had quit less than 10 years ago was 1.5 times that of nonsmokers. Those who quit more than a decade before the survey had an SCD prevalence just slightly above the nonsmoking group.

“These findings could imply that the time since smoking cessation does matter, and may be linked to cognitive outcomes,” Rajczyk said.

The simplicity of SCD, a relatively new measure, could lend itself to wider applications, she said.

“This is a simple assessment that could be easily done routinely, and at younger ages than we typically start to see cognitive declines that rise to the level of a diagnosis of Alzheimer’s Disease or dementia,” Rajczyk said. “It’s not an intensive battery of questions. It’s more a personal reflection of your cognitive status to determine if you’re feeling like you’re not as sharp as you once were.”

Many people don’t have access to more in-depth screenings, or to specialists, making the potential applications for measuring SCD even greater, she said.

Wing said it’s important to note that these self-reported experiences don’t amount to a diagnosis, nor do they confirm independently that a person is experiencing decline out of the normal ageing process. But, he said, they could be a low-cost, simple tool to consider employing more broadly.

Source: Ohio State University

Greater Cognitive Skills in Children who Play More Video Games

Photo by Igor Karimov on Unsplash

Analysing magnetic resonance imaging (MRI) brain scans of nearly 2000 children, researchers found children who played video games for three or more hours a day did better in cognitive skills tests involving impulse control and working memory compared to children who had never played video games. Published in JAMA Network Open, this study analysed data from the ongoing Adolescent Brain Cognitive Development (ABCD) Study, which is supported by the National Institute on Drug Abuse and other entities of the National Institutes of Health.

“This study adds to our growing understanding of the associations between playing video games and brain development,” said National Institute on Drug Abuse (NIDA) Director Nora Volkow, MD. “Numerous studies have linked video gaming to behaviour and mental health problems. This study suggests that there may also be cognitive benefits associated with this popular pastime, which are worthy of further investigation.”

Although a number of studies have investigated the relationship between video gaming and cognitive behaviour, the neurobiological mechanisms underlying the associations are not well understood. Only a handful of neuroimaging studies have addressed this topic, and the sample sizes for those studies have been small, with fewer than 80 participants.

To address this research gap, scientists at the University of Vermont, Burlington, analysed data obtained when children entered the ABCD Study at ages 9 and 10 years old. The research team examined survey, cognitive, and brain imaging data from nearly 2000 participants from within the bigger study cohort, comparing those who reported playing no video games at all and those who reported playing video games for three hours per day or more. This threshold was selected as it exceeds the American Academy of Paediatrics screen time guidelines, which recommend limiting videogames to one to two hours per day for older children. Researchers assessed their performance in two tasks that reflected the children’s ability to control impulsive behaviour and to memorise information, as well as brain activity while performing the tasks.

The researchers found that the children who reported playing video games for three or more hours per day were faster and more accurate on both cognitive tasks than those who never played. They also observed that the differences in cognitive function between the two groups were accompanied by differences in brain activity. Functional MRI brain scans found that children who played video games for three or more hours per day showed higher brain activity in regions of the brain associated with attention and memory than never-gamers did. At the same time, those children showed more brain activity in frontal brain regions associated with more cognitively demanding tasks, and less brain activity in brain regions related to vision.

The researchers think these patterns may stem from practising tasks related to impulse control and memory while playing video games, which can be cognitively demanding, and that these changes may lead to improved performance on related tasks. Furthermore, the comparatively low activity in visual areas among children who reported playing video games may reflect that this area of the brain becomes more efficient at visual processing as a result of repeated practice through video games.

While prior studies have reported associations between video gaming and increases in depression, violence, and aggressive behaviour, this study did not find that to be the case. The children who played three or more hours per day tended to report more mental health and behavioural issues than the non-gaming children, but the difference was not statistically significant. The researchers note that this will be an important measure to continue to track and understand as the children mature.

Further, the researchers stress that this cross-sectional study does not allow for cause-and-effect analyses, and that it could be that children who are good at these types of cognitive tasks may choose to play video games. The authors also emphasise that their findings do not mean that children should spend unlimited time on their computers, mobile phones, or TVs, and that the outcomes likely depend largely on the specific activities children engage in. For instance, they hypothesise that the specific genre of video games, such as action-adventure, puzzle solving, sports, or shooting games, may have different effects for neurocognitive development, and this level of specificity on the type of video game played was not assessed by the study.

“While we cannot say whether playing video games regularly caused superior neurocognitive performance, it is an encouraging finding, and one that we must continue to investigate in these children as they transition into adolescence and young adulthood,” said Bader Chaarani, PhD, assistant professor of psychiatry at the University of Vermont and the lead author on the study. “Many parents today are concerned about the effects of video games on their children’s health and development, and as these games continue to proliferate among young people, it is crucial that we better understand both the positive and negative impact that such games may have.”

Through the ABCD Study, researchers will be able to track these children into young adulthood, looking for gaming-related changes in cognitive skills, brain activity, behaviour, and mental health.

Source: National Institutes of Health

Improving Short Term Memory Problems – with Laser Light

Photo by Cottonbro on Pexels

UK and Chinese scientists have demonstrated that laser light therapy is effective in improving short-term memory in a study published in Science Advances. The innovative, non-invasive therapy could improve short-term, or working, memory in people by up to 25%.

The treatment, termed transcranial photobiomodulation (tPBM), is applied to the right prefrontal cortex, an area important for working memory. In their experiment, the team showed how working memory improved among research participants after several minutes of treatment. They were also able to track the changes in brain activity using electroencephalogram (EEG) monitoring during treatment and testing.

Previous studies have shown that laser light treatment can improve working memory in mice, and human studies have shown tPBM treatment can improve accuracy, speed up reaction times and improve higher-order functions such as attention and emotion. This is the first study, however, to confirm a link between tPBM and working memory in humans.

Co-author Dongwei Li, a visiting PhD student, said, “People with conditions like ADHD (attention deficit hyperactivity disorder) or other attention-related conditions could benefit from this type of treatment, which is safe, simple and non-invasive, with no side-effects.”

In the study, researchers at Beijing Normal University carried out experiments with 90 male and female participants aged between 18 and 25. Some participants were treated with laser light to the right prefrontal cortex at a wavelength of 1064 nm, while others were treated at a shorter wavelength or on the left prefrontal cortex. Each participant was also treated with a sham, or inactive, tPBM to rule out the placebo effect.

After tPBM treatment over 12 minutes, the participants were asked to remember the orientations or colours of a set of items displayed on a screen. The participants treated with laser light to the right prefrontal cortex at 1064 nm showed clear improvements in memory over those who had received the other treatments. While participants receiving the other treatment variations were able to remember between three and four of the test objects, those with the targeted treatment were able to recall between four and five objects.

Data, including electroencephalogram (EEG) recordings collected during the experiment, were analysed at the University of Birmingham and showed changes in brain activity that also predicted the improvements in memory performance.

The researchers do not yet know precisely why the treatment results in positive effects on working memory, nor how long the effects will last. Further research is planned to investigate these aspects.

Professor Ole Jensen, also at the Centre for Human Brain Health, said, “We need further research to understand exactly why the tPBM is having this positive effect, but it’s possible that the light is stimulating the mitochondria – the powerplants – in the nerve cells within the prefrontal cortex, and this has a positive effect on the cells’ efficiency. We will also be investigating how long the effects might last. Clearly if these experiments are to lead to a clinical intervention, we will need to see long-lasting benefits.”

Source: University of Birmingham

Scientists Unravel The Neurology Underlying Soothing Touch

Man wearing mask with headache
Source: Usman Yousaf on Unsplash

People can achieve some pain relief by rubbing or pressing a part of their body associated with the pain. Observing for the first time how this phenomenon plays out in the brains of mice, MIT scientists suggest that pain-responsive cells in the brain quiet down when these neurons also receive touch inputs.

The team’s discovery, reported in the journal Science Advances, offers researchers a deeper understanding of the complicated relationship between pain and touch and could offer some insights into chronic pain in humans. “We’re interested in this because it’s a common human experience,” says investigator Fan Wang. “When some part of your body hurts, you rub it, right? We know touch can alleviate pain in this way.” But, she says, the phenomenon has been very difficult for neuroscientists to study.

Modelling pain relief

The spinal cord may be where touch-mediated pain relief begins, as prior studies have found pain-responsive neurons there that reduce their activity in response to touch. But there have been hints that the brain is involved, too. Wang says this aspect of the response has been largely unexplored, because it can be hard to monitor the brain’s response to painful stimuli amidst all the other neural activity happening there, particularly when an animal moves.

So while her team knew that mice respond to a potentially painful stimulus on the cheek by wiping their faces with their paws, they couldn’t follow the specific pain response in the animals’ brains to see if that rubbing helped settle it down. “If you look at the brain when an animal is rubbing the face, movement and touch signals completely overwhelm any possible pain signal,” Wang explains.

She and her colleagues have found a way around this obstacle. Instead of studying the effects of face-rubbing, they have focused their attention on a subtler form of touch: the gentle vibrations produced by the movement of the animals’ whiskers. Mice use their whiskers to explore, moving them back and forth in a rhythmic motion known as whisking to feel out their environment. This motion activates touch receptors in the face and sends information to the brain in the form of vibrotactile signals. The human brain receives the same kind of touch signals when a person shakes their hand as they pull it back from a painfully hot pan — another way we seek touch-mediated pain relief.

Whisking away pain

Wang and her colleagues found that this whisker movement alters the way mice respond to bothersome heat or a poke on the face – both of which usually lead to face rubbing. “When the unpleasant stimuli were applied in the presence of their self-generated vibrotactile whisking … they respond much less,” she says. Sometimes, she says, whisking animals entirely ignore these painful stimuli.

In the brain’s somatosensory cortex, where touch and pain signals are processed, the team found signalling changes that seem to underlie this effect. “The cells that preferentially respond to heat and poking are less frequently activated when the mice are whisking,” Wang says. “They’re less likely to show responses to painful stimuli.” Even when whisking animals did rub their faces in response to painful stimuli, the team found that neurons in the brain took longer to adopt the firing patterns associated with that rubbing movement. “When there is a pain stimulation, usually the trajectory of the population dynamics quickly moves to wiping. But if you already have whisking, that takes much longer,” Wang says.

Wang notes that even in the fraction of a second before provoked mice begin rubbing their faces, when the animals are relatively still, it can be difficult to sort out which brain signals are related to perceiving heat and poking and which are involved in whisker movement. Her team developed computational tools to disentangle these, and are hoping other neuroscientists will use the new algorithms to make sense of their own data.

Whisking’s effects on pain signalling seem to depend on dedicated touch-processing circuitry that sends tactile information to the somatosensory cortex from the ventral posterior thalamus. When that pathway was blocked, whisking no longer dampened the animals’ response to painful stimuli. Now, Wang says, she and her team are eager to learn how this circuitry works with other parts of the brain to modulate the perception and response to painful stimuli.

The new findings might shed light on a condition called thalamic pain syndrome, a chronic pain disorder that can develop in patients after a stroke that affects the brain’s thalamus, says Wang. “Such strokes may impair the functions of thalamic circuits that normally relay pure touch signals and dampen painful signals to the cortex.”

Source: MIT

Hyperbaric Therapy Reduces Neuroinflammation in Autism

Depiction of a human brain
Image by Fakurian Design on Unsplash

A new study at Tel Aviv University showed significant improvements in social skills and the condition of the autistic brain through hyperbaric therapy. The study, which is reported in the International Journal of Molecular Sciences, was conducted on lab models of autism.

Hyperbaric medicine, where patients sit in special high-pressure chambers while breathing pure oxygen, is considered safe and, besides treating decompression sickness in divers, is already in use for other conditions. The use of hyperbaric medicine to treat autism is contentious, with many holding that it is based on pseudoscience. In recent years, scientific evidence has been accumulating that unique protocols of hyperbaric treatments improve the supply of blood and oxygen to the brain, thereby improving brain function.

Changes observed in the brain included a reduction in neuroinflammation, which is known to be associated with autism. A significant improvement was also found in the social functioning of the animal models treated in the pressure chamber. The study’s success has many implications regarding the applicability and understanding of treating autism using pressure chamber therapy.

The breakthrough was led by doctoral student Inbar Fischer, from the laboratory of Dr Boaz Barak of Tel Aviv University.

Improved brain function

“The medical causes of autism are numerous and varied, and ultimately create the diverse autistic spectrum with which we are familiar,” explains Dr Barak. “About 20% of autistic cases today are explained by genetic causes, that is, those involving genetic defects, but not necessarily ones that are inherited from the parents. Despite the variety of sources of autism, the entire spectrum of behavioural problems associated with it is still included under the single broad heading of ‘autism,’ and the treatments and medications offered do not necessarily correspond directly to the reason why the autism developed.”

In the preliminary phase of the study, a girl carrying the mutation in the SHANK3 gene, which is known to lead to autism, received treatments in the pressure chamber, conducted by Prof Shai Efrati. After the treatments, it was evident that the girl’s social abilities and brain function had improved considerably.

In the next stage, and in order to comprehend the success of the treatment more deeply, the team of researchers at Dr Barak’s laboratory sought to understand what being in a pressurised chamber does to the brain. To this end, the researchers used lab models carrying the same genetic mutation in the SHANK3 gene as that carried by the girl who had been treated. The experiment comprised a protocol of 40 one-hour treatments in a pressure chamber over several weeks.

“We discovered that treatment in the oxygen-enriched pressure chamber reduces inflammation in the brain and leads to an increase in the expression of substances responsible for improving blood and oxygen supply to the brain, and therefore brain function,” explains Dr Barak. “In addition, we saw a decrease in the number of microglial cells, immune system cells that indicate inflammation, which is associated with autism.”

Increased social interest

“Beyond the neurological findings we discovered, what interested us more than anything was to see whether these improvements in the brain also led to an improvement in social behaviour, which is known to be impaired in autistic individuals,” adds Dr Barak. “To our surprise, the findings showed a significant improvement in the social behaviour of the animal models of autism that underwent treatment in the pressure chamber compared to those in the control group, who were exposed to air at normal pressure, and without oxygen enrichment. The animal models that underwent treatment displayed increased social interest, preferring to spend more time in the company of new animals to which they were exposed in comparison to the animal models from the control group.”

Inbar Fischer concludes, “The mutation in the animal models is identical to the mutation that exists in humans. Therefore, our research is likely to have clinical implications for improving the pathological condition of autism resulting from this genetic mutation, and likely also of autism stemming from other causes. Because the pressure chamber treatment is non-intrusive and has been found to be safe, our findings are encouraging and demonstrate that this treatment may improve these behavioural and neurological aspects in humans as well, in addition to offering a scientific explanation of how they occur in the brain.”

Source: Tel Aviv University