Category: Neurology

Unlocking the Complex Neurological Puzzle of Depression

Source: Pixabay

By studying the brains of fruit flies, which share similar mechanisms with human brains, scientists at Johannes Gutenberg University Mainz (JGU) are attempting to gain a better understanding of depression-like states and thus improve the means of treating them. Their findings include the effects of traditional Asian medicine and its mode of preparation, and the effect of timing, such as getting a reward in the evening rather than at other times of the day. The results were published recently in the journal Current Biology.

One aspect of their research concerns natural substances. “We have been looking at the effects of natural substances used in traditional Asian medicine, such as in Ayurveda, in our Drosophila fly model,” explained Professor Roland Strauss of JGU. “Some of these could have an anti-depressive potential or prophylactically strengthen resilience to chronic stress, so that a depression-like state might not even develop.”

The researchers intend to demonstrate efficacy, find optimal formulations, and isolate the active substances from the plant, which could lead to new drugs.

“In the Drosophila model we can pinpoint exactly where these substances are active because we are able to analyse the entire signalling chain,” Strauss pointed out. “Furthermore, every stage in the signalling pathway can also be verified.” The researchers subject the flies to a mild form of recurrent stress, such as irregular phases of vibration of the substrate. This treatment results in the development of a depression-like state (DLS) in the flies, ie, they move more slowly, do not stop to examine unexpectedly encountered sugar and, unlike their more relaxed counterparts, are less willing to climb over wide gaps. Whether or not a natural substance has an effect depends on its preparation, eg, whether it has been extracted with water or with alcohol.

The research team has also discovered that rewarding the flies for 30 minutes on the evening of a stressful day – by offering them food with a higher sugar content than usual, or by activating the reward signalling pathway – can prevent a DLS from developing. Flies have sugar receptors on their tarsi (the lower part of their legs) and on their proboscis, and the end of the signalling pathway, at which serotonin is released onto the mushroom body (equivalent to the human hippocampus), has also been located.

The researchers’ investigations showed that the pathway was considerably more complex than anticipated. Three different neurotransmitter systems have to be activated before the serotonin deficiency at the mushroom body, which is present in flies in a DLS, is compensated for by the reward. One of these three systems is the dopaminergic system, which also signals reward in humans – though humans might obtain their reward through something other than sugar.

Preventing depression by boosting resilience

In addition, the researchers decided to look for resilience factors in the fly genome. The team intends to find out whether and how the genomes of flies that are able to better cope with stress differ from those that develop a DLS in response to exposure to recurrent mild stress. The hope is that in the future it will be possible to diagnose genetic susceptibility to depression in humans – and then treat this with the natural substances that are also being investigated during the project.

Source: Johannes Gutenberg Universitaet Mainz

Algorithm Rapidly Assesses Level of Consciousness in ICU Patients

Source: Pixabay CC0

Neurological assessment of an ICU patient’s level of consciousness is an important but time-consuming task that may take up to an hour. Now, researchers have developed an algorithm that can accurately track patients’ level of consciousness based on simple physiological markers that are already routinely monitored in hospital settings.

The work, published in Neurocritical Care, may eventually yield a way to reduce the strain on medical staff, and could also provide vital new data to guide clinical decisions and enable the development of new treatments.

“Consciousness isn’t a light switch that’s either on or off – it’s more like a dimmer switch, with degrees of consciousness that change over the course of the day,” said Associate Prof Samantha Kleinberg at Stevens Institute of Technology. “If you only check patients once per day, you just get one data point. With our algorithm, you could track consciousness continuously, giving you a far clearer picture.”

To develop their algorithm, A/Prof Kleinberg’s team gathered a variety of data, from simple heart rate monitors up to sophisticated devices that measure brain temperature, and used these data to forecast the results of a clinician’s assessment of a patient’s level of consciousness. Even using just the simplest physiological data, the algorithm proved as accurate as a trained clinical examiner, and only slightly less accurate than more sophisticated tests such as MRI.

“That’s hugely important, because it means this tool could potentially be deployed in virtually any hospital setting – not just neurological ICUs where they have more sophisticated technology,” A/Prof Kleinberg explained. The algorithm could be installed as a simple software module on existing bedside patient-monitoring systems, she noted, making it relatively cheap and easy to roll out at scale.
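The paper’s code is not reproduced here, but the core idea – a supervised model that maps routinely monitored vital signs to a clinician-assessed consciousness score – can be sketched in a few lines. Everything below (the feature set, the three-level score, the synthetic data and the model choice) is an assumption for illustration, not the authors’ implementation.

```python
# Minimal sketch (not the authors' code): map routinely monitored vital signs
# to a clinician-assessed level of consciousness. Features, label scale and
# data are all hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dataset: one row per monitoring window, using only signals a
# standard bedside monitor already records.
n = 1000
X = np.column_stack([
    rng.normal(80, 12, n),    # heart rate (bpm)
    rng.normal(16, 4, n),     # respiratory rate (breaths/min)
    rng.normal(97, 2, n),     # SpO2 (%)
    rng.normal(37.0, 0.5, n), # temperature (deg C)
])
# Hypothetical label: the clinician's assessment at the end of each window
# (0 = unresponsive, 1 = diminished, 2 = conscious), random here.
y = rng.integers(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Once trained, the model could score every new monitoring window, yielding a
# continuous consciousness track between clinician assessments.
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```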

Besides giving doctors better clinical information, and patients’ families a clearer idea of their loved ones’ prognosis, continuous monitoring could help to drive new research and ultimately improve patient outcomes.

“Consciousness is incredibly hard to study, and part of the reason is that there simply isn’t much data to work with,” said A/Prof Kleinberg. “Having round-the-clock data showing how patients’ consciousness changes could one day make it possible to treat these patients far more effectively.”

More work will be needed before the algorithm can be rolled out in clinical settings. It was trained on data collected immediately prior to a clinician’s assessment, and further development will be needed to show that it can accurately track consciousness around the clock. Additional data will also be required to train the algorithm for use in other clinical settings such as paediatric ICUs.

A/Prof Kleinberg also hopes to improve the algorithm’s accuracy by cross-referencing different kinds of physiological data, and studying the way they coincide or lag one another over time. Some such relationships are known to correlate with consciousness, potentially making it possible to validate the algorithm’s consciousness ratings during periods when assessments by human clinicians aren’t available.

Source: Stevens Institute of Technology

Gene Therapy Partially Restores Cone Function in Achromatopsia

Source: Daniil Kuzelev on Unsplash

University College London researchers have used gene therapy to partially restore the function of cone receptors in two children with achromatopsia, a rare genetic disorder which can cause partial or complete colourblindness.

The findings, published in Brain, suggest that treatment activates previously dormant communication links between the retina and the brain, thanks to the developing adolescent brain’s plastic nature.

The academically led study has been running alongside two phase 1/2 clinical trials in children with achromatopsia, using a new way to test whether the treatment is changing the neural pathways specific to the cones.

Achromatopsia is caused by disease-causing variants in one of a few genes. Because it affects the cones in the retina, which are responsible for colour vision, people with achromatopsia are completely colourblind; they also have very poor vision and photophobia. Their cone cells do not send signals to the brain, but many remain present, so researchers have been seeking to activate the dormant cells.

Lead author Dr Tessa Dekker said: “Our study is the first to directly confirm widespread speculation that gene therapy offered to children and adolescents can successfully activate the dormant cone photoreceptor pathways and evoke visual signals never previously experienced by these patients.

“We are demonstrating the potential of leveraging the plasticity of our brains, which may be particularly able to adapt to treatment effects when people are young.”

The study involved four young people with achromatopsia aged 10 to 15 years old.

The two trials, which each target a different gene implicated in achromatopsia, are testing gene therapies with the primary aim of establishing that the treatment is safe, while also testing for improved vision. Their results have not yet been fully compiled so the overall effectiveness of the treatments remains to be determined.

The accompanying academic study used a novel functional magnetic resonance imaging (fMRI) mapping approach to separate emerging post-treatment cone signals from existing rod-driven signals in patients, allowing the researchers to attribute any post-treatment changes in visual function directly to the targeted cone photoreceptor system. They employed a ‘silent substitution’ technique using pairs of lights to selectively stimulate cones or rods. The researchers also had to adapt their methods to accommodate eye movements due to nystagmus, another symptom of achromatopsia. The results were compared to tests involving nine untreated patients and 28 volunteers with normal vision.

Each of the four children was treated with gene therapy in one eye, enabling doctors to compare the treatment’s effectiveness with the untreated eye.

For two of the four children, there was strong evidence for cone-mediated signals in the brain’s visual cortex coming from the treated eye, six to 14 months after treatment. Before the treatment, the patients showed no evidence of cone function on any tests. After treatment, their measures closely resembled those from normal sighted study participants.

The study participants also completed a test to distinguish between different levels of contrast. This showed there was a difference in cone-supported vision in the treated eyes in the same two children.

The researchers say they cannot confirm whether the treatment was ineffective in the other two study participants, or if there may have been treatment effects that were not picked up by the tests they used, or if effects are delayed.

Co-lead author Dr Michel Michaelides (UCL Institute of Ophthalmology and Moorfields Eye Hospital), who is also co-investigator on both clinical trials, said: “In our trials, we are testing whether providing gene therapy early in life may be most effective while the neural circuits are still developing. Our findings demonstrate unprecedented neural plasticity, offering hope that treatments could enable visual functions using signalling pathways that have been dormant for years.

“We are still analysing the results from our two clinical trials, to see whether this gene therapy can effectively improve everyday vision for people with achromatopsia. We hope that with positive results, and with further clinical trials, we could greatly improve the sight of people with inherited retinal diseases.”

Dr Dekker added: “We believe that incorporating these new tests into future clinical trials could accelerate the testing of ocular gene therapies for a range of conditions, by offering unparalleled sensitivity to treatment effects on neural processing, while also providing new and detailed insight into when and why these therapies work best.”

One of the study participants commented: “Seeing changes to my vision has been very exciting, so I’m keen to see if there are any more changes and where this treatment as a whole might lead in the future.

“It’s actually quite difficult to imagine what or just how many impacts a big improvement in my vision could have, since I’ve grown up with and become accustomed to low vision, and have adapted and overcome challenges (with a lot of support from those around me) throughout my life.”

Source: University College London

Smartphone Use may Help with Memory Skills

Photo by Priscilla du Preez on Unsplash

Instead of causing people to become lazy or forgetful, the use of smartphones and other digital devices could help improve memory skills, report the authors of a new study published in Journal of Experimental Psychology: General.

The research showed that digital devices help people to store and recall crucial information. This, in turn, frees up their memory to remember additional, less important things.

Neuroscientists have previously expressed concerns that the overuse of technology could result in the breakdown of cognitive abilities and cause ‘digital dementia’.

The findings show that, on the contrary, using a digital device as external memory not only helps people to remember the information saved into the device, but it also helps them to remember unsaved information too.

To demonstrate this, researchers developed a memory task to be played on a touchscreen digital tablet or computer. The test was undertaken by 158 volunteers aged between 18 and 71.

Participants were shown up to 12 numbered circles on the screen, and had to remember to drag some of these to the left and some to the right. The number of circles that they remembered to drag to the correct side determined their pay at the end of the experiment. One side was designated “high value,” meaning that remembering to drag a circle to this side was worth 10 times as much money as remembering to drag a circle to the other “low value” side.

Participants performed this task 16 times. They had to use their own memory to remember on half of the trials and they were allowed to set reminders on the digital device for the other half.

The results showed that participants tended to use the digital devices to store the details of the high-value circles and, when they did so, their memory for those circles was improved by 18%. Their memory for low-value circles was also improved, by 27%, even in people who had never set any reminders for low-value circles.

However, results also showed a potential cost to using reminders. When they were taken away, the participants remembered the low-value circles better than the high-value ones, showing that they had entrusted the high-value circles to their devices and then forgotten about them.

Senior author Dr Sam Gilbert said, “We wanted to explore how storing information in a digital device could influence memory abilities.

“We found that when people were allowed to use an external memory, the device helped them to remember the information they had saved into it. This was hardly surprising, but we also found that the device improved people’s memory for unsaved information as well.

“This was because using the device shifted the way that people used their memory to store high-importance versus low-importance information. When people had to remember by themselves, they used their memory capacity to remember the most important information. But when they could use the device, they saved high-importance information into the device and used their own memory for less important information instead.

“The results show that external memory tools work. Far from causing ‘digital dementia,’ using an external memory device can even improve our memory for information that we never saved. But we need to be careful that we back up the most important information. Otherwise, if a memory tool fails, we could be left with nothing but lower-importance information in our own memory.”
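A toy calculation makes Dr Gilbert’s explanation concrete. The memory capacity, circle counts and payoff values below are illustrative assumptions, not the study’s parameters.

```python
# Toy model (our illustration, not the study's code): offloading the
# high-value circles to a device frees limited internal memory for the
# low-value ones. All numbers are assumptions for illustration.
HIGH_VALUE, LOW_VALUE = 10, 1  # high-value side pays 10x the low-value side
CAPACITY = 4                   # circles a participant can hold in memory

def trial_pay(use_reminders: bool, n_high: int = 3, n_low: int = 9) -> int:
    """Payoff for one trial of the drag-the-circles task."""
    if use_reminders:
        # High-value circles are saved to the device and always recalled;
        # all internal capacity is spent on low-value circles.
        recalled_high = n_high
        recalled_low = min(CAPACITY, n_low)
    else:
        # Unaided memory: the most valuable circles are prioritised first.
        recalled_high = min(CAPACITY, n_high)
        recalled_low = min(CAPACITY - recalled_high, n_low)
    return recalled_high * HIGH_VALUE + recalled_low * LOW_VALUE

print("pay without reminders:", trial_pay(False))  # 31: 3 high + 1 low
print("pay with reminders:   ", trial_pay(True))   # 34: 3 high + 4 low
```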

Source: University College London

Why do We Struggle to Recognise the Faces of People of Other Races?

An Asian man and two white men enjoying St. Patrick’s Day Photo by Pressmaster on Pexels

In a study published in Scientific Reports, cognitive psychologists at the University of Exeter believe they have discovered the answer to the 60-year-old question of why people find it more difficult to recognise faces from visually distinct racial backgrounds than those from their own.

This phenomenon, named the Other-Race Effect (ORE), was first described in the 1960s. Humans seem to use a variety of markers to recognise people, rather than photographically memorising their faces, and these markers may be based on what they observe in others around them. White people may use hair and eye colour to tell other white people apart, since those features vary considerably in that racial group. Setting may also be important: some people might not notice that the man in the centre of the picture above is Asian while his friends on either side are white.

The ORE has consistently been demonstrated through the Face Inversion Effect (FIE) paradigm, where people are tested with pictures of faces presented in their usual upright orientation and inverted upside down. Such experiments have consistently shown that the FIE is larger when individuals are presented with faces from their own race as opposed to faces from other races.

The findings spurred decades of debate. Social scientists took the view that the effect indicates less motivation for people to engage with people of other races, resulting in a weaker memory for them, while cognitive scientists posited that it is down to a lack of visual experience of other-race individuals, resulting in less perceptual expertise with other-race faces.

Now, a team in the Department of Psychology at Exeter, using direct electrical current brain stimulation, has found that the ORE would appear to be caused by a lack of cognitive visual expertise and not by social bias.

“For many years, we have debated the underpinning causes of ORE,” said Dr Ciro Civile, the project’s lead researcher.

“One of the prevailing views is that it is predicated upon social motivational factors, particularly for those observers with more prejudiced racial attitudes. This report, a culmination of six years of funded research by the European Union and UK Research and Innovation, shows that when you systematically impair a person’s perceptual expertise through the application of brain stimulation, their ability to recognise faces is broadly consistent regardless of the ethnicity of that face.”

The research was conducted at the University of Exeter’s Washington Singer Laboratories, using non-invasive transcranial Direct Current Stimulation (tDCS) specifically designed to interfere with the ability to recognise upright faces. This was applied to the participants’ dorsolateral prefrontal cortex, via a pair of sponges attached to their scalp.

The team studied the responses of nearly 100 White European students to FIE tests, splitting them equally into active stimulation and sham/control groups. The first cohort received 10 minutes of tDCS while performing the face recognition task involving upright and inverted Western Caucasian and East Asian (Chinese, Japanese, Korean) faces. The second group, meanwhile, performed the same task while experiencing 30 seconds of stimulation, randomly administered throughout the 10 minutes – a level insufficient to induce any performance change.

In the control group, the size of the FIE for own-race faces was found to be almost three times larger than that found for other-race faces, confirming the robust ORE. This was mainly driven by participants performing much better at recognising own-race faces in the upright orientation compared to other-race faces – they were almost twice as likely to correctly identify that they had seen the face before.

In the active tDCS group, the stimulation successfully removed the perceptual expertise component for upright own-race faces and resulted in no difference being found between the size of the FIE for own versus other-race faces. And when it came to recognising faces that had been inverted, the results were roughly equal for both groups for both races, supporting the fact that people have no expertise whatsoever at seeing faces presented upside down.

“Establishing that the Other-Race Effect, as indexed by the Face Inversion Effect, is due to expertise rather than racial prejudice will help future researchers to refine what cognitive measures should and should not be used to investigate important social issues,” said Ian McLaren, Professor of Cognitive Psychology. “Our tDCS procedure developed here at Exeter can now be used to test all those situations where the debate regarding a specific phenomenon involves perceptual expertise.”

Source: University of Exeter

Stimulating the Vagus Nerve Boosts the Brain’s Learning Centres

Source: Pixabay

Researchers have demonstrated a direct link between vagus nerve stimulation and the brain’s learning centres. The discovery, reported in the journal Neuron, may lead to treatments that improve cognitive retention in both healthy and injured nervous systems.

“We concluded that there is a direct connection between the vagus nerve, the cholinergic system that regulates certain aspects of brain function, and motor cortex neurons that are essential in learning new skills,” said Cristin Welle, PhD, senior author of the paper. “This could provide hope to patients with a variety of motor and cognitive impairments, and someday help healthy individuals learn new skills faster.”

Researchers taught healthy mice a difficult task to see whether vagus nerve stimulation could help boost learning. Stimulating the vagus nerve during the process was found to help the mice learn the task much faster and achieve a higher performance level, showing that such stimulation can increase learning in a healthy nervous system.

The vagus nerve regulates internal organ functions like digestion, heart rate and respiration, as well as helping control reflex actions like coughing and sneezing.

The study also revealed a direct connection between the vagus nerve and the cholinergic system, which is essential for learning and attention. Each time the vagus nerve was stimulated, the researchers could observe the activation of learning-controlling neurons within the cholinergic system. Damage to this system has been linked to Alzheimer’s disease, Parkinson’s disease and other motor and cognitive conditions. Now that this connection has been established in healthy nervous systems, Dr Welle said, it could lead to better treatment options for those whose systems have been damaged.

“The idea of being able to move the brain into a state where it’s able to learn new things is important for any disorders that have motor or cognitive impairments,” she said. “Our hope is that vagus nerve stimulation can be paired with ongoing rehabilitation in disorders for patients who are recovering from a stroke, traumatic brain injury, PTSD or a number of other conditions.”

In addition to the study, Dr Welle and her team have applied for a grant that would allow them to use a non-invasive device to stimulate the vagus nerve to treat patients with multiple sclerosis who have developed movement deficits. She also hopes that this device could eventually help speed up skill learning in healthy people.

“I think there’s a huge untapped potential for using vagus nerve stimulation to help the brain heal itself,” she said. “By continuing to investigate it, we can ultimately optimise patient recovery and open new doors for learning.”

Source: University of Colorado Anschutz Medical Campus

The Brain Unconsciously Excels at Spotting Deepfakes

Photo by Cottonbro on Pexels

When looking at real and ‘deepfake’ faces created by AI, observers can’t consciously recognise the difference – but their brains can, according to new research which appears in Vision Research.

Deepfakes – convincing computer-generated videos, images, audio or text – are rife in the spread of disinformation, fraud and counterfeiting.

For example, in 2016, a Russian troll farm deployed over 50 000 bots on Twitter, making use of deepfakes as profile pictures, to try to influence the outcome of the US presidential election, which according to some research may have boosted Donald Trump’s votes by 3%. More recently, a deepfake video of Volodymyr Zelensky urging his troops to surrender to Russian forces surfaced on social media, muddying people’s understanding of the war in Ukraine with potential, high-stakes implications.

Fortunately, neuroscientists have discovered a new way to spot these insidious fakes: people’s brains are able to detect AI-generated fake faces, even though people could not distinguish between real and fake faces.

When looking at participants’ brain activity, the University of Sydney researchers found deepfakes could be identified 54% of the time. However, when participants were asked to verbally identify the deepfakes, they could only do this 37% of the time.

“Although the brain accuracy rate in this study is low – 54 percent – it is statistically reliable,” said senior researcher Associate Professor Thomas Carlson.

“That tells us the brain can spot the difference between deepfakes and authentic images.”
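To see why a modest 54% can still be ‘statistically reliable’, one can run a binomial test against the 50% chance level, as sketched below. The trial count is hypothetical, since the study’s exact numbers are not given here; the point is that a small edge over chance becomes highly improbable under the null once enough trials accumulate.

```python
# Illustrative significance check (not the study's analysis): is 54% accuracy
# reliably above the 50% chance level? The number of trials is an assumption.
from scipy.stats import binomtest

n_trials = 2000                     # hypothetical total classification trials
n_correct = round(0.54 * n_trials)  # 54% brain-based accuracy from the study

result = binomtest(n_correct, n_trials, p=0.5, alternative='greater')
print(f"p-value vs chance: {result.pvalue:.2e}")  # tiny for large n_trials
```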

Spotting bots and scams

The researchers say their findings may be a starting point in the battle against deepfakes.

“The fact that the brain can detect deepfakes means current deepfakes are flawed,” Associate Professor Carlson said. “If we can learn how the brain spots deepfakes, we could use this information to create algorithms to flag potential deepfakes on digital platforms like Facebook and Twitter.”

They project that in the more distant future, technology based on their and similar studies could be developed to alert people to deepfake scams in real time. Security personnel, for example, might wear EEG-enabled helmets to alert them to a deepfake.

Associate Professor Carlson said: “EEG-enabled helmets could have been helpful in preventing recent bank heist and corporate fraud cases in Dubai and the UK, where scammers used cloned voice technology to steal tens of millions of dollars. In these cases, finance personnel thought they heard the voice of a trusted client or associate and were duped into transferring funds.”

Method: eyes versus brain

The researchers conducted two experiments, one behavioural and one using neuroimaging. In the behavioural experiment, participants were shown 50 images of real and computer-generated fake faces and were asked to identify which were real and which were fake.

Then, a different group of participants were shown the same images while their brain activity was recorded using EEG, without knowing that half the images were fakes.

The researchers then compared the results of the two experiments, finding people’s brains were better at detecting deepfakes than their eyes.

A starting point

The researchers stress that the novelty of their study makes it merely a starting point. It won’t immediately – or even ever – lead to a foolproof way of detecting deepfakes.

Associate Professor Carlson said: “More research must be done. What gives us hope is that deepfakes are created by computer programs, and these programs leave ‘fingerprints’ that can be detected.

“Our finding about the brain’s deepfake-spotting power means we might have another tool to fight back against deepfakes and the spread of disinformation.”

Source: The University of Sydney

MRI Scans of Video Gamers Show Superior Sensorimotor Decision-making

Photo by Igor Karimov on Unsplash

Video gamers who play regularly show superior sensorimotor decision-making skills and enhanced activity in key regions of the brain compared to non-players, according to a recent US study published in the journal Neuroimage: Reports.

Analysis of functional magnetic resonance imaging (fMRI) scans of video game players suggested that video games could be a useful tool for training in perceptual decision-making, the authors said.

“Video games are played by the overwhelming majority of our youth more than three hours every week, but the beneficial effects on decision-making abilities and the brain are not exactly known,” said lead researcher Mukesh Dhamala, associate professor at Georgia State University.

“Our work provides some answers on that,” Assoc Prof Dhamala elaborated. “Video game playing can effectively be used for training – for example, decision-making efficiency training and therapeutic interventions – once the relevant brain networks are identified.”

Assoc Prof Dhamala was the adviser for Tim Jordan, PhD, the paper’s lead author, who had a personal example of how such research could inform the use of video games for training the brain.

Dr Jordan had weak vision in one eye as a child. As part of a research study when he was about 5, he was asked to cover his good eye and play video games as a way to strengthen the vision in the weak one. Dr Jordan credits video game training with helping him go from being legally blind in one eye to building a strong capacity for visual processing, allowing him to eventually play lacrosse and paintball. He is now a postdoctoral researcher at UCLA.

The Georgia State research project involved 47 university-aged participants, with 28 categorised as regular video game players and 19 as non-players.

The subjects lay inside an fMRI machine with a mirror that let them see a cue immediately followed by a display of moving dots. Participants were asked to press a button with their right or left hand to indicate the direction the dots were moving, or to refrain from pressing either button if there was no directional movement.

Video game players proved to be faster and more accurate with their responses. Analysis of the brain scans found that the differences were associated with enhanced activity in certain parts of the brain.

“These results indicate that video game playing potentially enhances several of the subprocesses for sensation, perception and mapping to action to improve decision-making skills,” the authors wrote. “These findings begin to illuminate how video game playing alters the brain in order to improve task performance and their potential implications for increasing task-specific activity.”

No trade-off was observed between speed and accuracy of response – the video game players were better on both measures.

“This lack of speed-accuracy trade-off would indicate video game playing as a good candidate for cognitive training as it pertains to decision-making,” the authors wrote.

Source: Georgia State University

Retinal Scans May be Able to Detect ASD and ADHD

Source: Daniil Kuzelev on Unsplash

By measuring the electrical activity of the retina in response to a light stimulus, researchers found that they may be able to detect neurodevelopmental disorders such as ASD and ADHD, as reported in new research published in Frontiers in Neuroscience.

In this groundbreaking study, researchers found that recordings from the retina could identify distinct signals for both Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD), providing a potential biomarker for each condition.

Using the ‘electroretinogram’ (ERG) – a diagnostic test that measures the electrical activity of the retina in response to a light stimulus – researchers found that children with ADHD showed higher overall ERG energy, whereas children with ASD showed less ERG energy.

Research optometrist at Flinders University, Dr Paul Constable, said the preliminary findings indicate promising results for improved diagnoses and treatments in the future.

“ASD and ADHD are the most common neurodevelopmental disorders diagnosed in childhood. But as they often share similar traits, making diagnoses for both conditions can be lengthy and complicated,” Dr Constable says.

“Our research aims to improve this. By exploring how signals in the retina react to light stimuli, we hope to develop more accurate and earlier diagnoses for different neurodevelopmental conditions.

“Retinal signals have specific nerves that generate them, so if we can identify these differences and localise them to specific pathways that use different chemical signals that are also used in the brain, then we can show distinct differences for children with ADHD and ASD and potentially other neurodevelopmental conditions.”

“This study delivers preliminary evidence for neurophysiological changes that not only differentiate both ADHD and ASD from typically developing children, but also evidence that they can be distinguished from each other based on ERG characteristics.”

According to the World Health Organization, one in 100 children has ASD, with 5–8% of children diagnosed with ADHD.

Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental condition characterised by being overly active, struggling to pay attention, and difficulty controlling impulsive behaviours. Autism spectrum disorder (ASD) is also a neurodevelopmental condition where children behave, communicate, interact, and learn in ways that are different from most other people.

Co-researcher and expert in human and artificial cognition at the University of South Australia, Dr Fernando Marmolejo-Ramos, says the research has potential to extend across other neurological conditions.

“Ultimately, we’re looking at how the eyes can help us understand the brain,” Dr Marmolejo-Ramos says.

“While further research is needed to establish abnormalities in retinal signals that are specific to these and other neurodevelopmental disorders, what we’ve observed so far shows that we are on the precipice of something amazing.

“It is truly a case of watching this space; as it happens, the eyes could reveal all.”

Source: Flinders University

Newly Discovered Neuron Type may Help Explain Memory Formation

A healthy neuron. Credit: NIH

Scientists publishing in Neuron have described how a newly discovered neuron type may be involved in the formation of memory in the hippocampus, a process marked by high-frequency electrical events.

It is known that memory is represented by changes in the hippocampus. One well-established change that has been associated with memory is the presence of so-called sharp wave ripples (SWRs). These are brief, high-frequency electrical events generated in the hippocampus, and they are believed to represent a major event in episodic memory, such as recalling a life event or a friend’s phone number.

However, what happens in the hippocampus when SWRs are generated has not been well understood.

Now a new study sheds light on the existence of a neuron type in the mouse hippocampus that might be a key to better understanding of episodic memory.

Professor Marco Capogna and Assistant Professor Wen-Hsien Hou have contributed to the discovery of the novel neuron type, which is associated with sharp wave ripples and memory.

Possible disruption in dementia and Alzheimer’s

The study describes the novel neuron type in the hippocampus.

“We have found that this new type of neuron is maximally active during SWRs when the animal is awake – but quiet – or deeply asleep. In contrast, the neuron is not active at all when there is a slow, synchronised neuronal population activity called ‘theta’ that can occur when an animal is awake and moves, or in a particular type of sleep when we usually dream,” Prof Capogna said.

Because of this dichotomous activity, this novel type of neuron has been named theta-off ripples-on (TORO).

“How come TORO-neurons are so sensitive to SWRs? The paper tries to answer this question by describing the functional connectivity of TORO-neurons with other neurons and brain areas, an approach called circuit mapping. We find that TOROs are activated by other types of neurons in the hippocampus, namely CA3 pyramidal neurons, and are inhibited by inputs coming from other brain areas, such as the septum,” Prof Capogna explained.

“Furthermore, the study finds that TOROs are inhibitory neurons that release the neurotransmitter GABA. They send their output locally – as most GABAergic neurons do – within the hippocampus, but also project and inhibit other brain areas outside the hippocampus, such as the septum and the cortex. In this way, TORO-neurons propagate the SWR information broadly in the brain and signal that a memory event occurred,” he concluded.

The team has monitored the activity of the neuron by using electrophysiology – a technique that detects activity of the neurons by measuring voltage versus time, and by using imaging that detects activity by measuring changes in calcium signalling inside the neurons.

The next steps will be to demonstrate a causal link between the activity of TORO-neurons and memory, and to explore whether inhibition of TORO-neurons and sharp wave ripples occurs in dementia and Alzheimer’s disease.

Source: Aarhus University