Research on long-term memory has largely focused on the role of neurons, but in recent years it has become clear that other cell types are also vital to memory formation and storage. A new study reveals the crucial role of vascular system cells (pericytes) in the formation of long-term memories of life events – memories that are lost in diseases such as Alzheimer’s. The research, published in the journal Neuron, shows that pericytes, which wrap around the capillaries, work in concert with neurons to help ensure that long-term memories are formed.
Pericytes help maintain the structural integrity of the capillaries. Specifically, they control the amount of blood flowing in the brain and play a key role in maintaining the barrier that stops pathogens and toxic substances from leaking out of the capillaries and into brain tissue.
“We now have a firmer understanding of the cellular mechanisms that allow memories to be both formed and stored,” says Cristina Alberini, a professor in New York University’s Center for Neural Science and the paper’s senior author. “It’s important because understanding the cooperation among different cell types will help us advance therapeutics aimed at addressing memory-related afflictions.”
“This work connects important dots between the newly discovered function of pericytes in memory and previous studies showing that pericytes are either lost or malfunction in several neurodegenerative diseases, including Alzheimer’s disease and other dementias,” explains author Benjamin Bessières, a postdoctoral researcher in NYU’s Center for Neural Science.
The discovery of the pericytes’ significance in long-term memory, reported in the new Neuron article, emerged when Alberini, Bessières, Kiran Pandey, and their colleagues examined the role of insulin-like growth factor 2 (IGF2) – a protein known to increase following learning in brain regions such as the hippocampus, and to play a critical role in the formation and storage of memories.
They found that the highest levels of IGF2 in the hippocampus come not from neurons, glial cells, or other vascular cells, but from pericytes.
Researchers at the University of Oxford have produced an engineered tissue representing a simplified cerebral cortex by 3D printing human stem cells. The results, published in the journal Nature Communications, showed that, when implanted into mouse brain slices, the structures became integrated with the host tissue.
The breakthrough technique could lead to tailored repairs for brain injuries. The researchers demonstrated for the first time that neural cells can be 3D-printed to mimic the architecture of the cerebral cortex.
Brain injuries, including those caused by trauma, stroke and surgery for brain tumours, typically result in significant damage to the cerebral cortex. For example, each year, around 70 million people globally suffer from traumatic brain injury (TBI), with 5 million of these cases being severe or fatal. Currently, there are no effective treatments for severe brain injuries, leading to serious impacts on quality of life.
Tissue regenerative therapies, especially those in which patients are given implants derived from their own stem cells, could be a promising route to treat brain injuries in the future. Up to now, however, there has been no method to ensure that implanted stem cells mimic the architecture of the brain.
In this new study, the University of Oxford researchers fabricated a two-layered brain tissue by 3D printing human neural stem cells. When implanted into mouse brain slices, the cells showed convincing structural and functional integration with the host tissue.
Lead author Dr Yongcheng Jin (Department of Chemistry, University of Oxford) said: ‘This advance marks a significant step towards the fabrication of materials with the full structure and function of natural brain tissues. The work will provide a unique opportunity to explore the workings of the human cortex and, in the long term, it will offer hope to individuals who sustain brain injuries.’
The cortical structure was made from human induced pluripotent stem cells (hiPSCs), which have the potential to produce the cell types found in most human tissues. A key advantage of using hiPSCs for tissue repair is that they can be easily derived from cells harvested from patients themselves, and therefore would not trigger an immune response.
The hiPSCs were differentiated into neural progenitor cells for two different layers of the cerebral cortex, by using specific combinations of growth factors and chemicals. The cells were then suspended in solution to generate two ‘bioinks’, which were then printed to produce a two-layered structure. In culture, the printed tissues maintained their layered cellular architecture for weeks, as indicated by the expression of layer-specific biomarkers.
When the printed tissues were implanted into mouse brain slices, they showed strong integration, as demonstrated by the projection of neural processes and the migration of neurons across the implant-host boundary. The implanted cells also showed signalling activity, which correlated with that of the host cells. This indicates that the human and mouse cells were communicating with each other, demonstrating functional as well as structural integration.
The researchers now intend to further refine the droplet printing technique to create complex multi-layered cerebral cortex tissues that more realistically mimic the human brain’s architecture. Besides their potential for repairing brain injuries, these engineered tissues might be used in drug evaluation, studies of brain development, and to improve our understanding of the basis of cognition.
The new advance builds on the team’s decade-long track record in inventing and patenting 3D printing technologies for synthetic tissues and cultured cells.
Senior author Dr Linna Zhou (Department of Chemistry, University of Oxford) said: “Our droplet printing technique provides a means to engineer living 3D tissues with desired architectures, which brings us closer to the creation of personalised implantation treatments for brain injury.”
Senior author Associate Professor Francis Szele (Department of Physiology, Anatomy and Genetics, University of Oxford) added: “The use of living brain slices creates a powerful platform for interrogating the utility of 3D printing in brain repair. It is a natural bridge between studying 3D printed cortical column development in vitro and their integration into brains in animal models of injury.”
Senior author Professor Zoltán Molnár (Department of Physiology, Anatomy and Genetics, University of Oxford) said: “Human brain development is a delicate and elaborate process with a complex choreography. It would be naïve to think that we can recreate the entire cellular progression in the laboratory. Nonetheless, our 3D printing project demonstrates substantial progress in controlling the fates and arrangements of human iPSCs to form the basic functional units of the cerebral cortex.”
Figure I Neural measurement tools for studying the emergence of consciousness. Examples of techniques for recording brain activity and/or neuroimaging in infants and foetuses. (A) Infant electroencephalography (EEG) with a geodesic electrode net. (B) Foetal magnetoencephalography (MEG) recorded from a pregnant woman. (C) Infant functional near infrared spectroscopy (fNIRS) recording with multichannel optode cap. (D) An infant is prepared for functional magnetic resonance imaging (fMRI). Source: Bayne et al., 2023
There is evidence that some form of conscious experience is present by birth, and perhaps even in late pregnancy, an international team of researchers has found. The findings, published today in Trends in Cognitive Sciences, have important clinical, ethical and potentially legal implications, according to the authors.
Converging evidence from studies of functional network connectivity, attention, multimodal integration, and cortical responses to global oddballs suggests that consciousness is likely to be in place in early infancy and may even occur before birth. Over the decades, theorists have argued that consciousness emerges anywhere from 30 to 35 weeks of pregnancy (based on EEG of the foetus’s brain) to 12 to 15 months of age (based on higher-order representational theory).
In the study, the researchers argue that by birth the infant’s developing brain is capable of conscious experiences that can leave a lasting imprint on their developing sense of self and understanding of their environment.
The team comprised neuroscientists and philosophers from Monash University, in Australia, University of Tübingen, in Germany, University of Minnesota, in the USA, and Trinity College Dublin.
Although each of us was once a baby, infant consciousness remains mysterious, because infants cannot tell us what they think or feel, explains Dr Tim Bayne, Professor of Philosophy at Monash University and one of the two lead authors of the paper.
“Nearly everyone who has held a newborn infant has wondered what, if anything, it is like to be a baby. But of course we cannot remember our infancy, and consciousness researchers have disagreed on whether consciousness arises ‘early’ (at birth or shortly after) or ‘late’ – by one year of age, or even much later.”
To provide a new perspective on when consciousness first emerges, the team built upon recent advances in consciousness science. In adults, some markers from brain imaging have been found to reliably differentiate consciousness from its absence, and are increasingly applied in science and medicine. This is the first time that a review of these markers in infants has been used to assess their consciousness.
Co-author of the study Lorina Naci, Associate Professor in the School of Psychology, who leads Trinity’s ‘Consciousness and Cognition Group’, explained: “Our findings suggest that newborns can integrate sensory and developing cognitive responses into coherent conscious experiences to understand the actions of others and plan their own responses.”
The paper also sheds light on ‘what it is like’ to be a baby. We know that seeing is much more immature in babies than hearing, for example. Furthermore, this work suggests that, at any point in time, infants are aware of fewer items than adults and can take longer to grasp what’s in front of them, but it is easier for them to process more diverse information, such as sounds from other languages.
New guidance has been issued for clinicians on the determination of brain death, also known as death by neurologic criteria. A new consensus practice guideline, developed through a collaboration between the American Academy of Neurology (AAN), the American Academy of Pediatrics (AAP), the Child Neurology Society (CNS), and the Society of Critical Care Medicine (SCCM) is published in Neurology, the medical journal of the American Academy of Neurology.
This guideline updates the 2010 AAN adult practice guidelines and the 2011 AAP/CNS/SCCM paediatric practice guidelines on the determination of brain death. Because of a lack of high-quality evidence on the subject, the experts used an evidence-informed consensus process to develop the guideline.
“Until now, there have been two separate guidelines for determining brain death, one for adults and one for children,” said author Matthew P. Kirschen, MD, PhD, FAAN, of the Children’s Hospital of Philadelphia, and a member of the Child Neurology Society and the Society of Critical Care Medicine. “This update integrates guidance for adults and children into a single guideline, providing clinicians with a comprehensive and practical way to evaluate someone who has sustained a catastrophic brain injury to determine if they meet the criteria for brain death.”
Brain death is a state in which there is complete and permanent cessation of function of the brain in a person who has suffered catastrophic brain injury.
“Brain death means that clinicians cannot observe or elicit any clinical signs of brain function,” said author David M. Greer, MD, FAAN, FCCM, of Boston University in Massachusetts. “Brain death is different from comatose and vegetative states. People do not recover from brain death. Brain death is legal death.”
The consensus practice guideline outlines the standardised procedure for trained clinicians to evaluate people for brain death. As part of this procedure, clinicians perform an evaluation to determine whether there is any clinical functioning of the brain and brainstem, including whether the person breathes on their own. Brain death is declared if a person has a catastrophic brain injury, has no possibility of recovering any brain function, is completely unresponsive, does not demonstrate any brain or brainstem function, and does not breathe on their own.
This guideline includes updates on the prerequisites for brain death determination, the examination and the examiners, apnoea testing and ancillary testing.
In soccer, goalkeepers have a unique role: they must be ready to make split-second decisions based on incomplete information to stop their opponents from scoring a goal. Now researchers reporting in Current Biology have some of the first solid scientific evidence that goalkeepers show fundamental differences in the way they perceive the world and process multi-sensory information.
“Unlike other football players, goalkeepers are required to make thousands of very fast decisions based on limited or incomplete sensory information,” says Michael Quinn, the study’s first author at Dublin City University who is also a retired professional goalkeeper and son of former Irish international Niall Quinn. “This led us to predict that goalkeepers would possess an enhanced capacity to combine information from the different senses, and this hypothesis was confirmed by our results.”
“While many football players and fans worldwide will be familiar with the idea that goalkeepers are just ‘different’ from the rest of us, this study may actually be the first time that we have proven scientific evidence to back up this claim,” says David McGovern, the study’s lead investigator also from Dublin City University.
Based on his own history as a professional goalkeeper, Quinn already had a feeling that goalkeepers experience the world in a distinctive way. In his final year working on a psychology degree, he wanted to put this notion to the test.
To do it, the researchers enlisted 60 volunteers, including professional goalkeepers, professional outfield players, and age-matched controls who don’t play soccer. They decided to look for differences among the three groups in what’s known as temporal binding windows – that is, the time window within which signals from the different senses are likely to be perceptually fused or integrated.
In each trial, participants were presented with one or two images (visual stimuli) on a screen. Those images could be presented along with one, two, or no beeps (auditory stimuli). Those stimuli were presented with different amounts of time in between.
In these tests, trials with one flash and two beeps generally led to the mistaken perception of two flashes, providing evidence that the auditory and visual stimuli have been integrated. This mistaken perception declines as the amount of time between stimuli increases, allowing researchers to measure the width of a person’s temporal binding window, with a narrower temporal binding window indicating more efficient multisensory processing.
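The way a binding window is read off such data can be sketched in code. The numbers below are invented for illustration (they are not data from the study): illusion rates for the one-flash/two-beep trials are assumed to peak when the beeps and flash are nearly simultaneous and to fall off as the gap grows, and the window is estimated as the span of asynchronies over which the illusion stays above half its peak.

```python
# Hypothetical sound-induced flash illusion rates at several audio-visual
# onset asynchronies (SOA, in ms). All numbers are invented for
# illustration; they are not data from the Current Biology study.
soas = [-300, -200, -100, 0, 100, 200, 300]
illusion_rate = [0.10, 0.25, 0.60, 0.80, 0.55, 0.20, 0.08]

def binding_window_width(soas, rates):
    """Estimate the temporal binding window as the width (ms) of the SOA
    range over which the illusion rate stays above half of its peak,
    using linear interpolation between sampled points."""
    half = max(rates) / 2.0
    left = right = None
    for (x0, y0), (x1, y1) in zip(zip(soas, rates), zip(soas[1:], rates[1:])):
        if y0 < half <= y1:   # rising crossing of the half-peak level
            left = x0 + (half - y0) * (x1 - x0) / (y1 - y0)
        if y0 >= half > y1:   # falling crossing of the half-peak level
            right = x0 + (half - y0) * (x1 - x0) / (y1 - y0)
    if left is None or right is None:
        raise ValueError("illusion rates never cross half of their peak")
    return right - left

print(binding_window_width(soas, illusion_rate))  # ≈ 300 ms for this invented data
```

A narrower window – an illusion that fades more quickly as the asynchrony grows – yields a smaller width, which is the sense in which the goalkeepers’ multisensory timing was more precise.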
Their tests showed that goalkeepers had marked differences in their multisensory processing ability. More specifically, goalkeepers had a narrower temporal binding window relative to outfielders and non-soccer players, indicating a more precise and speedy estimation of the timing of audiovisual cues.
The test results revealed another difference too. Goalkeepers didn’t show as much interaction between the visual and auditory information. The finding suggests that the goalies had a greater tendency to separate sensory signals. In other words, they integrated the flashes and beeps to a lesser degree.
“We propose that these differences stem from the idiosyncratic nature of the goalkeeping position that puts a premium on the ability of goalkeepers to make quick decisions, often based on partial or incomplete sensory information,” the researchers write.
They speculate that the tendency to segregate sensory information stems from goalkeepers’ need to make quick decisions based on visual and auditory information arriving at different times. For example, goalkeepers watch how a ball is moving in the air and also make use of the sound of the ball being kicked. But the relationship between those cues in time will depend on where the outfielder making the shot is on the field. After repeated exposure to those scenarios, goalkeepers may start to process sensory cues separately rather than combining them.
The researchers say they hope to explore other questions in future studies, including whether players with other highly specialised positions, such as strikers and centre-backs, may also show perceptual differences. They’re also curious to know which comes first. “Could the narrower temporal binding window observed in goalkeepers stem from the rigorous training regimens that goalkeepers engage in from an early age?” McGovern asks. “Or could it be that these differences in multisensory processing reflect an inherent, natural ability that draws young players to the goalkeeping position? Further research that tracks the developmental trajectory of aspiring goalkeepers will be required to tease between these possibilities.”
In cases where standard therapies fail, an in-development drug called XEN1101 reduces seizure frequency by more than 50% in some patients and in some cases eliminates seizures altogether, according to a new study published in JAMA Neurology. Unlike several treatments that must be started at low doses and slowly ramped up, the new drug can safely be taken at its most effective dose from the start, the authors say.
Focal seizures, the most common type seen in epilepsy, occur when nerve cells in a particular brain region send out a sudden, excessive burst of electrical signals. Along with seizures, this uncontrolled activity can lead to abnormal behaviour, periods of lost awareness, and mood changes. While many available therapies control or reduce seizures, they fail to stop seizures in about one-third of patients and may cause harsh side effects, experts say.
Led by researchers at NYU Grossman School of Medicine, a new clinical trial found that patients who added XEN1101 to their current antiseizure treatments saw a 33% to 53% drop in monthly seizures, depending on their dose. By contrast, those given a placebo had on average 18% fewer seizures during the treatment phase of the trial, which lasted eight weeks. Most patients then volunteered to extend the trial, with about 18% of those treated with the new drug remaining entirely seizure free after six months, and about 11% having no seizures after a year or longer.
“Our findings show that XEN1101 may offer a swift, safe, and effective way to treat focal epilepsy,” said study lead author, neurologist Jacqueline French, MD. “These promising results offer hope for those who have struggled for decades to get their symptoms under control.”
French, a professor in the Department of Neurology at NYU Langone Health, notes that XEN1101 was well tolerated by the study participants, who reported side effects similar to other antiseizure treatments, including dizziness, nausea, and fatigue, and the majority felt well enough to continue the regimen. Another benefit of the drug, she adds, is that it takes more than a week to break down, so levels in the brain remain consistent over time. This steadiness allows the treatment to be started at full strength and helps to avoid dramatic spikes that worsen side effects, and dips that allow seizures to return. This lengthy breakdown time also allows for a “grace period” if a dose is accidentally skipped or taken late.
XEN1101 is part of a class of chemicals called potassium-channel openers, which avert seizures by boosting the flow of potassium out of nerves, stopping them from firing. French notes that while other drugs of this kind have been explored for epilepsy patients in the past, such treatments were taken out of use because the compounds were later found to gradually build up in the skin and eyes, prompting safety concerns.
Meanwhile, XEN1101 combines the effectiveness of potassium-channel openers with the safety of more traditional drugs, says French, who is also a member of NYU Langone’s Comprehensive Epilepsy Center.
For the study, which ran from January 2019 to September 2021, the research team recruited 285 men and women with epilepsy who had already tried and stopped taking an average of six drugs that failed to treat their focal seizures. Patients in the trial had to have experienced at least four episodes a month despite ongoing treatment to qualify. The patients were randomly provided either a daily oral capsule of XEN1101 (in 10mg, 20mg, or 25mg doses) or a placebo.
Among the results, the trial revealed no signs of dangerous side effects such as heart problems, allergic reactions, or concerning skin discolourations. However, French says that the research team plans to expand the number of patients exposed to the drug and monitor for potential issues that could arise in the long term, as well as to include specific groups of people, such as pregnant women. In addition, the team intends to explore XEN1101 for other types of seizures, including those that broadly affect the brain at the same time (generalised seizures).
“Our study highlights the importance of finding as many therapeutic options as possible for those who suffer from seizures,” says French. “Since everyone responds differently, treating epilepsy cannot be a one-size-fits-all approach.”
Neuroinflammation can lead to serious neurological or psychiatric diseases, and at present only one biomarker is available for medical imaging to visualise cerebral inflammation. The trouble is that it has been unclear how to interpret this biomarker. Researchers have now found that a large quantity of this protein indicates a large quantity of inflammatory cells, but its presence is not a sign of their overactivation. These results, published in Nature Communications, pave the way for optimal observation of neuroinflammatory processes with other potential biomarkers, and a re-evaluation of prior research.
In the brain, microglial cells play an important role in inflammation and its potential overactivation. When dysfunction occurs, they can become “activated”, phagocytise pathological cells or proteins and even produce protective substances. Currently, in medical imaging, only one marker can be used to locate and measure microglia non-invasively and in vivo: the TSPO protein, which is present in these cells. This protein can be observed by Positron Emission Tomography (PET), a common imaging technique.
A TSPO of insight
“Hundreds of studies have used PET scans of this protein to explore and quantify microglia. However, no study has succeeded in precisely interpreting the significance of its quantity in the context of an inflammatory reaction,” explains Stergios Tsartsalis, senior clinical associate in the Department of Psychiatry at the UNIGE Faculty of Medicine. Together with other researchers, Stergios Tsartsalis sought to determine whether a large quantity of TSPO corresponds to a large quantity of inflammatory cells, and whether it is a sign of their overactivation.
The international research team worked on the brains of mouse models of Alzheimer’s disease, amyotrophic lateral sclerosis and multiple sclerosis, and on post-mortem brain samples from patients affected by the same diseases. “We discovered that a high density of TSPO protein is indeed an indicator of a high density of microglia. On the other hand, the observation of TSPO does not allow us to say whether or not the inflammatory cells are overactivated,” explains the UNIGE researcher, co-first author of the study.
Re-reading the past, optimising the future
This discovery highlights the value of medical imaging of TSPO: it makes it possible to identify cases where the neuroinflammatory disease is linked to a deregulation in the number of glial cells. In addition, the scientists have identified two markers of the state of microglia activation in humans – the LCP2 and TFEC proteins – setting the stage for new medical imaging approaches.
”These results represent a further step towards understanding the role of microglia in neuroinflammation. They will help to optimise the focus of future studies and also to review the conclusions of previous research,” enthuses Stergios Tsartsalis.
Selective serotonin reuptake inhibitors (SSRIs) normally take a few weeks before any improvements manifest, but the reasons why it takes so long have remained unclear since their first introduction 50 years ago. Now, new research provides the first human evidence that this is due to physical changes in the brain, with greater plasticity developing over the first few weeks of SSRI intake. This may also begin to explain one of the mechanisms of how antidepressants work.
This work is presented at the ECNP conference in Barcelona, and also has been accepted in a peer-reviewed journal.
Clinicians have long been puzzled as to why SSRIs take a relatively long time before having an effect. Researchers in Copenhagen, Innsbruck, and at the University of Cambridge have undertaken a randomised, double-blind, placebo-controlled study in a group of healthy volunteers which shows a gradual difference, depending on how long the treatment lasts, in how many nerve cell connections (synapses) the brain cells of those taking the antidepressant have compared with a control group.
In the study, 17 volunteers were given a 20mg daily dose of the SSRI escitalopram, with 15 volunteers given a placebo. Between three and five weeks after starting the trial, their brains were scanned with a PET (Positron Emission Tomography) scanner, which showed the amount of synaptic vesicle glycoprotein 2A in the brain: this is an indicator of the presence of synapses, so the more of the protein is found in an area, the more synapses are present in that area (ie, greater synaptic density). These scans showed significant between-group differences in how the synapse density evolved over time.
Researcher Professor Gitte Knudsen (of Copenhagen University Hospital) said:
“We found that with those taking the SSRI, over time there was a gradual increase in synapses in the neocortex and the hippocampus of the brain, compared to those taking placebo. We did not see any effect in those taking placebo.”
The neocortex, which takes up around half of the brain’s volume, deals with higher functions, such as sensory perception, emotion, and cognition. The hippocampus, which is found deep in the brain, handles functions of memory and learning.
Professor Knudsen continued, “This points towards two main conclusions. Firstly, it indicates that SSRIs increase synaptic density in the brain areas critically involved in depression. This would go some way to indicating that the synaptic density in the brain may be involved in how these antidepressants function, which would give us a target for developing novel drugs against depression. The second point is that our data suggest that synapses build up over a period of weeks, which would explain why the effects of these drugs take time to kick in.”
Commenting, Professor David Nutt (Imperial College, London) said “The delay in therapeutic action of antidepressants has been a puzzle to psychiatrists ever since they were first discerned over 50 years ago. So these new data in humans, which use cutting-edge brain imaging to demonstrate an increase in brain connections developing over the period that the depression lifts, are very exciting. Also they provide more evidence that enhancing serotonin function in the brain can have enduring health benefits.”
This is an independent comment; Professor Nutt was not involved in this work.
With age, many people will eventually need hearing aids. In some cases, the reason for this may be a signalling pathway that controls auditory sensory cell function and is downregulated with age. In the journal iScience, researchers at the University of Basel report the clues they have uncovered about this process, which may yield potential therapies to slow its progression.
Nearly everyone eventually experiences hearing loss: loud noises or simple aging gradually cause the auditory sensory cells and their synapses in the inner ear to degenerate and die off. The only treatment option is a hearing aid or, in extreme cases, a cochlear implant.
“In order to develop new therapies, we need to better understand what the auditory sensory cells need for proper function,” explains Dr Maurizio Cortada from the Department of Biomedicine at the University of Basel and University Hospital Basel. In collaboration with researchers at the Biozentrum, Cortada investigated which signalling pathways influence the sensory hair cells in the inner ear. In the process, the researchers discovered a central regulator.
This signalling pathway, known as the mTORC2 signalling pathway, plays an important role in, among other things, cell growth and the cytoskeleton. The role it plays for the hair cells in the inner ear had not previously been studied.
When the researchers removed a central gene of this signalling pathway in the hair cells of the inner ear of mice, the animals gradually lost their hearing. By the age of twelve weeks, they were completely deaf, the authors report in the study.
Shortening ‘hair’ and fewer synapses
Closer examination indicated that, without the mTORC2 signalling pathway, the sensory hair cells in the inner ear lost their sensors: the distinctive fibre bundles known as stereocilia. Using electron microscopy, the researchers observed the shortening of the stereocilia. The number of synapses that transmit signals to the auditory nerve was also reduced.
“From other studies, we know that the production of key proteins in this signalling pathway decreases with age,” Cortada explains. This may be connected to the loss of synapses and the reduced function of the auditory sensory cells in the inner ear that lead to hearing loss with increasing age.
“If this is confirmed, it would be a possible starting point for future therapies,” says the researcher. The middle and inner ear, for example, would be readily accessible for locally-administered medications or gene therapies. The results could pave the way for the development of such treatment options.
Using CRISPR gene editing, stem cells and human neurons, researchers have isolated the impact of a gene that is commonly mutated in autism. This new study, published today in The American Journal of Human Genetics, ties mutations in the gene CHD8 with a broad spectrum of molecular and cellular defects in human cortical neurons.
Autism is a highly heritable disorder with a recent increase in incidence – approximately 1 in 40 children in the US are diagnosed with autism. Over the past decade, sequencing studies have found many genes associated with autism but it has been challenging to understand how mutations in certain genes drive complex changes in brain activity and function.
The team, led by researchers at the New York Genome Center, New York University (NYU), and the Broad Institute, developed an integrated approach to understand how mutations in the CHD8 gene alter genome regulation, gene expression, and neuron function, and how they are tied to other key genes that play a role in autism.
For more than a decade, it has been known that individuals with mutations in the CHD8 gene tend to have many similar ailments, such as autism, an abnormally large head size, digestive issues and difficulty sleeping. CHD8 is a regulator of chromatin – the complex of DNA and proteins that packages the genome – but it is unclear how this particular gene might relate to major alterations in neural development and, in turn, result in autism.
The research team identified numerous changes in the physical state of DNA that make the genome more accessible to regulators of gene expression and, in turn, drive aberrant expression of hundreds of genes. These molecular defects resulted in clear functional changes in neurons that carry the CHD8 mutation. These neurons are much less talkative: they are activated less often and send fewer messages across their synapses.
The study authors initially observed these changes using human cortical neurons differentiated from stem cells where CRISPR was used to insert a CHD8 mutation. These findings were further bolstered by similar reductions in neuron and synapse activity when examining neurons from mice with a CHD8 mutation. These substantial defects in neuron function were circumvented when extra CHD8 was added to the cell using a gene therapy approach. In this case, extra copies of a healthy CHD8 gene without any mutation were added using a viral vector. Upon differentiation, the team found that the neurons rescued by the treatment returned to a normal rate of activity and synaptic communication, indicating that this gene therapy approach may be sufficient to restore function.
Lastly, when examining disrupted genes, the authors found that the CHD8 mutation seemed to specifically alter other genes that have been implicated in autism or intellectual disability, but not genes associated with unrelated disorders like cardiovascular disease. This suggests that CHD8 might selectively influence genes that tend to be involved in neurodevelopmental disorders, providing an explanation for some of the particular characteristics of individuals carrying a CHD8 mutation.