Month: July 2022

Cancer Cells Seek their ‘Goldilocks’ Zone of Tissue Softness

Source: National Cancer Institute on Unsplash

Research into the mechanisms of cell migration and the impact of tissue rigidity on cell positioning and steering has found that cancer cells migrate towards tissue of a preferred stiffness, opening new possibilities for stopping and directing that migration.

An international team of scientists has uncovered for the first time how tissue stiffness determines cell positioning and regulates all types of cell migration, ranging from neuronal growth cone turning to the dissemination of malignant cancer cells in brain tumours and breast cancer.

Each cell in the body has a specific task and carefully determined position within a tissue. Cell positioning is regulated by many factors, including tissue rigidity. Cells are capable of probing and sensing their environment, and different cell types have different preferences for optimal conditions. It is a little like Goldilocks in the story, trying out the different beds of the bear family and finding one too soft, another too hard, and one just right. While this has been well known for a long time, it has remained a mystery to researchers how cells are able to steer themselves to the optimal environment.

“The prevailing view among scientists was that all cell types prefer high-rigidity environments and migrate towards increasing stiffness. This process has been termed ‘durotaxis‘ – migration towards the hard, from Greek and Latin roots,” said Aleksi Isomursu, a doctoral researcher.

“I was visiting the University of Minnesota for a research project and noticed that brain cancer cells grown on engineered substrates with alternating stiffness showed the opposite behaviour: they turned towards the soft,” Isomursu continues.

This observation launched an interdisciplinary research project spanning cancer cell biology, computational modelling and engineering, with researchers from three continents. As a result, the researchers uncovered the basic mechanism all cell types use to steer themselves towards their optimal environment.

These results will be important for future research in stopping and directing cancer cell migration.

“I experimented with different types of drugs and identified ones that could make brain cancer cells stop moving or change direction,” explains Postdoctoral Researcher Mathilde Mathieu.

Identification of the mechanism of cell steering provides explanations for many thus far mysterious steps in cancer dissemination, for example how cancer cells migrate out from the stiff core of a breast tumour.

“These findings have been gaining a lot of interest among researchers, and we have even played around with the idea of launching a new term – ‘mollitaxis‘, migration towards the soft,” says the Principal Investigator of the laboratory at the University of Turku, Professor Johanna Ivaska.

Source: University of Turku

Nuclear Stress Testing Identifies Candidates Most in Need of Angioplasty

Photo by Robina Weermeijer on Unsplash

Patients identified by nuclear stress testing as having severe stress-induced myocardial ischaemia may benefit from angioplasty, while those with mild or no ischaemia will not, according to a new study reported in the Journal of the American College of Cardiology.

Following stress testing, coronary revascularisation restores blood flow to blocked arteries. For patients with severe ischaemia, early revascularisation was associated with a more than 30% reduction in mortality compared with patients with severe ischaemia who were treated with medication alone, but there was no benefit for the other groups.

Conducted by the Icahn School of Medicine at Mount Sinai, this is the first large-scale study to investigate stress testing in patient management across the full spectrum of patients, with varying degrees of myocardial ischaemia and heart function. The new study can help guide physicians in managing care for patients with suspected heart disease.

Stress tests are indicated when physicians suspect that a patient’s chest pain or other clinical symptoms are from coronary artery disease (CAD). These help determine if a patient has obstructive CAD which leads to significant ischaemia. If the ischaemia due to obstructive CAD is severe, adequate blood flow can be restored with coronary artery bypass grafting surgery or percutaneous coronary intervention (PCI), where a catheter is used to place stents in the blocked coronary arteries. Nuclear stress testing is the most common stress test used to detect myocardial ischaemia.

“There is keen interest in assessing how measurement of myocardial ischaemia during stress testing can help shape physicians’ decision to refer patients for coronary revascularisation procedures, but this issue has not been well studied among patients who have underlying heart damage,” explains lead author Alan Rozanski, MD. “Our study, which evaluated a large number of patients with pre-existing heart damage who underwent cardiac stress testing, finally addresses this clinical void.”

The researchers analysed records of more than 43 000 patients with suspected CAD who underwent nuclear stress testing between 1998 and 2017, with a median 11-year follow-up for mortality/survival. The investigators grouped patients according to both their level of myocardial ischaemia during stress testing and their left ventricular ejection fraction (LVEF). Low LVEF measurements indicate prior heart damage, such as scarring of the heart from a previous heart attack.

The study provides two important clinical insights. First, it showed that the frequency of myocardial ischaemia during stress testing varies with patients’ heart function. Of the 39 883 patients with normal heart function (LVEF > 55%), fewer than 8% had ischaemia. However, among the 3560 patients with reduced heart function (LVEF < 45%, which indicates prior heart damage), more than 40% had myocardial ischaemia. Second, the study showed that the presence of myocardial ischaemia increases the risk of death in patients with both normal and reduced heart function. In both groups, bypass or PCI procedures were not associated with improved survival in patients with either no or only mild ischaemia during the cardiac stress test. Among patients with severe ischaemia, however, coronary procedures were associated with more than 30% higher survival rates compared with medication alone. This was the case for patients with and without heart damage.

“These results confirm the benefits of stress testing for clinical management. What you want from any test when considering coronary revascularisation procedures is that the test will identify a large percentage of patients who are at low clinical risk and do so correctly, while identifying only a small percentage of patients who are at high clinical risk and do so correctly. That is what we found with nuclear stress testing in this study,” explains Dr Rozanski. “Importantly, the presence of severe ischaemia does not necessarily mean that coronary revascularisation should be applied. New data from a large clinical trial suggests that when medical therapy is optimised it may be as effective as coronary revascularisation in such patients. But regardless, the presence of severe ischaemia indicates high clinical risk which then requires aggressive management to reduce clinical risk.”

Source: The Mount Sinai Hospital / Mount Sinai School of Medicine

Cancer Drug Candidate Spurs Nerve Regeneration

Photo by Cottonbro on Pexels

A candidate cancer drug currently in development has also been shown to stimulate regeneration of damaged nerves after spinal trauma. Four weeks after spinal cord injury, animals treated with the candidate drug, AZD1390, were “indistinguishable” from uninjured animals, according to the researchers.

The study, published in Clinical and Translational Medicine, demonstrated in cell and animal models that the candidate drug, AZD1390, can block the response to DNA damage in nerve cells and promote regeneration of damaged nerves. This restored sensory and motor function after spinal injury.

The announcement comes weeks after the same research team showed that a different investigational drug, AZD1236, can reduce damage after spinal cord injury by blocking the inflammatory response.

AZD1390 is also under investigation by AstraZeneca as an agent to block ATM-dependent signalling and the repair of DNA double strand breaks (DSBs), an action which sensitises cancer cells to radiation treatment. ATM acts within the DNA Damage Response (DDR) system, which is activated by DNA damage, including DSBs in the genome; such breaks occur in several common cancers and also after spinal cord injury.

Professor Zubair Ahmed, from the University of Birmingham’s Institute of Inflammation and Ageing, and Dr Richard Tuxworth, from the Institute of Cancer and Genomic Sciences, hypothesised that persistent activation of this system may prevent recovery from spinal cord injury, and that blocking it would promote nerve repair and restore function after injury.

Their initial studies found that AZD1390 stimulated nerve cell growth in culture, and inhibited the ATM protein kinase pathway – a critical biochemical pathway regulating the response to DNA damage.

AZD1390 was tested in animal models following spinal cord injury. Oral treatment with AZD1390 significantly suppressed the ATM protein kinase pathway, stimulated nerve regeneration beyond the site of injury, and improved the ability of these nerves to carry electrical signals across the injury site.

Professor Ahmed commented: “This is an exciting time in spinal cord injury research with several different investigational drugs being identified as potential therapies for spinal cord injury. We are particularly excited about AZD1390 which can be taken orally and reaches the site of injury in sufficient quantities to promote nerve regeneration and restore lost function.

“Our findings show a remarkable recovery of sensory and motor functions, with AZD1390-treated animals being indistinguishable from uninjured animals within 4 weeks of injury.”

Dr Tuxworth added: “This early study shows that AZD1390 could be used as a therapy in life-changing conditions. In addition, repurposing this existing investigational drug potentially means we can reach the clinic significantly faster than developing a new drug from scratch.”

Source: University of Birmingham

In Liver Cancer, Protective p53 Protein Turns Traitor

Image source: Pixabay

p53 is one of the most important proteins in cancer biology. Often referred to as a “guardian of the genome”, p53 activates in response to cellular stressors such as DNA damage. Its activation induces processes such as cell death that prevent cancer development. Because of this, p53 mutations are extremely common in cancers, including hepatocellular carcinoma.

However, a study published in Cancer Research revealed that the constant activation of p53 in the liver cells of patients suffering from chronic liver disease (CLD) could actually promote the development of liver cancer.

CLD can be brought on by different factors including viruses, alcohol use, and fat accumulation, all of which can induce p53 activation. Previous studies have shown that p53 is in a constant state of activation in the liver cells of CLD patients. Yet, it is not clear what role this plays in CLD pathophysiology.

“Clinical data clearly show that p53 is activated in the hepatocytes of individuals with CLD,” says Yuki Makino, lead author of the study at Osaka University. “Because p53 is such a vital part of how the human body prevents tumor formation, its role in CLD became even more intriguing.”

To create a mouse model with p53 accumulation in hepatocytes, the researchers deleted Mdm2, the gene encoding the protein that regulates p53 levels by targeting it for degradation. These mice developed liver inflammation with higher amounts of hepatocyte apoptosis and senescence-associated secretory phenotype (SASP), a phenomenon where cells produce signals within the microenvironment that can cause nearby cells to become cancerous. In fact, mice with p53 accumulation did have increased liver tumour development.

“We also observed an expanded population of hepatic progenitor cells (HPCs), which have stem cell-like characteristics,” explained senior author Tetsuo Takehara. “When the HPCs were isolated, grown in culture, and then injected under the skin of lab mice, these animals developed tumors. This suggested that HPCs played a key part in the liver tumor formation seen in the animals with p53 accumulation.”

Interestingly, the acceleration of liver tumour development and the other observed phenotypes did not occur when p53 was deleted in addition to Mdm2 in the hepatocytes. These results demonstrate the significance of constant p53 activity in tumorigenesis.

“We then compared samples from 182 CLD patients with 23 healthy liver samples,” said Dr Makino. “The CLD liver biopsy samples showed activated p53 was positively correlated with apoptosis levels, SASP, HPC-associated gene expression, and later cancer development.”

The authors concluded that constitutively activated p53 in hepatocytes of CLD patients can create a microenvironment that is supportive of tumour formation from HPCs. Their work proposes a novel and paradoxical mechanism of liver tumorigenesis, because p53 is one of the most well-known tumour suppressor genes. These data could highlight p53 as a potential cancer-prevention treatment target for CLD patients.

Source: Osaka University

Type of Macular Degeneration Linked to Cardiovascular Disease

Credit: National Eye Institute

Patients with a certain subtype of age-related macular degeneration (AMD) are at significant risk for cardiovascular disease and stroke, according to new research published in Retina.

“For the last three decades researchers have suggested an association between AMD and cardiovascular disease, but there has been no conclusive data on this until now. Our retinal team answered this important question by focusing on two different varieties of AMD that can be seen with advanced retinal imaging. We discovered that only one form of AMD, that with subretinal drusenoid deposits, is tightly connected to high-risk vascular diseases, and the other form, known as drusen, is not,” explained lead author R. Theodore Smith, MD, PhD, Professor at Mount Sinai. “If ophthalmologists diagnose or treat someone with the specific subretinal drusenoid deposits form of AMD, but who otherwise seems well, that patient may have significant undetected heart disease, or possibly carotid artery stenosis that could result in a stroke. We foresee that in the future, as an improved standard of care, such patients will be considered for early referral to a cardiologist for evaluation and possibly treatment.”

AMD is the leading cause of visual impairment and blindness in people over the age of 65. Drusen are one major form of early AMD: small yellow cholesterol deposits that form in a layer under the retina, depriving it of blood and oxygen and leading to vision loss. Drusen formation can be slowed by appropriate vitamin supplementation.

The other major form of early AMD is the presence of subretinal drusenoid deposits (SDD), which are less well known and require advanced retinal imaging to detect. These deposits are also made of fatty lipids and other materials, but form in a different layer, beneath the light-sensitive retinal cells, where they are likewise associated with vision loss. There is currently no known treatment for SDD.

Mount Sinai researchers analysed 126 patients with AMD using optical coherence tomography (OCT), which captures high-resolution cross-sectional scans of the retina. Patients also answered health history questionnaires. Of the patients in the study, 62 had SDD and 64 had drusen; 51 of the 126 total patients (40%) reported having cardiovascular disease or a past stroke, and most (66%) of those patients had SDD. By contrast, of the 75 patients who did not have known heart disease or stroke, relatively few (19%) had SDD. The odds of patients with cardiovascular disease or stroke having SDD were three times those of patients without.

The researchers suggested that the underlying cardiovascular disease likely compromises blood circulation in the eye, leading to the SDDs beneath the retina.

“We believe poor ocular circulation that causes SDDs is a manifestation of underlying vascular disease. This has important public health implications and can facilitate population screening and disease detection with major impact,” explained author Jagat Narula, MD, PhD, Associate Dean of Global Affairs and Professor of Medicine (Cardiology), and Radiology, at the Icahn School of Medicine at Mount Sinai. “Seen in an eye clinic, such patients should be prompted to see a cardiologist. On the other hand, if clinically substantiated in prospective studies, SDDs could emerge as a risk marker for underlying vascular disease in asymptomatic patients in primary care or a cardiology clinic. The temporal relationship between SDDs and macrovascular disease will also need to be established in prospective studies which are currently in progress.”

Analysis of patient blood samples revealed genetic risk factors may also play a role in SDD cases in addition to vascular causes. Specifically, they found that the ARMS2 gene acted independently of vascular disease to cause SDD in some patients.

“This study further demonstrates that AMD is not a single condition or an isolated disease, but is often a signal of systemic malfunction which could benefit from targeted medical evaluation in addition to localised eye care,” noted Richard B. Rosen, MD, Chief of the Retina Service for the Mount Sinai Health System. “It helps bring us one step closer to unraveling the mystery of this horrible condition which robs so many patients of the pleasure of good vision during their later years.”

Source: The Mount Sinai Hospital / Mount Sinai School of Medicine

Social Media Viewing of Tobacco Content Linked to Use

Photo by Freestocks on Unsplash

People who have viewed tobacco content on social media are more than twice as likely as non-viewers to report using tobacco and, among those who have never used tobacco, are more predisposed to future use.

A meta-analysis of 29 studies published in JAMA Pediatrics analysed data from a total of 139 624 participants. The study draws on data across age groups, countries, content types and platforms and is the first large-scale effort linking social media content to tobacco use.

“We cast a wide net across the tobacco and social media literature and synthesised everything into a single association summarising the relationship between social media exposure and tobacco use,” said Scott Donaldson, PhD, the study’s first author. “What we found is that these associations are robust and have public health implications at the population level.”
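The "single association" Donaldson describes is typically produced by pooling study-level estimates on the log-odds scale with inverse-variance weights. As a rough sketch of how that works (using made-up odds ratios and standard errors, not the paper's actual data):

```python
import math

# Hypothetical (label, odds ratio, SE of log OR) triples -- purely
# illustrative, NOT the values from the JAMA Pediatrics meta-analysis.
studies = [
    ("study A", 2.1, 0.15),
    ("study B", 2.6, 0.20),
    ("study C", 1.9, 0.25),
]

# Fixed-effect inverse-variance pooling: work on the log scale and
# weight each study by 1 / SE^2, so more precise studies count more.
weights = [1.0 / se ** 2 for _, _, se in studies]
log_ors = [math.log(or_) for _, or_, _ in studies]

pooled_log_or = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

pooled_or = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)

print(f"pooled OR = {pooled_or:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

The pooled estimate always falls within the range of the individual study estimates, which is why a summary odds ratio above 2 reflects consistent findings across the included studies rather than any single result.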

The findings come amid growing concerns about the potential harms of social media use, particularly among young people. They build a compelling argument that online tobacco content has the power to influence viewers’ offline tobacco use.

“The proliferation of social media has offered tobacco companies new ways to promote their products, especially to teens and young adults,” said Assistant Professor Jon-Patrick Allem, the paper’s senior author. “Our hope is that policymakers and other stakeholders can use our study as a basis for decision making and action.”

Effects across age, content type and platform

Compared to those not reporting exposure to tobacco content, people who did report exposure were more than twice as likely to use tobacco in their lifetime, to have used it in the past 30 days, or to be susceptible to future tobacco use if they had never used tobacco before.

“Of particular importance is the fact that people who had never before used tobacco were more susceptible,” Prof Allem said. “This suggests that exposure to tobacco-related content can pique interest and potentially lead nonusers to transition to tobacco use.”

The sample included populations from across the United States, India, Australia, and Indonesia. Adolescents made up 72% of the participants, while young adults and adults accounted for 15% and 13%, respectively.

Tobacco content included both ‘organic’ or user-generated posts, such as videos of friends smoking or vaping, and promotional material, including advertising or sponsorships from tobacco companies. Items depicted in posts ranged from cigarettes and e-cigarettes to cigars, hookah and smokeless tobacco products. Tobacco content appeared on a range of social media platforms, including Facebook, Instagram, Twitter, YouTube, Snapchat, Pinterest and Tumblr.

Both active engagement with tobacco content (eg commenting or liking) and passive engagement (just viewing) were associated with lifetime use, recent use and susceptibility to future use. People who saw content on two or more social media platforms faced even higher odds of use or susceptibility to use than those who saw tobacco-related content on just one platform.

The researchers suggest that future research should use longitudinal or experimental designs to determine whether exposure to tobacco content on social media directly leads to tobacco use. As the data in the meta-analysis were drawn mostly from surveys conducted at a single point in time, a causal relationship between viewing and use could not be established.

Preventing harm from tobacco content

The study’s authors point to three levels of action that can help address the abundance of tobacco content on social media.

“First of all, we can work on designing and delivering interventions that counter the influence of pro-tobacco content, for example by educating teens about how the tobacco industry surreptitiously markets its products to them,” Allem said.

Social media platforms can also implement safeguards to protect users, especially young people, from tobacco content, for instance by including warning labels on posts that include tobacco-related terms or images. At the federal level, regulators might also choose to place stricter limits on the way tobacco companies are permitted to promote their products online.

The researchers next plan to explore the effectiveness and reach of social media tobacco prevention campaigns. They also aim to delve deeper into specific platforms used by young people, such as TikTok, and investigate how tobacco-related videos can impact susceptibility.

Source: University of Southern California

You Can’t Outrun a Poor Diet, Large Study Shows

Photo by Ketut Subiyanto on Pexels

New research reported in the British Journal of Sports Medicine has found that the detrimental effects of a poor diet on mortality risk are not counteracted by exercise, showing that you cannot ‘outrun’ a poor diet.

Study participants with both high levels of physical activity and a high-quality diet were found to have the lowest mortality risk.

The University of Sydney-led researchers analysed the independent and joint effects of diet and physical activity on all-cause, cardiovascular disease and cancer mortality using a large population-based sample of 360 600 adults from the UK Biobank.

High quality diets were defined as including at least five portions of fruit and vegetables every day, two portions of fish per week and lower consumption of red meat, particularly processed meat.

The study revealed that for those who had high levels of physical activity and a high-quality diet, their mortality risk was reduced by 17% from all causes, 19% from cardiovascular disease and 27% from selected cancers, as compared with those with the worst diet who were physically inactive.

Lead author Associate Professor Melody Ding at the University of Sydney said: “Both regular physical activity and a healthy diet play an important role in promoting health and longevity.

“Some people may think they could offset the impacts of a poor diet with high levels of exercise or offset the impacts of low physical activity with a high-quality diet, but the data shows that unfortunately this is not the case.”

A small collection of studies have previously found that high-intensity exercise may counteract detrimental physiological responses to over-eating.

However, the long-term effects on how diet and physical activity interact with each other remained less explored. The study findings confirm the importance of both physical activity and quality diet in all-cause and cause-specific mortality.

“This study reinforces the importance of both physical activity and diet quality for achieving the greatest reduction in mortality risk,” said Assoc Prof Ding.

“Public health messages and clinical advice should focus on promoting both physical activity and dietary guidelines to promote healthy longevity.”

Source: The University of Sydney

Maternal Phthalates Exposure Increases Preterm Birth Risk

Source: Anna Hecker on Unsplash

A National Institutes of Health study has found that pregnant women who were exposed to multiple phthalates during pregnancy had an increased risk of preterm birth. The most significant correlation was for a phthalate most commonly used in nail polish and cosmetics.

Used in a great variety of products such as cosmetics and food packaging, phthalates are endocrine-disrupting chemicals that are known to have a wide range of health effects on humans. This is especially true of children, due to their impact on development as well as on the reproductive system.

Researchers analysed data from more than 6000 pregnant women in the US, and found that women with higher concentrations of several phthalate metabolites in their urine had increased risks of preterm birth.

“Having a preterm birth can be dangerous for both baby and mom, so it is important to identify risk factors that could prevent it,” said epidemiologist Kelly Ferguson, PhD, the senior author on the study published in JAMA Pediatrics.

Data came from 16 US studies that included individual participant data on prenatal urinary phthalate metabolites (a marker of exposure to phthalates) as well as the timing of delivery. The researchers analysed data from a total of 6045 pregnant women who delivered between 1983 and 2018, 9% of whom delivered preterm. Phthalate metabolites were detected in more than 96% of urine samples.

Exposure to four of the 11 phthalates found in the pregnant women was associated with a 14–16% greater probability of having a preterm birth. The most consistent findings were for exposure to a phthalate that is used commonly in personal care products like nail polish and cosmetics.

Using statistical models to simulate interventions that reduce phthalate exposures, the researchers found that reducing the mixture of phthalate metabolite levels by 50% could reduce preterm births by 12% on average. Interventions targeting behaviours, such as trying to select phthalate-free personal care products (if listed on label), voluntary actions from companies to reduce phthalates in their products, or changes in standards and regulations could contribute to exposure reduction and protect pregnancies.

“It is difficult for people to completely eliminate exposure to these chemicals in everyday life, but our results show that even small reductions within a large population could have positive impacts on both mothers and their children,” said Barrett Welch, PhD, first author on the study.

Eating fresh, home-cooked food, avoiding processed food that comes in plastic containers or wrapping, and selecting fragrance-free products or those labeled ‘phthalate-free’, are examples of things people can do that may reduce their exposures. Changes to the amount and types of products that contain phthalates could also reduce exposures.

The researchers are undertaking further studies to better understand the mechanisms behind how phthalates affect pregnancy and to find ways for mothers to reduce their exposures.

Source: National Institutes of Health

The Brain Unconsciously Excels at Spotting Deepfakes

Photo by Cottonbro on Pexels

When looking at real and ‘deepfake’ faces created by AI, observers can’t consciously recognise the difference – but their brains can, according to new research which appears in Vision Research.

Deepfakes – convincing computer-generated videos, images, audio or text – are rife in the spread of disinformation, fraud and counterfeiting.

For example, in 2016, a Russian troll farm deployed over 50 000 bots on Twitter, making use of deepfakes as profile pictures, to try to influence the outcome of the US presidential election, which according to some research may have boosted Donald Trump’s votes by 3%. More recently, a deepfake video of Volodymyr Zelensky urging his troops to surrender to Russian forces surfaced on social media, muddying people’s understanding of the war in Ukraine with potential, high-stakes implications.

Fortunately, neuroscientists have discovered a new way to spot these insidious fakes: people’s brains are able to detect AI-generated fake faces, even though people could not distinguish between real and fake faces.

When looking at participants’ brain activity, the University of Sydney researchers found deepfakes could be identified 54% of the time. However, when participants were asked to verbally identify the deepfakes, they could only do this 37% of the time.

“Although the brain accuracy rate in this study is low – 54 percent – it is statistically reliable,” said senior researcher Associate Professor Thomas Carlson.

“That tells us the brain can spot the difference between deepfakes and authentic images.”
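To see how a modest accuracy like 54% can still be "statistically reliable", it helps to check how unlikely that hit rate is under pure 50/50 guessing. A quick sketch using a normal approximation to the binomial, with a hypothetical trial count (the study's actual number of EEG trials is not given here):

```python
import math

def binomial_p_two_sided(k: int, n: int, p0: float = 0.5) -> float:
    """Two-sided p-value for k successes in n trials under chance level p0,
    using a normal approximation to the binomial distribution."""
    mean = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    z = abs(k - mean) / sd
    # Two-sided tail probability of the standard normal.
    return math.erfc(z / math.sqrt(2))

# Hypothetical: 540 correct classifications out of 1000 trials (54% accuracy).
p_value = binomial_p_two_sided(540, 1000)
print(f"p = {p_value:.4f}")
```

With 1000 trials, 54% correct gives a p-value around 0.01, comfortably below the conventional 0.05 threshold; the same accuracy over only 50 trials would not be distinguishable from chance, which is why the reliability of a small effect depends on the number of trials.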

Spotting bots and scams

The researchers say their findings may be a starting-off point in the battle against deepfakes.

“The fact that the brain can detect deepfakes means current deepfakes are flawed,” Associate Professor Carlson said. “If we can learn how the brain spots deepfakes, we could use this information to create algorithms to flag potential deepfakes on digital platforms like Facebook and Twitter.”

They project that in the more distant future, technology based on their and similar studies could be developed to alert people to deepfake scams in real time. Security personnel, for example, might wear EEG-enabled helmets to alert them to deepfakes.

Associate Professor Carlson said: “EEG-enabled helmets could have been helpful in preventing recent bank heist and corporate fraud cases in Dubai and the UK, where scammers used cloned voice technology to steal tens of millions of dollars. In these cases, finance personnel thought they heard the voice of a trusted client or associate and were duped into transferring funds.”

Method: eyes versus brain

The researchers conducted two experiments, one behavioural and one using neuroimaging. In the behavioural experiment, participants were shown 50 images of real and computer-generated fake faces and were asked to identify which were real and which were fake.

Then, a different group of participants were shown the same images while their brain activity was recorded using EEG, without knowing that half the images were fakes.

The researchers then compared the results of the two experiments, finding people’s brains were better at detecting deepfakes than their eyes.

A starting point

The researchers stress that the novelty of their study makes it merely a starting point. It won’t immediately – or even ever – lead to a foolproof way of detecting deepfakes.

Associate Professor Carlson said: “More research must be done. What gives us hope is that deepfakes are created by computer programs, and these programs leave ‘fingerprints’ that can be detected.

“Our finding about the brain’s deepfake-spotting power means we might have another tool to fight back against deepfakes and the spread of disinformation.”

Source: The University of Sydney

MRI Scans of Video Gamers Show Superior Sensorimotor Decision-making

Photo by Igor Karimov on Unsplash

Video gamers who play regularly show superior sensorimotor decision-making skills and enhanced activity in key regions of the brain as compared to non-players, according to a recent US study published in the journal Neuroimage: Reports.

Analysis of functional magnetic resonance imaging (fMRI) scans of video game players suggested that video games could be a useful tool for training in perceptual decision-making, the authors said.

“Video games are played by the overwhelming majority of our youth more than three hours every week, but the beneficial effects on decision-making abilities and the brain are not exactly known,” said lead researcher Mukesh Dhamala, associate professor at Georgia State University.

“Our work provides some answers on that,” Assoc Prof Dhamala elaborated. “Video game playing can effectively be used for training – for example, decision-making efficiency training and therapeutic interventions – once the relevant brain networks are identified.”

Assoc Prof Dhamala was the adviser for Tim Jordan, PhD, the paper’s lead author, who had a personal example of how such research could inform the use of video games for training the brain.

Dr Jordan had weak vision in one eye as a child. As part of a research study when he was about 5, he was asked to cover his good eye and play video games as a way to strengthen the vision in the weak one. Dr Jordan credits video game training with helping him go from legally blind in one eye to building strong capacity for visual processing, allowing him to eventually play lacrosse and paintball. He is now a postdoctoral researcher at UCLA.

The Georgia State research project involved 47 university-aged participants, with 28 categorised as regular video game players and 19 as non-players.

The subjects lay inside an fMRI machine with a mirror that let them see a cue immediately followed by a display of moving dots. Participants were asked to press a button with their right or left hand to indicate the direction the dots were moving, or to resist pressing either button if there was no directional movement.

Video game players proved to be faster and more accurate with their responses. Analysis of the brain scans found that the differences were associated with enhanced activity in certain parts of the brain.

“These results indicate that video game playing potentially enhances several of the subprocesses for sensation, perception and mapping to action to improve decision-making skills,” the authors wrote. “These findings begin to illuminate how video game playing alters the brain in order to improve task performance and their potential implications for increasing task-specific activity.”

No trade-off was observed between speed and accuracy of response – the video game players were better on both measures.

“This lack of speed-accuracy trade-off would indicate video game playing as a good candidate for cognitive training as it pertains to decision-making,” the authors wrote.

Source: Georgia State University