Researchers from the University of Birmingham have designed and developed a novel diagnostic device to detect traumatic brain injury (TBI) by shining a safe laser into the eye.
The technique is radically different from other diagnostic methods and is expected to be developed into a hand-held device for use in the critical ‘golden hour’ after traumatic brain injury, when life-critical decisions on treatment must be made.
The device, described in Science Advances, combines a class 1, CE-marked, eye-safe laser with a bespoke Raman spectroscopy system. Raman spectroscopy uses light to reveal the biochemical and structural properties of molecules by detecting how they scatter it, allowing the device to detect the presence and levels of known biomarkers for brain injury.
There is an urgent need for new technologies to improve the timeliness of TBI diagnosis. TBI is caused by sudden shock or impact to the head, which can cause mild to severe injury to the brain, and rapid intervention is necessary to prevent further irreversible damage.
Diagnosis at the point of injury is difficult. Moreover, radiological investigations such as X-ray or MRI are very expensive and slow to show results.
Birmingham researchers, led by Professor Pola Goldberg Oppenheimer from the School of Chemical Engineering, designed and developed the novel diagnostic hand-held device to assess patients as soon as injury occurs.
It is fast, precise and non-invasive, causes the patient no additional discomfort, can provide information on the severity of the trauma, and will be suitable for on-site assessment of TBI.
Professor Pola Goldberg Oppenheimer said: “Early diagnosis of TBI is crucial, as life-critical decisions on treatment must be made within the first ‘golden hour’ after injury. However, current diagnostic procedures rely on observation by ambulance crews, and MRI or CT scans at a hospital – which may be some distance away.”
The device works by scanning the retina where the optic nerve sits. Since the optic nerve is so closely linked to the brain, it carries the same biological information in the form of protein and lipid biomarkers.
These biomarkers exist in a very tightly regulated balance, meaning even the slightest change may have serious effects on brain health. TBI causes these biomarkers to change, indicating that something is wrong.
Previous research has demonstrated that the technology can accurately detect changes in animal brain and eye tissues across different severities of brain injury, picking up even the slightest changes.1,2,3
The device detailed in the current paper detects and analyses the composition and balance of these biomarkers to create ‘molecular fingerprints’.
The current study details the development, manufacture, and optimisation of a proof-of-concept prototype, and its use in reading biochemical fingerprints of brain injury on the optic nerve, to see whether it is a viable and effective approach for initial ‘on the scene’ diagnosis of TBI.
The researchers constructed a phantom eye to test its alignment and ability to focus on the back of the eye, used animal tissue to test whether it could discern between TBI and non-TBI states, and also developed decision support tools for the device, using AI, to rapidly classify TBIs.
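The AI decision-support step – rapidly sorting measurements into injury classes – can be illustrated with a minimal sketch. Everything below (the synthetic spectra, the band shift per class, the nearest-centroid model) is an invented stand-in for the paper’s undisclosed models and data, not the authors’ actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, synthetic "Raman spectra": 200 wavenumber bins per sample.
# Real spectra would come from the device; these are invented for illustration.
def synth_spectrum(shift, n_bins=200):
    x = np.linspace(0, 1, n_bins)
    peak = np.exp(-((x - 0.5 - shift) ** 2) / 0.002)  # one shifted biomarker band
    return peak + 0.05 * rng.standard_normal(n_bins)

# Assumed band shift per injury class (invented for the sketch).
classes = {"no-TBI": 0.00, "mild": 0.03, "severe": 0.08}
train = {name: np.stack([synth_spectrum(s) for _ in range(20)])
         for name, s in classes.items()}

# Nearest-centroid classifier: label a new spectrum by the closest class mean.
centroids = {name: spectra.mean(axis=0) for name, spectra in train.items()}

def classify(spectrum):
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))

print(classify(synth_spectrum(0.08)))  # severe
```

Any real triage model would be trained on measured spectra and validated clinically; the point of the sketch is only that small, systematic biomarker shifts can be separated automatically.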
The device is now ready for further evaluation including clinical feasibility and efficacy studies, and patient acceptability.
The researchers expect the diagnostic device to be developed into a portable technology suitable for point-of-care use, capable of rapidly determining whether a TBI has occurred and classifying it as mild, moderate or severe, thereby directing triage appropriately and in a timely manner.
An intense international effort to improve the resolution of magnetic resonance imaging (MRI) for studying the human brain has culminated in an ultra-high resolution 7 Tesla scanner that records up to 10 times more detail than current 7T scanners and over 50 times more detail than current 3T scanners, the mainstay of most hospitals.
This next-generation, or NexGen, 7T functional MRI (fMRI) scanner can resolve features 0.4mm across, compared to the 2–3mm typical of today’s standard 3T fMRIs. It is described in a paper published in Nature Methods.
“The NexGen 7T scanner is a new tool that allows us to look at the brain circuitry underlying different diseases of the brain with higher spatial resolution in fMRI, diffusion and structural imaging, and therefore to perform human neuroscience research at higher granularity,” said David Feinberg, the director of the project to build the scanner. “The ultra-high resolution scanner will allow research on underlying changes in brain circuitry in a multitude of brain disorders, including degenerative diseases, schizophrenia and developmental disorders, including autism spectrum disorder.”
The improved resolution will help neuroscientists probe the neuronal circuits in different regions of the brain’s neocortex and allow researchers to track signals propagating from one area of the cortex to another, and perhaps discover underlying causes of developmental disorders. This could lead to better ways of diagnosing brain disorders, perhaps by identifying new biomarkers that would allow mental disorders to be diagnosed earlier or, more precisely, to select the best therapy.
“Normally, MRI is not fast enough at all to see the direction of the information being passed from one area of the brain to another,” Feinberg said. “The scanner’s higher spatial resolution can identify activity at different depths in the brain’s cortex to indirectly reveal brain circuitry by differentiating activity in different cell layers of the cortex.”
This is possible because neuroscientists have found in vision brain areas that the superficial and deepest cortex layers incorporate ‘top-down’ circuits, that is, they receive information from higher cortical brain areas, whereas the middle cortex involves ‘bottom-up’ circuitry, receiving sensory input. Pinpointing the fMRI activity to a specific depth in the cortex lets neuroscientists track the flow of information throughout the brain and cortex.
With the higher spatial resolution, neuroscientists will be able to home in on the activity of something on the order of 850 individual neurons within a single voxel – a 3D pixel – instead of the 600 000 recorded with standard hospital MRIs, said Silvia Bunge, a UC Berkeley professor of psychology who is one of the first to use the NexGen 7T to conduct research on a human brain.
“We were able to look at the layer profile of the prefrontal cortex, and it’s beautiful,” said Bunge, who studies abstract reasoning. “It’s so exciting to have this state-of-the-art, world-class machine.”
For William Jagust, a UC Berkeley professor of public health who studies the brain changes associated with Alzheimer’s disease, the improved resolution could finally help connect the dots between observed changes due to Alzheimer’s that occur in the brain – abnormal clumps of protein called beta amyloid and tau – and changes in memory.
“We know that part of the memory system in the brain degenerates as we get older, but we know little about the actual changes to the memory system – we can only go so far because of the resolution of our current MRI systems,” said Jagust. “With this new scanner, we think we’re going to be able to take apart a lot more carefully exactly where things have gone wrong. This could help with diagnosis or predicting outcomes in normal people.”
Jack Gallant, a UC Berkeley professor of psychology, hopes the scanner will help neuroscientists discover how functional changes in the brain lead to developmental and mental disorders such as dyslexia, autism and schizophrenia, or that result from neurological disorders, such as dementia and stroke.
“Mental disorders have an enormous impact on individuals, families and society. Together they represent about 10% of the US GDP. Mental disorders are fundamentally disorders of brain function, but functional measures are not used currently to diagnose most brain disorders or to look to see if a treatment’s working. Instead, these disorders are diagnosed behaviourally. This is a weak approach, because there are a lot of different mental brain states that can lead to exactly the same behaviour,” Gallant said. “What we need is more powerful MRI machines like this so that we can map, at high resolution, how information is represented in the brain. To me this is the big potential clinical benefit of ultra-high resolution MRI.”
Funding initiatives lead to ‘quantum leap’
The breakthrough came about through $22 million of funding from various government and private sector sources.
Incorporating newly developed hardware technology from those groups, Siemens collaborated with Feinberg’s team to rebuild a conventional 7 Tesla MRI scanner delivered to UC Berkeley in 2000 to improve the spatial resolution in pictures captured during brain scans.
“There’s been a large increase worldwide in the number of sites that use 7T MRI scanners, but they were mostly for development and were difficult to use,” said Nicolas Boulant, a physicist visiting from the NeuroSpin project at the University of Paris-Saclay, where he leads the team that operates the world’s only 11.7 Tesla MRI scanner, the strongest magnetic field employed to date. “David’s team really put together many ingredients to make a quantum leap at 7 Tesla, to go beyond what was achievable before and gain performance.”
Boulant hopes to adapt some of the new ingredients in the NexGen 7T – in particular, redesigned gradient coils – to eventually achieve even better resolution with the 11.7 Tesla MRI scanner. The gradient coils generate a rising magnetic field across the brain so that each part of the brain sees a different field strength, which helps to precisely map brain activity.
“The higher the magnetic field, the more difficult it is to really grab the potential promised by these higher-field MRI scanners to see finer details in the human brain,” he said. “You need all this peripheral equipment, which needs to be on steroids to meet those promises. The NexGen 7T is really a game-changer when you want to do neuro MRI.”
To reach higher spatial resolution, the NexGen 7T scanner had to be designed with a greatly improved gradient coil and with larger receiver array coils – which pick up the brain signals – using from 64 to 128 channels to achieve a higher signal-to-noise ratio (SNR) in the cortex and faster data acquisition. All these improvements were combined with a higher signal from the ultra-high field 7T magnet to achieve cumulative gains in the scanner performance.
The extremely powerful gradient coil is the first to be made with three layers of wire windings. Designed by Peter Dietz at Siemens in Erlangen, Germany, the “Impulse” gradient has 10 times the performance of gradient systems in current 7T scanners. Mathias Davids, then a physics graduate student at Heidelberg University in Mannheim, Germany, and a member of Feinberg’s team, collaborated with Dietz in performing physiologic modelling to allow a faster gradient slew rate – a measure of how quickly the magnetic field changes across the brain – while remaining under the neuronal stimulation thresholds of the human body.
“It’s designed so that the gradient pulses can be turned on and off much quicker – in microseconds – to record the signals much quicker, and also so the much higher amplitude gradients can be utilised without stimulating the peripheral nerves in the body or stimulating the heart, which are physiologic limitations,” Feinberg said.
A second key development in the scanner, Feinberg said, is the 128-channel receiver system that replaces the standard 32 channels. The large receiver coil arrays built by Shajan Gunamony of MR CoilTech in Glasgow, UK, gave a higher signal-to-noise ratio in the cerebral cortex and also provided higher parallel imaging acceleration for faster data acquisition to encode large image matrices for ultra high resolution fMRI and structural MRI.
To take advantage of the new hardware technology, Suhyung Park, Rüdiger Stirnberg, Renzo Huber, Xiaozhi Cao and Feinberg designed new pulse sequences of precisely timed gradient pulses to rapidly achieve ultra high resolution. The smaller voxels, measured in units of cubic millimetres and less than 0.1 microlitre, provide a 3D image resolution that is 10 times higher than that of previous 7T fMRIs and 125 times higher than the typical hospital 3T MRI scanners used for medical diagnosis.
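The quoted figures can be sanity-checked with simple voxel arithmetic. The 0.4 mm NexGen voxel is stated above; the 2.0 mm linear voxel for a typical hospital 3T scanner is an assumption consistent with the 2–3 mm range quoted earlier:

```python
# Voxel volume scales with the cube of the linear voxel size; 1 mm^3 = 1 µL.
nexgen_mm = 0.4  # NexGen 7T linear resolution (stated)
t3_mm = 2.0      # typical hospital 3T linear resolution (assumed)

print(round(nexgen_mm ** 3, 3))        # 0.064 µL -- under the "<0.1 µL" quoted
print(round((t3_mm / nexgen_mm) ** 3)) # 125 -- the "125 times higher" 3D resolution

# A 10x volume gain over previous 7T fMRI implies a previous linear voxel of:
print(round(nexgen_mm * 10 ** (1 / 3), 2))  # ~0.86 mm
```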
The most common MRI scanners employ superconducting magnets that produce a steady magnetic field of 3 Tesla – 90 000 times stronger than Earth’s magnetic field and 3000 times stronger than a fridge magnet.
“A 3T fMRI scanner can resolve spatial details with a resolution of about 2 to 3mm. The cortical circuits that underpin thought and behaviour are about 0.5mm across, so standard research scanners cannot resolve these important structures,” Gallant said.
fMRI, meanwhile, focuses on blood flow in arteries and veins and can vividly distinguish oxygenated haemoglobin funnelling into working areas of the brain from deoxygenated haemoglobin in less active areas. This allows neuroscientists to determine which areas of the brain are engaged during a specific task.
But again, the 3mm resolution of a 3T fMRI can distinguish only large veins, not the small ones that could indicate activity within microcircuits.
The NexGen 7T will allow neuroscientists to pinpoint activity within the thin cortical layers in the grey matter, as well as within the narrow column circuits that are organised perpendicular to the layers. These columns are of special interest to Gallant, who studies how the world we see is represented in the visual cortex. He has actually been able to reconstruct what a person is seeing based solely on recordings from the brain’s visual cortex.
“The machine that David has built, in theory, should get down to 500 microns, or something like that, which is way better than anything else – we’re very near the scale you would want if you’re getting signals from a single column, for example,” Gallant said. “It’s fantastic. The whole thing about MRI is how big is the little volumetric unit, the voxel […] that’s the only thing that matters.”
For the moment, NexGen 7T brain scanners must be custom-built from regular 7T scanners but should be a lot cheaper than the $22 million required to build the first one.
Feinberg said that UC Berkeley’s NexGen 7T scanner technology will be disseminated by Siemens and MR CoilTech Ltd.
“My view is that we may never be able to understand the human brain on the cellular synaptic circuitry level, where there are more connections than there are stars in the universe,” Feinberg said. “But we are now able to see signal patterns of brain circuits and begin to tease apart feedback and feedforward circuitry at different depths of the cerebral cortex. And in that sense, we will soon be able to understand the human brain organisation better, which will give us a new view into disease processes and ultimately allow us to test new therapies. We are seeking a better understanding and view of brain function that we can reliably test and reproducibly use noninvasively.”
A 2000-year-old practice by Chinese herbalists – examining the human tongue for signs of disease – is now being embraced by computer scientists using machine learning and artificial intelligence.
Tongue diagnostic systems are fast gaining traction due to an increase in remote health monitoring worldwide, and a new paper in AIP Conference Proceedings provides more evidence of the increasing accuracy of this technology to detect disease.
Engineers from Middle Technical University (MTU) in Baghdad and the University of South Australia (UniSA) used a USB web camera and computer to capture tongue images from 50 patients with diabetes, renal failure and anaemia, comparing colours with a database of 9000 tongue images.
Using image processing techniques, they correctly diagnosed the diseases in 94 per cent of cases, compared to laboratory results. A message specifying the tongue colour and disease was also sent via text to the patient or a nominated health provider.
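As a rough illustration of colour-based tongue diagnosis (not the MTU/UniSA pipeline, whose 9000-image database and matching rules are not reproduced here), a nearest-reference-colour sketch with entirely invented reference values:

```python
import numpy as np

# Invented reference colours (mean RGB of the tongue region) per condition.
# The real system compares against a large image database; these values are
# purely illustrative.
REFERENCE = {
    "diabetes":      (200, 180, 90),    # yellowish coating
    "renal failure": (150, 60, 120),    # purplish
    "anaemia":       (230, 190, 190),   # pale
    "healthy":       (200, 90, 100),    # pink-red
}

def mean_colour(image):
    """Average RGB over an already-segmented tongue region."""
    return image.reshape(-1, 3).mean(axis=0)

def diagnose(image):
    colour = mean_colour(image)
    return min(REFERENCE,
               key=lambda k: np.linalg.norm(colour - np.array(REFERENCE[k])))

# A synthetic 10x10 "tongue image" close to the yellowish reference:
img = np.full((10, 10, 3), (205, 175, 95), dtype=float)
print(diagnose(img))  # diabetes
```

A production system would segment the tongue first and use far richer colour and texture features; the sketch shows only the nearest-reference-colour idea.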
MTU and UniSA Adjunct Associate Professor Ali Al-Naji and his colleagues have reviewed the worldwide advances in computer-aided disease diagnosis, based on tongue colour.
“Thousands of years ago, Chinese medicine pioneered the practice of examining the tongue to detect illness,” Assoc Prof Al-Naji says.
“Conventional medicine has long endorsed this method, demonstrating that the colour, shape, and thickness of the tongue can reveal signs of diabetes, liver issues, circulatory and digestive problems, as well as blood and heart diseases.
“Taking this a step further, new methods for diagnosing disease from the tongue’s appearance are now being done remotely using artificial intelligence and a camera – even a smartphone.
“Computerised tongue analysis is highly accurate and could help diagnose diseases remotely in a safe, effective, easy, painless, and cost-effective way. This is especially relevant in the wake of a global pandemic like COVID, where access to health centres can be compromised.”
Diabetes patients typically have a yellow tongue, cancer patients a purple tongue with a thick greasy coating, and acute stroke patients present with a red tongue that is often crooked.
A 2022 study in Ukraine analysing tongue images of 135 COVID patients via a smartphone showed that 64% of patients with a mild infection had a pale pink tongue, 62% of patients with a moderate infection had a red tongue, and 99% of patients with a severe COVID infection had a dark red tongue.
Previous studies using tongue diagnostic systems have accurately diagnosed appendicitis, diabetes, and thyroid disease.
“It is possible to diagnose with 80% accuracy more than 10 diseases that cause a visible change in tongue colour. In our study we achieved a 94% accuracy with three diseases, so the potential is there to fine tune this research even further,” Assoc Prof Al-Naji says.
A new artificial intelligence (AI)-based method can extract as much information about subtle neurodegenerative changes in the brain from computed tomography (CT) scans as can typically be obtained with magnetic resonance imaging (MRI). The method, reported in the journal Alzheimer’s & Dementia, could enhance diagnostic support, particularly in primary care, for conditions such as dementia and other brain disorders.
Compared to MRI, which requires powerful superconducting magnets and their associated cryogenic cooling, computed tomography (CT) is a relatively inexpensive and widely available imaging technology. However, CT is considered inferior to MRI when it comes to reproducing subtle structural changes in the brain or flow changes in the ventricular system. Such imaging must therefore currently be carried out by specialist departments at larger hospitals equipped with MRI.
AI trained on MRI images
Created with deep learning, a form of AI, the software has been trained to transfer interpretations from MRI images to CT images of the same brains. The new software can provide diagnostic support for radiologists and other professionals who interpret CT images.
“Our method generates diagnostically useful data from routine CT scans that, in some cases, is as good as an MRI scan performed in specialist healthcare,” says Michael Schöll, a professor at Sahlgrenska Academy who led the work involved in the study, carried out in collaboration with researchers at Karolinska Institutet, the National University of Singapore, and Lund University.
“The point is that this simple, quick method can provide much more information from examinations that are already carried out routinely within primary care, as well as in certain specialist healthcare investigations. In its initial stage, the method can support dementia diagnosis; however, it is also likely to have other applications within neuroradiology.”
Reliable decision-making support
This is a well-validated clinical application of AI-based algorithms, and has the potential to become a fast and reliable form of decision-making support that effectively reduces the number of false negatives. The researchers believe that this solution can improve diagnostics in primary care, optimising patient flow to specialist care.
“This is a major step forward for imaging diagnosis,” says Meera Srikrishna, a postdoctoral researcher at the University of Gothenburg and lead author of the study.
“It is now possible to measure the size of different structures or regions of the brain in a similar way to advanced analysis of MRI images. The software makes it possible to segment the brain’s constituent parts in the image and to measure its volume, even though the image quality is not as high with CT.”
Applications for other brain diseases
The software was trained on images of 1117 people, all of whom underwent both CT and MRI imaging. The current study mainly involved healthy older individuals and patients with various forms of dementia. Another application that the team is now investigating is for normal pressure hydrocephalus (NPH).
With NPH, the team has obtained new results indicating that the method can be used both during diagnosis and to monitor the effects of treatment. NPH is a condition that occurs particularly in older people, whereby fluid builds up in the cerebral ventricular system and results in neurological symptoms. About two percent of all people over the age of 65 are affected. Because diagnosis can be complicated and the condition risks being confused with other diseases, many cases are likely to be missed.
“NPH is difficult to diagnose, and it can also be hard to safely evaluate the effect of shunt surgery to drain the fluid in the brain,” continues Michael. “We therefore believe that our method can make a big difference when caring for these patients.”
The software has been developed over the course of several years, and development is now continuing in cooperation with clinics in Sweden, the UK, and the US together with a company, which is a requirement for the innovation to be approved and transferred to healthcare.
In photoacoustic imaging, laser light pulsed through the skin into tissue produces ultrasound signals from which internal structures can be imaged. The technique works well for people with light skin but struggles to produce clear pictures for patients with darker skin. A Johns Hopkins University-led team has now found a way to deliver clear pictures of internal anatomy regardless of skin tone. Their technique is described in the journal Photoacoustics.
In experiments the new imaging technique produced significantly sharper images for all people – and excelled with darker skin tones. It produced much clearer images of arteries running through the forearms of all participants, compared to standard imaging methods where it was nearly impossible to distinguish the arteries in darker-skinned individuals.
“When you’re imaging through skin with light, it’s kind of like the elephant in the room that there are important biases and challenges for people with darker skin compared to those with lighter skin tones,” said co-senior author Muyinatu “Bisi” Bell, Associate Professor at Johns Hopkins. “Our work demonstrates that equitable imaging technology is possible.”
“We show not only there is a problem with current methods but, more importantly, what we can do to reduce this bias,” Bell said.
The findings advance a 2020 report that showed pulse oximeters, which measure oxygen rates in the blood, have higher error rates in Black patients.
“There were patients with darker skin tones who were basically being sent home to die because the sensor wasn’t calibrated toward their skin tone,” Bell said.
Bell’s team created a new algorithm to process information from photoacoustic imaging, a method that combines ultrasound and light waves to render medical images. Body tissue absorbing this light expands, producing subtle sound waves that ultrasound devices turn into images of blood vessels, tumours, and other internal structures. But in people with darker skin tones, melanin absorbs more of this light, which yields cluttered or noisy signals for ultrasound machines.
The team was able to filter the unwanted signals from images of darker skin, in the way a camera filter sharpens a blurry picture, to provide more accurate details about the location and presence of internal biological structures.
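The published algorithm is not reproduced here, but the general idea of clutter filtering – removing a strong, slowly varying background so that a weak, localised echo stands out – can be sketched with a simple moving-average filter on a synthetic one-dimensional signal (all values invented for illustration):

```python
import numpy as np

# Synthetic 1D example: a weak, sharp "vessel" echo buried under strong,
# slowly varying "skin" clutter.
t = np.linspace(0, 1, 1000)
vessel = np.exp(-((t - 0.6) ** 2) / 1e-4)  # sharp echo at t = 0.6
clutter = 3 * np.sin(3 * np.pi * t)        # slow, high-amplitude background
signal = clutter + vessel

# Estimate the slowly varying background with a moving average, then subtract
# it -- a crude analogue of filtering unwanted clutter out of image data.
kernel = np.ones(101) / 101
background = np.convolve(signal, kernel, mode="same")
filtered = signal - background

print(t[np.argmax(signal)])    # clutter dominates the raw signal
print(t[np.argmax(filtered)])  # ~0.6: the echo dominates after filtering
```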
The researchers are now working to apply the new findings to breast cancer imaging, since blood vessels can accumulate in and around tumours. Bell believes the work will improve surgical navigation as well as medical diagnostics.
“We’re aiming to mitigate, and ideally eliminate, bias in imaging technologies by considering a wider diversity of people, whether it’s skin tones, breast densities, body mass indexes – these are currently outliers for standard imaging techniques,” Bell said. “Our goal is to maximise the capabilities of our imaging systems for a wider range of our patient population.”
Early findings of a pair of studies from the University of Michigan Rogel Cancer Center shed light on new ways to anticipate recurrence in HPV-positive head and neck cancer sooner. The papers, published in Cancer and Oral Oncology, offer clinical and technological perspectives on how to measure if recurrence is happening earlier than current blood tests allow, and provide a framework for a new, more sensitive blood test that could help in this monitoring.
“When metastatic head and neck cancer returns, it impacts patients’ quality of life and can be disfiguring, interfering with the ability to talk, swallow, and even breathe,” said Paul Swiecicki, MD, associate medical director for the Oncology Clinical Trials Support Unit at Rogel. “As of now, there’s no test to monitor for its recurrence except watching for symptoms or potentially using a blood test which may not detect cancer until shortly before it clinically recurs.”
The paper in Cancer aims to identify different clinical ways that providers can more strategically track for recurrence. To do this, Swiecicki and his team needed to first understand what patient population was at the highest risk to then figure out an appropriate monitoring pattern.
The team examined 450 patients with metastatic head and neck cancer, including people with HPV-positive and HPV-negative cancer. HPV-positive cancer is caused by the human papillomavirus and is increasingly common in head and neck cancer patients. The team identified predictors of when recurrences would happen and to which organs the recurrent cancer would most commonly spread. Patients with HPV-positive cancers developed recurrent disease significantly later than those with HPV-negative cancers, and their cancers were more likely to spread to the lungs. Taken together, these characteristics may help create a “surveillance” method that combines routine blood testing and imaging to catch recurrences and intervene before the disease becomes incurable.
Swiecicki is quick to mention that, at this point, the results of this study are largely theoretical and provide a helpful framework to direct further research. That’s where the newly developed blood test, highlighted in Oral Oncology, comes into play.
Current blood biomarker tests, which detect fragments of tumour-shed DNA, may not be sensitive enough to detect a recurrence significantly earlier than clinical surveillance, though several studies with multiple types of tests are ongoing. A research team led by Muneesh Tewari, MD, PhD, Swiecicki and Chad Brenner, PhD, aimed to create a highly sensitive blood test that can detect cancer even when only a small number of DNA fragments is present, with the intention of providing a better option for detecting cancer earlier in patients.
Not only is this test more sensitive and able to detect a smaller number of DNA fragments in blood, but it’s innovative in other ways too, says first author Chandan Bhambhani, PhD: “We achieved this level of sensitivity by looking for nine different pieces of the HPV genome DNA all at once,” Bhambhani says.
Tewari says this is a step towards a more proactive approach to tackling recurrence in head and neck cancer. “As of now, we only have the tools to react to symptoms when they recur. We want to find a way to be able to detect what’s causing the symptoms much, much sooner, even before the symptoms appear.”
As a clinician, Swiecicki agrees. “It’s exciting to have the ability to potentially detect cancer before it’s incurable and offer us a window for clinical trials to see if we could intervene on cancer to help give people both a better quality of life and perhaps longer quality of life, and even convert their disease from incurable to curable. We don’t know if that’s the case yet, but this is the first tool needed for that to develop.”
Combining two heart scan techniques could help detect hypertrophic cardiomyopathy (HCM) before symptoms and signs on conventional tests appear, according to a new study led by UCL researchers. The researchers used two cutting-edge scanning techniques: cardiac diffusion tensor imaging (cDTI), which shows the heart’s microstructure, and perfusion cardiac MRI (perfusion CMR), which reveals microvascular disease. Their findings, published in Circulation, will help doctors select appropriate treatments.
HCM affects around 1 in 500 people in the UK, causing thickening of the heart muscle, and can lead to heart failure and cardiac arrest.
Researchers studied the hearts of three groups: healthy people, people who already had HCM, and people with an HCM-causing genetic mutation but no overt signs of disease.
The scans showed that, compared to healthy volunteers, people with overt signs of HCM have very abnormal organisation of their heart muscle cells and a high rate and severity of microvascular disease.
Crucially, the scans were also able to identify abnormal microstructure and microvascular disease in the people who had a problematic gene but no symptoms or muscle thickening: 28% had defects in their blood supply, compared to healthy volunteers. This meant that doctors were able to more accurately spot the early signs of HCM developing in patients’ hearts.
The first drug to slow HCM progression, mavacamten, has recently been approved for use in Europe and will allow doctors to reduce the severity of the disease once symptoms and muscle thickening have appeared. Genetic therapies are also in development which could prevent symptoms entirely by intercepting HCM development at an early stage.
Perfusion CMR is already being used in some clinics to help differentiate people with HCM from other causes of muscle thickening. The researchers think that these revolutionary new therapies, combined with cDTI and perfusion CMR scans, give doctors the best ever chance of treating people at risk of HCM early enough that the condition never develops.
Dr George Joy, who led the research with Professor James Moon and Dr Luis Lopes (all UCL Institute of Cardiovascular Science), said: “The ability to detect early signs of HCM could be crucial in trials testing treatments aimed at preventing early disease from progressing or correcting genetic mutations. The scans could also enable treatment to start earlier than we previously thought possible.
“We now want to see if we can use the scans to identify which patients without symptoms or heart muscle thickening are most at risk of developing severe HCM and its life-changing complications. The information provided from scans could therefore help doctors make better decisions on how best to care for each patient.”
Dr Luis Lopes (UCL Institute of Cardiovascular Science), senior author of the study, said: “By linking advanced imaging to our cohort of HCM patients (and relatives) with extensive genetic testing, this study detected microstructural abnormalities in vivo in mutation carriers for the first time and was the first to compare these parameters in HCM patients with and without a causal mutation.
“The findings allow us to understand more about the early subclinical manifestations of this serious condition but also provide additional clinical tools for screening, monitoring and hopefully in the near future for therapeutic decision-making.”
Researchers at the University of Eastern Finland have identified plasma protein-based biomarkers capable of identifying adolescents at risk of developing mental health issues. Such biomarkers could revolutionise early detection and prevention of mental health problems in young people. The results were published in Nature Mental Health.
Some 10–20% of adolescents struggle with mental health conditions, with the majority going undiagnosed and untreated. This points to a need for new, early indicators of mental health problems to catch these cases and intervene with treatment before the conditions progress.
In the study carried out in the research group of Professor Katja Kanninen, the researchers used self-reported Strengths and Difficulties Questionnaire (SDQ) scores to evaluate mental health risk in participants aged between 11 and 16 years. Blood sample analyses showed that 58 proteins were significantly associated with the SDQ score. Bioinformatic analyses were used to identify the biological processes and pathways linked with the identified plasma protein biomarker candidates. Key enriched pathways related to these proteins included immune responses, blood coagulation, neurogenesis, and neuronal degeneration. The study employed a novel symbolic regression algorithm to create predictive models that best separate low and high SDQ score groups.
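The study's symbolic regression algorithm is not described here, but its core idea — searching over candidate formulas built from measured markers for the one that best separates the low and high SDQ score groups — can be sketched as follows. All protein names, values, and the separation measure below are purely illustrative assumptions, not data or methods from the paper:

```python
import statistics

# Hypothetical plasma protein levels (arbitrary units) for two groups of
# adolescents, split by self-reported SDQ score. Invented for illustration.
low_sdq  = [{"protA": 1.0, "protB": 1.5}, {"protA": 1.2, "protB": 1.7},
            {"protA": 0.8, "protB": 1.3}]
high_sdq = [{"protA": 1.6, "protB": 0.9}, {"protA": 1.8, "protB": 1.1},
            {"protA": 1.4, "protB": 0.7}]

# A tiny "symbolic" search space: candidate formulas over the two markers.
candidates = {
    "protA":         lambda s: s["protA"],
    "protB":         lambda s: s["protB"],
    "protA - protB": lambda s: s["protA"] - s["protB"],
    "protA / protB": lambda s: s["protA"] / s["protB"],
}

def separation(expr):
    """Absolute difference of group means, scaled by the pooled spread."""
    lo = [expr(s) for s in low_sdq]
    hi = [expr(s) for s in high_sdq]
    spread = statistics.stdev(lo + hi)
    return abs(statistics.mean(hi) - statistics.mean(lo)) / spread

# Pick the formula that best separates the two groups.
best = max(candidates, key=lambda name: separation(candidates[name]))
print(best)  # → protA - protB
```

A real symbolic regression system evolves far richer expression trees over many features; this sketch only shows the selection principle of scoring candidate formulas by how cleanly they split the two groups.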
According to Professor Kanninen, plasma biomarker studies in mental disorders are an emerging field.
“Alterations in plasma proteins have been previously associated with various mental health disorders, such as depression, schizophrenia, psychotic disorders, and bipolar disorders. Our study supports these earlier findings and further revealed that specific plasma protein alterations could indicate a high risk for mental dysfunction in adolescents,” Professor Kanninen notes.
According to the researchers, this pilot study will be followed by more specific investigations of the potential biomarkers for identification of individuals at risk of mental health problems, opening a new avenue for advancements in adolescent mental health care.
In osteoporosis, treatment would be most effective with early detection – something not yet possible with current X-ray based osteoporosis diagnostic tests, which lack the requisite sensitivity. Now, researchers reporting in ACS Central Science have developed a biosensor that could someday help identify those most at risk for osteoporosis using less than a drop of blood.
Early intervention is critical to reducing the morbidity and mortality associated with osteoporosis. The most common technique used to measure changes in bone mineral density (BMD) – dual-energy X-ray absorptiometry – is not sensitive enough to detect BMD loss until a significant amount of damage has already occurred. Several genomic studies, however, have reported genetic variations known as single nucleotide polymorphisms (SNPs) that are associated with increased risk for osteoporosis. Using this information, Ciara K. O’Sullivan and colleagues wanted to develop a portable electrochemical device that would allow them to quickly detect five of these SNPs in finger-prick blood samples in a step toward early diagnosis.
The device involves an electrode array to which DNA fragments for each SNP are attached. When lysed whole blood is applied to the array, any DNA matching the SNPs binds the sequences and is amplified by recombinase polymerase amplification, which incorporates ferrocene, a label that enables electrochemical detection. Using this platform, the researchers detected osteoporosis-associated SNPs in 15 human blood samples, confirming their results with other methods.
As the DNA does not have to be purified from the blood, the analysis can be performed quickly (about 15 minutes) and inexpensively (less than $0.50 per SNP). Furthermore, because the equipment and reagents are readily accessible and portable, the researchers say that the device offers great potential for use in point-of-care settings, rather than being limited to a centralised laboratory. The technology is also versatile and can be readily adapted to detect other SNPs, as the researchers showed previously when identifying drug resistance in Mycobacterium tuberculosis from sputum and cardiomyopathy risk from blood. Although the device does not diagnose osteoporosis itself, it might help physicians identify people whom they should monitor more closely.
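The matching logic at the heart of any SNP assay — checking which allele a DNA fragment carries at a known position flanked by probe sequences — can be illustrated in code. The device itself does this chemically via probe binding; the SNP name, flanking sequences, and alleles below are invented for illustration:

```python
# Hypothetical SNP definition: a variable base sandwiched between two
# known flanking sequences, with a "risk" and a "normal" allele.
SNPS = {
    "rs_demo1": {"flank5": "ACGTAC", "flank3": "TTGCAA",
                 "risk": "A", "normal": "G"},
}

def genotype(read, snp):
    """Return which allele sits between the probe's flanking sequences."""
    i = read.find(snp["flank5"])
    if i == -1:
        return None  # upstream probe region absent from this fragment
    j = i + len(snp["flank5"])  # index of the variable base
    if not read[j + 1:].startswith(snp["flank3"]):
        return None  # downstream flank doesn't match
    base = read[j]
    for label in ("risk", "normal"):
        if base == snp[label]:
            return label
    return None  # unexpected base at the SNP position

read = "GGACGTACATTGCAATT"  # this fragment carries the risk allele 'A'
print(genotype(read, SNPS["rs_demo1"]))  # → risk
```

The chemical assay collapses all of this into hybridisation and amplification, which is what makes the 15-minute, unpurified-blood workflow possible.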
University of Virginia School of Medicine researchers have discovered a lipid biomarker to identify pregnant women at risk of preeclampsia, complications from which are the second-leading cause of maternal death around the world. Their findings are published in the Journal of Lipid Research.
The UVA scientists, led by Charles E. Chalfant, PhD, say that their finding opens the door to simple blood tests to screen patients. Further, the approach worked regardless of whether the women were on aspirin therapy, which is commonly prescribed to women thought to be at risk.
“Clinicians have been seeking simple tests to predict risk of preeclampsia before symptoms appear. Although alterations in some blood lipid levels have been known to occur in preeclampsia, they have not been endorsed as useful biomarkers. Our study presents the first comprehensive analysis of lipid species, yielding a distinctive profile associated with the development of preeclampsia,” said Chalfant. “The lipid ‘signature’ we described could significantly improve the ability to identify patients needing preventative treatment, like aspirin, or more careful monitoring for early signs of disease so that treatment could be initiated in a timely fashion.”
Preeclampsia affects up to 7% of all pregnancies. Symptoms typically appear after 20 weeks and include high blood pressure, kidney problems and abnormalities in blood clotting. The condition is associated with dangerous complications such as kidney and liver dysfunction and seizures, as well as a lifelong increased risk of heart disease for the mothers. An estimated 70 000 women around the world die from preeclampsia and its complications each year.
Doctors commonly recommend low-dose aspirin for at-risk women, but it works for only about half of patients, and it needs to be started within the first 16 weeks of pregnancy – well before symptoms appear. That makes it all the more important to identify women at risk early on, and to better understand preeclampsia in general.
Chalfant and his team wanted to find ‘biomarkers’ in the blood of pregnant women that could reveal their risk of developing preeclampsia. They examined blood plasma samples collected from 57 women in their first 24 weeks of pregnancy, then looked at whether the women went on to develop preeclampsia. The researchers found significant differences in ‘bioactive’ lipids between the blood of women who developed preeclampsia and that of women who did not.
This, the researchers say, should allow doctors to stratify women’s risk of developing preeclampsia by measuring lipid changes in their blood. The changes represent an important ‘lipid fingerprint’, the scientists say, that could be a useful tool for identifying, preventing and better treating preeclampsia.
“The application of our comprehensive lipid profiling method to routine obstetrical care could significantly reduce maternal and neonatal morbidity and mortality,” Chalfant said. “It represents an example of how personalised medicine could address a significant public health challenge.”