Rising Health Care Prices Result in Non-healthcare Job Cuts

Photo by Inzmam Khan

Rising health care prices in the US are leading employers outside the health care sector to lay off employees, according to a new study co-authored by a Yale economist.

The study, published June 24 as a working paper by the National Bureau of Economic Research (NBER), found that when health care prices increased, non-health care employers responded by reducing their payroll and cutting the jobs of middle-class workers. For the average county, a 1% increase in health care prices would reduce aggregate income in the area by approximately $8 million annually.

The study was conducted by a team of leading economists from Yale, the University of Chicago, the University of Wisconsin-Madison, Harvard University, the US Internal Revenue Service (IRS), and the US Department of the Treasury.

“When health care prices go up, jobs outside the health care sector go down,” said Zack Cooper, an associate professor of health policy and of economics at Yale University. “It’s broadly understood that employer-sponsored health insurance creates a link between health care markets and labour markets. Our research shows that middle- and lower-income workers are shouldering rising health care prices, and in many cases, it’s costing them their jobs. Bottom line: Rising health care costs are increasing economic inequality.”

“Rising prices are hurting the employment outcomes for workers who never went to the hospital.”

Zack Cooper, Yale economist

To better understand how rising health care prices affect labour market outcomes, the researchers brought together insurance claims data on approximately a third of adults with employer-sponsored insurance, health insurance premium data from the US Department of Labor, and IRS data from every income tax return filed in the United States between 2008 and 2017. They then used these data to trace out how an increase in health care prices, such as a $2000 increase on a $20 000 hospital bill, flows through to health spending, insurance premiums, employer payrolls, income and unemployment in counties, and the tax revenue collected by the federal government. 

“Many think that it’s insurers or employers who bear the burden of rising health care prices. We show that it’s really the workers themselves who are impacted,” said Zarek Brot-Goldberg, an assistant professor at the University of Chicago. “It’s vital to understand that rising health care prices aren’t just impacting patients. Rising prices are hurting the employment outcomes for workers who never went to the hospital.”

Hospital Mergers Raised Prices

For the new study, the authors used hospital mergers as a vehicle to assess the effect of price increases. From 2000 to 2020, there were over 1000 hospital mergers among the approximately 5000 US hospitals. In past work, the authors found that approximately 20% of hospital mergers should have been expected to raise prices by lessening competition, according to merger guidelines from the Department of Justice and the Federal Trade Commission. These mergers, on average, raised prices by 5%.

“We can use our analysis to estimate the effect of hospital mergers,” said Stuart Craig, an assistant professor at the University of Wisconsin-Madison Business School. “Our results show that a hospital merger that raised prices by 5% would result in $32 million in lost wages, 203 lost jobs, a $6.8 million reduction in federal tax revenue, and a death from suicide or overdose of a worker outside the health sector.”

The study also showed that because rising health care prices lead firms to let go of workers, a knock-on effect of hospital mergers is an increase in government spending on unemployment insurance and a reduction in the tax revenue collected by the federal government.

“It’s vital to point out that hospital mergers raise spending by the federal government and lower tax revenue at the same time,” said Cooper. “When prices in the US health sector rise, it’s actually a net negative for the economy. It’s leading to fewer jobs and precipitating all the consequences we associate with workers becoming unemployed.”

Source: Yale University

AI Models that can Identify Patient Demographics in X-rays are Also Unfair

Photo by Anna Shvets

Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analysing images such as X-rays. But these models have been found not to perform equally well across all demographic groups, usually faring worse on women and people of colour.

These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays – something that the most skilled radiologists can’t do.

Now, in a new study appearing in Nature, the same research team has found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps”, ie reduced accuracy when diagnosing images of people of different races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, which can lead to incorrect results for women, Black people, and other groups, the researchers say.

“It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says senior author Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science.

The researchers also found that they could retrain the models in a way that improves their fairness. However, their approach to “debiasing” worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.

“I think the main takeaways are, first, you should thoroughly evaluate any external models on your own data because any fairness guarantees that model developers provide on their training data may not transfer to your population. Second, whenever sufficient data is available, you should train models on your own data,” says Haoran Zhang, an MIT graduate student and one of the lead authors of the new paper.

Removing bias

As of May 2024, the FDA has approved 882 AI-enabled medical devices, with 671 of them designed to be used in radiology. Since 2022, when Ghassemi and her colleagues showed that these diagnostic models can accurately predict race, they and other researchers have shown that such models are also very good at predicting gender and age, even though the models are not trained on those tasks.

“Many popular machine learning models have superhuman demographic prediction capacity – radiologists cannot detect self-reported race from a chest X-ray,” Ghassemi says. “These are models that are good at predicting disease, but during training are learning to predict other things that may not be desirable.”

In this study, the researchers set out to explore why these models don’t work as well for certain groups. In particular, they wanted to see if the models were using demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.

Using publicly available chest X-ray datasets from Beth Israel Deaconess Medical Center (BIDMC) in Boston, the researchers trained models to predict whether patients had one of three different medical conditions: fluid buildup in the lungs, collapsed lung, or enlargement of the heart. Then, they tested the models on X-rays that were held out from the training data.

Overall, the models performed well, but most of them displayed “fairness gaps” – that is, discrepancies between accuracy rates for men and women, and for white and Black patients.
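A fairness gap of this kind reduces to a difference in per-group accuracy. As a rough illustration (a toy sketch with made-up labels and groups, not the study's evaluation code), it can be computed like this:

```python
from collections import defaultdict

def fairness_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two demographic groups."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        correct[g] += (t == p)  # bool counts as 0/1
        total[g] += 1
    acc = {g: correct[g] / total[g] for g in total}
    return max(acc.values()) - min(acc.values())

# Hypothetical labels: group A is classified perfectly, group B is not
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"gap: {fairness_gap(y_true, y_pred, groups):.2f}")  # gap: 0.25
```

A gap of zero would mean every group is diagnosed with equal accuracy; the study's concern is that the models with the best demographic-prediction ability showed the largest gaps.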

The models were also able to predict the gender, race, and age of the X-ray subjects. Additionally, there was a significant correlation between each model’s accuracy in making demographic predictions and the size of its fairness gap. This suggests that the models may be using demographic categorisations as a shortcut to make their disease predictions.

The researchers then tried to reduce the fairness gaps using two types of strategies. For one set of models, they trained them to optimise “subgroup robustness,” meaning that the models are rewarded for having better performance on the subgroup for which they have the worst performance, and penalised if their error rate for one group is higher than the others.
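In spirit (a toy sketch under assumed per-example losses, not the paper's implementation), subgroup-robust training evaluates the objective on the worst-performing group rather than on the overall average:

```python
def worst_group_objective(losses, groups):
    """Mean loss of the worst-performing group -- the quantity a
    subgroup-robust method minimises instead of the overall mean loss."""
    group_means = {}
    for g in set(groups):
        vals = [l for l, gg in zip(losses, groups) if gg == g]
        group_means[g] = sum(vals) / len(vals)
    return max(group_means.values())

# Hypothetical per-example losses: group B is doing much worse than A
losses = [0.2, 0.1, 0.3, 0.9, 0.8]
groups = ["A", "A", "A", "B", "B"]
print(f"{worst_group_objective(losses, groups):.2f}")  # 0.85, driven by group B
```

Minimising this quantity pushes the model to improve on whichever group it currently serves worst, which is the intuition behind the reward/penalty scheme described above.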

In another set of models, the researchers forced them to remove any demographic information from the images, using “group adversarial” approaches. Both strategies worked fairly well, the researchers found.

“For in-distribution data, you can use existing state-of-the-art methods to reduce fairness gaps without making significant trade-offs in overall performance,” Ghassemi says. “Subgroup robustness methods force models to be sensitive to mispredicting a specific group, and group adversarial methods try to remove group information completely.”

Not always fairer

However, those approaches only worked when the models were tested on data from the same types of patients that they were trained on, eg from BIDMC.

When the researchers tested the models that had been “debiased” using the BIDMC data to analyse patients from five other hospital datasets, they found that the models’ overall accuracy remained high, but some of them exhibited large fairness gaps.

“If you debias the model in one set of patients, that fairness does not necessarily hold as you move to a new set of patients from a different hospital in a different location,” Zhang says.

This is worrisome because in many cases, hospitals use models that have been developed on data from other hospitals, especially in cases where an off-the-shelf model is purchased, the researchers say.

“We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal – that is, they do not make the best trade-off between overall and subgroup performance – in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”

The researchers found that the models that were debiased using group adversarial approaches showed slightly more fairness when tested on new patient groups than those debiased with subgroup robustness methods. They now plan to try to develop and test additional methods to see if they can create models that do a better job of making fair predictions on new datasets.

The findings suggest that hospitals that use these types of AI models should evaluate them on their own patient population before beginning to use them, to make sure they aren’t giving inaccurate results for certain groups.

Is it Time to Stop Recommending Strict Salt Restriction in Heart Failure?

Credit: Pixabay CC0

For decades, it’s been thought that people with heart failure should drastically reduce their dietary salt intake, but some studies have suggested that salt restriction could be harmful for these patients. A recent review in the European Journal of Clinical Investigation that assessed all relevant studies published between 2000 and 2023 has concluded that there is no proven clinical benefit to this strategy for patients with heart failure.

Most relevant randomised trials were small, and a single large, randomised clinical trial was stopped early due to futility. Although moderate to strict salt restriction was linked with better quality of life and functional status, it did not affect mortality and hospitalisation rates among patients with heart failure.

“Doctors often resist making changes to age-old tenets that have no true scientific basis; however, when new good evidence surfaces, we should make an effort to embrace it,” said author Paolo Raggi MD, PhD, of the University of Alberta.

Source: Wiley

Malignant Melanoma Resists Treatment by Subverting Immune Cells

3D structure of a melanoma cell derived by ion abrasion scanning electron microscopy. Credit: Sriram Subramaniam/ National Cancer Institute

Malignant melanoma is one of the most aggressive types of cancer. Despite recent progress in effective therapies, the tumours of many patients are either resistant from the outset or become so during the course of treatment.

A University of Zurich (UZH) study published in Cell Reports Medicine has now identified a mechanism involving subverted immune cells that impedes the effectiveness of therapies. The result provides new ideas for treatments to suppress the development of resistance.

Comparing resistant and non-resistant tumour cells

For the study, the team utilised an innovative fine-needle biopsy to sample tumour cells before and during therapy. This allowed the researchers to analyse each cell individually. The patients providing the samples were undergoing targeted cancer therapy for malignant melanoma, which inhibits signalling pathways for tumour formation.

“It was important that some of the tumours responded to the therapy, while others showed resistance,” says study leader Lukas Sommer, professor of stem cell biology at the Institute of Anatomy at UZH. This allowed the team to compare the metabolism and environment of resistant and non-resistant tumour cells and look for significant differences.

Interaction between tumour factor and immune cells

One of the most relevant findings concerned the POSTN gene: it codes for a secreted factor that plays an important role in resistant tumours. In fact, the tumours of patients with rapidly progressing disease despite treatment showed increased POSTN levels. In addition, the microenvironment of these tumours contained a larger number of a certain type of macrophage – a subtype of immune cell that promotes the development of cancer.

Through a series of further experiments – both with human cancer cells and with mice – the research team was able to show how the interaction of increased POSTN levels and this type of macrophage triggers resistance: the POSTN factor binds to receptors on the surface of the macrophages and polarises them to protect melanoma cells from cell death. “This is why the targeted therapy no longer works,” says Sommer.

No resistance without cancer-promoting macrophages

The team considers this mechanism a promising starting point. “The study highlights the potential of targeting specific types of macrophages within the tumour microenvironment to overcome resistance,” says Sommer. “In combination with already known therapies, this could significantly improve the success of treatment for patients with malignant melanoma.”

Source: University of Zurich

New Guidance Available for Peanut Desensitisation Therapy

Photo by Corleto on Unsplash

Based on focus groups with children and young people with peanut allergy, experts have published guidance for clinicians working in the UK’s National Health Service (NHS) to help them safely and equitably implement Palforzia® peanut oral immunotherapy. Their recommendations are published in Clinical & Experimental Allergy.

In 2022, the National Institute for Health and Care Excellence in the UK recommended the use of Palforzia® – which has defatted peanut powder as its active ingredient – for desensitising children and young people with peanut allergy in the NHS.

The new consensus guidance will inform and support healthcare professionals as they implement Palforzia® for desensitisation and as they gradually increase peanut dosing in patients.

“It is great we can now offer an actual treatment for peanut allergy, rather than just recommend avoidance and educate patients on how to recognise and manage reactions, but the challenge in our current NHS is how we can provide this to eligible patients equitably, regardless of where they live and their backgrounds,” said corresponding author Tom Marrs, PhD, of Guy’s and St Thomas’ NHS Foundation Trust. “This guidance outlines what NHS services need to be able to offer this treatment at scale and to advocate for patients so that we can develop best-practice models.”

Source: Wiley

Innovative Cuffless Blood Pressure Device Improves Hypertension Management

A new study led by an investigator from Brigham and Women’s Hospital evaluated a cuffless monitor that uses optical sensors to record blood pressure continually and efficiently, without disruption to the patient. The study, published in Frontiers in Medicine, highlights promising advancements in hypertension diagnosis, risk assessment and management that may be enabled by use of cuffless devices.

“The successful management of hypertension depends on patients being able to take blood pressure measurements easily and reliably outside of the traditional doctor’s office setting,” said corresponding author Naomi Fisher, MD, of the Division of Endocrinology, Diabetes and Hypertension at Brigham and Women’s Hospital.  “Cuffless devices have the potential to revolutionise hypertension management.  They provide many more readings than traditional devices, during both the day and night, which can help confirm the diagnosis of hypertension and guide medication titration.” 

Medical guidelines increasingly recommend the incorporation of at-home blood pressure monitoring into hypertension diagnosis and management. This is because isolated blood pressure readings taken at a clinician’s office may be inaccurate: for some, blood pressure tends to rise in medical settings (“white coat hypertension”) while others have normal blood pressure during examination despite hypertensive readings at home (“masked hypertension”).  

Time-in-target-range (TTR) describes how often a patient’s blood pressure is in the normal range, and it is emerging as a promising metric of cardiovascular risk. But TTR requires more frequent blood pressure readings than patients can feasibly obtain with traditional blood pressure cuffs, which can be inconvenient, burdensome and sometimes uncomfortable.
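As an illustration of the metric (a toy sketch with made-up readings and an assumed 90–130/60–80 mmHg target range, not the study's algorithm), TTR is simply the fraction of readings that fall inside the target range:

```python
def time_in_target_range(readings, sys_range=(90, 130), dia_range=(60, 80)):
    """Fraction of (systolic, diastolic) readings inside the target range."""
    in_range = [
        sys_range[0] <= s <= sys_range[1] and dia_range[0] <= d <= dia_range[1]
        for s, d in readings
    ]
    return sum(in_range) / len(in_range)

# Hypothetical readings over part of a day: 3 of 5 are in range
readings = [(118, 76), (142, 88), (125, 79), (133, 84), (121, 74)]
print(f"TTR: {time_in_target_range(readings):.0%}")  # TTR: 60%
```

The denominator is why measurement frequency matters: with only a handful of cuff readings per day the estimate is noisy, whereas the hundreds of readings a continual monitor collects make the fraction far more stable.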

Fisher, who designed and led the study, collaborated with co-authors from Aktiia SA, a Swiss biotechnology company, to analyse over 2.2 million blood pressure readings from 5189 subjects in Europe and the U.K. who wore a cuffless wrist monitor manufactured by Aktiia. On average, the Aktiia device collected 29 readings per day, a substantial increase from the number of blood pressure readings patients typically take with home devices (guidelines recommend four per day, which is more than most patients measure). Over a 15-day period, the researchers obtained an average of 434 readings from each patient.  

By calculating TTR over a 15-day period, the researchers were able to risk stratify participants by percentage of readings in target range and compare these classifications to those generated via traditional measurement patterns, using either 24-hour or week-long daytime monitoring schedules. They found that the traditional methods misclassified 26 and 45 percent of subjects, respectively, compared to the reference TTR. They determined that continual monitoring for seven days is required to obtain 90 percent or greater accuracy in hypertension risk classification, a frequency of measurement that may only be possible with cuffless monitors.  

Though the cuffless device studied here has not been approved by the US Food and Drug Administration, it has been validated in multiple studies and is available for over-the-counter purchase in Europe and the UK. Work to evaluate and set standards for such devices in the US is ongoing.

“For the first time, by using a cuffless device, we can collect continual out-of-office blood pressure readings and use these data to calculate a new metric, time-in-target-range, which shows great promise as a predictor of risk,” Fisher said. “The use of cuffless devices could create a shift in the paradigm of blood pressure monitoring and hypertension management.” 

Source: Brigham and Women’s Hospital

Using AI, Scientists Discover High-risk Form of Endometrial Cancer

Dr Ali Bashashati observes an endometrial cancer sample on a microscope slide. Credit: University of British Columbia

A discovery by researchers at the University of British Columbia promises to improve care for patients with endometrial cancer, the most common gynaecologic malignancy.  Using artificial intelligence (AI) to spot patterns across thousands of cancer cell images, the researchers have pinpointed a distinct subset of more stubborn endometrial cancer that would otherwise go unrecognised by traditional pathology and molecular diagnostics.

The findings, published in Nature Communications, will help doctors identify patients with high-risk disease who could benefit from more comprehensive treatment.

“Endometrial cancer is a diverse disease, with some patients much more likely to see their cancer return than others,” said Dr Jessica McAlpine, professor at UBC. “It’s so important that patients with high-risk disease are identified so we can intervene and hopefully prevent recurrence. This AI-based approach will help ensure no patient misses an opportunity for potentially lifesaving interventions.”

AI-powered precision medicine

The discovery builds on work by Dr McAlpine and colleagues in the Gynaecologic Cancer Initiative, who in 2013 helped show that endometrial cancer can be classified into four subtypes based on the molecular characteristics of cancerous cells, with each posing a different level of risk to patients.

Dr McAlpine and team then went on to develop an innovative molecular diagnostic tool, called ProMiSE, that can accurately discern between the subtypes. The tool is now used across parts of Canada and internationally to guide treatment decisions.

Yet, challenges remain. The most prevalent molecular subtype, encompassing approximately 50% of all cases, is largely a catch-all category for endometrial cancers lacking discernible molecular features.

“There are patients in this very large category who have extremely good outcomes, and others whose cancer outcomes are highly unfavourable. But until now, we have lacked the tools to identify those at-risk so that we can offer them appropriate treatment,” said Dr McAlpine.

Dr McAlpine turned to long-time collaborator and machine learning expert Dr Ali Bashashati, an assistant professor of biomedical engineering and pathology and laboratory medicine at UBC, to try to further segment the category using advanced AI methods.

Dr Bashashati and his team developed a deep learning AI model that analyses images of tissue samples collected from patients. The AI was trained to differentiate between different subtypes, and after analysing over 2300 cancer tissue images, pinpointed the new subgroup that exhibited markedly inferior survival rates.

“The power of AI is that it can objectively look at large sets of images and identify patterns that elude human pathologists,” said Dr Bashashati. “It’s finding the needle in the haystack. It tells us this group of cancers with these characteristics are the worst offenders and represent a higher risk for patients.”

Bringing the discovery to patients

The team is now exploring how the AI tool could be integrated into clinical practice alongside traditional molecular and pathology diagnostics.

“The two work hand-in-hand, with AI providing an additional layer on top of the testing we’re already doing,” said Dr McAlpine.

One benefit of the AI-based approach is that it’s cost-efficient and easy to deploy across geographies. The AI analyses images that are routinely gathered by pathologists and healthcare providers, even at smaller hospital sites in rural and remote communities, and shared when seeking second opinions on a diagnosis.

The combined use of molecular and AI-based analysis could allow many patients to remain in their home communities for less intensive surgery, while ensuring those who need treatment at a larger cancer centre can do so.  

“What is really compelling to us is the opportunity for greater equity and access,” said Dr Bashashati. “The AI doesn’t care if you’re in a large urban centre or rural community, it would just be available, so our hope is that this could really transform how we diagnose and treat endometrial cancer for patients everywhere.”

Source: University of British Columbia

Datacentres Form Part of Healthcare Critical Systems – Carrying the Load and so Much More

Photo by Christina Morillo

By Ben Selier, Vice President: Secure Power, Anglophone Africa at Schneider Electric

The adage “knowledge is king” couldn’t be more applicable when it comes to the collection and utilisation of data. And at the heart of this knowledge and resultant information lies the datacentre. Businesses and users count on datacentres, especially in critical services such as healthcare.

Many hospitals today rely heavily on electronic health records (EHR), and this information resides and is backed up in on-premises datacentres or in the cloud. Datacentres are therefore a major contributor to effective and modernised healthcare.

There are several considerations when designing datacentres for healthcare. For one, hospitals operate within stringent legislation when it comes to the protection of patient information.  The National Health Act (No. 61 of 2003), for example, stipulates that information must not be given to others unless the patient consents or the healthcare practitioner can justify the disclosure.

Datacentres form part of critical systems

To add an extra layer of complexity, datacentres in South Africa should feature built-in continuous uptime and energy backup due to the country’s unstable power supply. Hospitals must therefore be designed to be autonomous from the grid, especially when they provide emergency and critical care.

Typically, datacentres are classified in tiers, with the Uptime Institute citing that a Tier-4 datacentre provides 99.995% availability, annual downtime of 0.4 hours, full redundancy, and power outage protection of 96 hours.
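The quoted downtime figure follows directly from the availability percentage; a quick sketch of the arithmetic:

```python
HOURS_PER_YEAR = 24 * 365  # 8760 hours

def annual_downtime_hours(availability_pct):
    """Maximum annual downtime implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# Tier-4 availability of 99.995% allows roughly 0.4 hours of downtime a year
print(f"Tier 4 (99.995%): {annual_downtime_hours(99.995):.1f} h/yr")
```

For a hospital's truly critical systems, even that fraction of an hour is treated as unacceptable, which is the point the next paragraph makes.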

In healthcare, where human lives are at stake, downtime is simply not an option. And whilst the availability requirements of certain healthcare systems are comparable to a typical Tier-3 or Tier-4 scenario, critical systems in hospitals carry a higher design consideration and must run 24/7 with immediate availability.

In healthcare, the critical infrastructure of a hospital enjoys priority. What this means is that the datacentre is there to protect the IT system, which in turn ensures the smooth running of these critical systems and equipment. There is therefore a delicate balance between the critical systems and infrastructure and the datacentre: one can’t exist without the other.

Design considerations

To realise the above, hospitals must feature a strong mix of alternative energy resources such as backup generators, uninterruptible power supply (UPS) systems and renewables such as rooftop solar.

Additionally, as with most organisations, storage volume and type, as well as cloud systems, will vary from hospital to hospital. To this end, datacentre design for hospitals is anything but cookie cutter; teams need to work closely with the hospital whilst meeting industry standards for healthcare.

When designing system infrastructure for healthcare facilities, the following should also be considered:

  • Software like Building Management Systems (BMS) is not just about building efficiency; it also monitors and adjusts indoor conditions such as temperature, humidity and air quality. The BMS contributes to health and safety and critical operations in hospitals whilst also enabling patient comfort.
  • Maintenance – both building and systems maintenance transcend operational necessity and become a matter of life or death.
  • As mentioned, generators are essential for delivering continuous power, which means enough fuel must be stored to run them. Here, hospitals must store fuel safely and in compliance with stringent regulations. In South Africa, proactively managing refuelling timelines is also critical: the response times for refuelling these fuel bunkers can be severely hindered by issues such as traffic congestion caused by outages and traffic lights not working.

Selecting the right equipment for hospitals is therefore a delicate balance between technological advancement and safety. For instance, while lithium batteries offer many benefits, when used in hospitals it is paramount that they are stored in a dry, cool and safe location.

Here, implementing an extinguishing system is a must to alleviate any potential damage from fire or explosions. That said, lithium batteries are generally considered safe to use, but it’s important to be cognisant of their potential safety hazards.

Ultimately, hospitals carry the added weight of human lives, which means the design of critical systems requires meticulous planning and execution.

How does Oxygen Depletion Disrupt Memory Formation in the Brain?

Scientists identify a positive molecular feedback loop which could explain stroke-induced memory loss.

Ischaemic and haemorrhagic stroke. Credit: Scientific Animations CC4.0

In learning, neurons communicate with each other, and the connections between them get stronger with repetition. This is known as long-term potentiation, or LTP.

Another type of LTP occurs when the brain is temporarily deprived of oxygen – anoxia-induced long-term potentiation, or aLTP. aLTP blocks the normal learning process, thereby impairing learning and memory. Some scientists therefore think that aLTP might be involved in the memory problems seen in conditions like stroke.

Researchers at the Okinawa Institute of Science and Technology (OIST) and their collaborators have studied the aLTP process in detail. They found that maintaining aLTP requires the amino acid glutamate, which triggers nitric oxide (NO) production in both neurons and brain blood vessels. This process forms a positive glutamate-NO-glutamate feedback loop. Their study, published in iScience, indicates that the continuous presence of aLTP could potentially hinder the brain’s memory strengthening processes and explain the memory loss observed in certain patients after experiencing a stroke.  

The brain’s response to low oxygen 

When there is a lack of oxygen in the brain, the neurotransmitter glutamate is released from neurons in large amounts. This increased glutamate causes the production of NO. NO produced in neurons and brain blood vessels boosts glutamate release from neurons during aLTP. This glutamate-NO-glutamate loop continues even after the brain gets enough oxygen. 

“We wanted to know how oxygen depletion affects the brain and how these changes occur,” stated Dr Han-Ying Wang, a researcher in the former Cellular and Molecular Synaptic Function Unit at OIST and lead author of the study. “It’s been known that nitric oxide is involved in releasing glutamate in the brain when there is a shortage of oxygen, but the mechanism was unclear.”

During a stroke, when the brain is deprived of oxygen, amnesia – the loss of recent memories – can be one of the symptoms. Investigating the effects of oxygen deficiency on the brain is important because of the potential medicinal benefits. “If we can work out what’s going wrong in those neurons when they have no oxygen, it may point in the direction of how to treat stroke patients,” Dr Patrick Stoney, a scientist in OIST’s Sensory and Behavioral Neuroscience Unit, explained. 

Brain tissues from mice were placed in a saline solution, mimicking the natural environment in the living brain. Normally, this solution is oxygenated to meet the high oxygen demands of brain tissue. However, replacing the oxygen with nitrogen allowed the researchers to deprive the cells of oxygen for precise lengths of time.  

The tissues were then examined under a microscope and electrodes were placed on them to record electrical activity of the individual cells. The cells were stimulated in a way that mimics how they would be stimulated in living mice. 

Stopping memory and learning activity 

The aLTP process is activated when the brain is temporarily deprived of oxygen and glutamate levels increase. If aLTP is maintained for an extended period, this hijacks the normal functioning of the memory strengthening process (LTP), resulting in memory loss. Blocking nitric oxide (NO) synthesis or the molecular pathways that boost glutamate release eventually stops aLTP. Credit: Wang et al., 2024 

The scientists found that maintaining aLTP requires NO production in both neurons and in blood vessels in the brain. Collaborating scientists from OIST’s Optical Neuroimaging Unit showed that in addition to neurons and blood vessels, aLTP requires the activity of astrocytes, another type of brain cell. Astrocytes connect and support communication between neurons and blood vessels. 

“Long-term maintenance of aLTP requires continuous synthesis of nitric oxide. NO synthesis is self-sustaining, supported by the NO-glutamate loop, but blocking the molecular steps for NO synthesis, or those that trigger glutamate release, eventually disrupts the loop and stops aLTP,” Prof. Tomoyuki Takahashi, leader of the former Cellular and Molecular Synaptic Function Unit at OIST, explained.

Notably, the cellular processes that support aLTP are shared by those involved in memory strengthening and learning (LTP). When aLTP is present, it hijacks molecular activities required for LTP and removing aLTP can rescue these memory enhancing mechanisms. This suggests that long-lasting aLTP may obstruct memory formation, possibly explaining why some patients have memory loss after a short stroke. 

Prof Takahashi emphasised that the formation of a positive feedback loop between glutamate and NO when the brain is temporarily deprived of oxygen is an important finding. It explains long-lasting aLTP and may offer a solution for memory loss caused by a lack of oxygen.

Source: Okinawa Institute of Science and Technology

Life Healthcare Concludes Agreement to Sub-License “RM2”

Photo by Khwanchai Phanthong on Pexels

Life Healthcare, through its wholly owned subsidiary Life Molecular Imaging Limited (LMI), has entered into a contract with Lantheus Holdings Inc. (“Lantheus”) to sub-license one of LMI’s early-stage novel radiotherapeutic and radiodiagnostic products (RM2).

“As part of Life Healthcare’s strategy to monetise LMI’s product development portfolio, we are delighted to have found a partner for our RM2 product,” said Pete Wharton-Hood, Life Healthcare CEO. “Through this agreement, LMI has secured a partnership for the development of this early-stage diagnostic and therapeutic product through to commercialisation. This exciting opportunity unlocks some of the value in LMI,” continued Wharton-Hood.

Under the agreement, Lantheus will make an upfront payment of $35 million for the sub-licensing rights to RM2. In addition, LMI may receive further payments on the achievement of development and regulatory milestones, as well as royalties once the product is sold commercially.

The sub-licensing agreement secures Lantheus’ rights to develop the product and complete the early development in collaboration with LMI. “LMI is uniquely positioned to assist in this area,” said Wharton-Hood, “and we are pleased by this development as it showcases and harnesses the specialised, dedicated and focused talent within LMI.” “With Lantheus’ experience in developing and providing access to radiotheranostics in cancer, we are confident in our decision to hand them the reins for this promising theranostic pair and are honored to work with them toward improving the future of people with prostate and breast cancer,” said Ludger Dinkelborg, CEO, Life Molecular Imaging.

Lantheus Holdings, Inc. is listed on NASDAQ in the United States and is a leading radiopharmaceutical-focused company committed to delivering life-changing science that enables clinicians to Find, Fight and Follow disease to deliver better patient outcomes. Lantheus has been providing radiopharmaceutical solutions for more than 65 years and has identified value and commercial opportunity in continuing the development of RM2.

LMI is a wholly owned subsidiary of Life Healthcare and is registered in the United Kingdom. The company has a product, Neuraceq®, which has been approved in many countries and is used to detect amyloid plaque in the brain through a PET-CT scan, and has multiple products in early clinical development. LMI also provides clinical research services for pharmaceutical companies.

Life Healthcare has retained R1bn to provide for the funding requirements of LMI as part of the Alliance Medical Group disposal, which was concluded earlier this year. “This transaction will reduce the quantum required, and Life Healthcare will consider distributing a portion of the surplus to shareholders as part of the full year dividend,” stated Wharton-Hood.

About RM2

RM2 is a 9-amino-acid peptide that binds to the gastrin-releasing peptide receptor (GRPr) and can be used to target multiple malignant tumours, including prostate, breast, lung, glioma, and ovarian tumours.
