How a trial that mirrors intensive care practices is pinpointing life-saving coronavirus treatments

When the H1N1 swine flu outbreak hit in 2009, it caused typical flu symptoms, but severe cases led to pneumonia and lung failure.

Doctors were unsure which treatments would work, but steroids, which dampen down inflammation, seemed like a good bet. Clinicians in intensive care units decided to set up a trial to see if steroids helped patients severely ill with swine flu.

But this proved impossible. ‘The pandemic came and went and we missed an opportunity to test whether even a simple intervention like steroids worked or not,’ said Professor Alistair Nichol, ICU doctor in St Vincent’s University Hospital in Dublin, Ireland, who worked in an Australian hospital during the 2009 pandemic.

‘There was a peak in intensive care unit (ICU) admissions and some clinicians got frustrated in trying to set up a drug trial (that never happened),’ recalled Dr Lennie Derde, an intensive care doctor at UMC Utrecht in the Netherlands.

That pandemic killed tens of thousands of people, but ICU clinicians expected worse in the future. ‘This one was moderately bad,’ recalled ICU physician Professor Steve Webb at the Royal Perth Hospital in Australia, ‘but if we had a really bad one, we were just woefully unfit to be able to do drug trials.’ So ICU clinicians came together to be better prepared. ‘Nobody doubted there would be a next time,’ said Prof. Nichol.

Community-acquired pneumonia

Typically, a viral pandemic begins with a spike in unusual pneumonia cases in ICUs.

In March 2020, patients flooded ICUs with SARS-CoV-2 infections. This time, some ICU clinicians were ready from the get-go. A number of coronavirus patients were quickly recruited into an existing clinical trial called REMAP-CAP, set up in the wake of the 2009 swine flu outbreak, to test which drugs worked on their pneumonia.

It was ‘set up in peacetime,’ explained Prof. Nichol, ‘so as to be ready for wartime if a pandemic was to arrive.’

‘We were able to flick the switch to include pandemic patients,’ said Dr Derde. ‘That is why we were able to include our first patient on the 9th of March.’

The trial tests various drugs – rather than the usual one or two – for community-acquired pneumonia, with all the regulatory and ethical approvals in place. It was set up by a project called PREPARE for patients suffering from severe community-acquired pneumonia caused by bacteria or viruses, which kills about one in five of the people admitted to ICU with it, although the figure can be much higher.

Results on steroids from the trial contributed to the World Health Organisation recommending them for COVID-19 patients in September 2020. By then, REMAP-CAP, which now involves about 300 hospitals, was testing lots of interventions (it currently has 31).

Doing this sets it apart from a traditional randomised controlled trial, which is the gold standard to prove whether a drug is effective or not. Patients are usually given either one or two drugs or a placebo, and then outcomes are compared. REMAP-CAP is different – and more flexible – in that it tests many types of treatments at the same time. One patient can receive multiple interventions, which is not unusual for someone severely ill. A regimen of treatments is ‘much more in line with your typical clinical treatment,’ said Dr Derde, who is the European coordinator of the trial.

The clinical trial design means that ‘we can analyse the interactions between drugs, which is a huge advantage in a pandemic,’ she said.

‘A single Covid patient can be randomised for up to eight separate aspects of their treatment,’ said Prof. Webb. ‘We’re obviously learning much more quickly, because we’re testing so many different things simultaneously.’ A patient is enrolled in one treatment ‘domain’, and then is randomly assigned to one of a handful of interventions in that domain.

The 12 different ‘domains’ of treatment include anti-coagulation treatments, anti-inflammatory drugs, and immune modulating drugs. ‘One patient might be involved in six or seven different domains of the trial,’ said Prof. Nichol. In fact, this better reflects how a patient with COVID-19 is treated in hospital, according to the doctors.
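
To make the domain idea concrete, here is a minimal Python sketch of this kind of multi-domain randomisation. The domain names and intervention lists are illustrative placeholders loosely drawn from the article, not the trial's actual protocol, which also uses eligibility rules and response-adaptive weighting.

import random

# Illustrative sketch only: hypothetical domains and interventions.
DOMAINS = {
    "corticosteroid":    ["no steroid", "fixed-dose steroid", "shock-dependent steroid"],
    "immune modulation": ["control", "tocilizumab", "sarilumab"],
    "anticoagulation":   ["usual care", "therapeutic-dose anticoagulation"],
    "antiviral":         ["no antiviral", "antiviral A", "antiviral B"],
}

def randomise_patient(eligible_domains, rng=random):
    """Assign one intervention per domain the patient is eligible for."""
    return {d: rng.choice(DOMAINS[d]) for d in eligible_domains if d in DOMAINS}

# A single patient can be randomised across several aspects of care at once:
print(randomise_patient(["corticosteroid", "immune modulation", "anticoagulation"]))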

Monoclonal

A big result – presented in a pre-print study, so not yet peer reviewed – is that two monoclonal antibodies (tocilizumab and sarilumab) reduced deaths from COVID-19 in severely ill patients and shortened their time in ICU. Those benefits seem to come on top of those from steroids, said Prof. Webb.

Tocilizumab and sarilumab block a chemical signal called interleukin-6 (IL-6), which stokes inflammation. In severe COVID-19, the inflammatory response of a patient’s immune system begins to damage the body’s own tissue, a kind of friendly fire, while attacking the virus. There was therefore good reason to think the drugs might be effective.

Indeed, once a new disease turns up and its mechanisms are reported, physicians will consider an array of medications, but they do not know which ones work. The early response to COVID-19, explained Prof. Webb, ‘was quite a scattergun approach of clinicians using repurposed medicines.’

Sometimes, as in the case of the malaria drug hydroxychloroquine, a small study and lots of hype persuade some that an old drug is worth a try. But it takes a large clinical trial, such as REMAP-CAP, which involves 6,000 patients, to show whether that drug is safe and effective. Physicians can now prescribe the IL-6 blockers and steroids with greater confidence, while hydroxychloroquine is left on the shelf.
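
As a rough illustration of why trials of this size are needed, the standard sample-size formula for comparing two proportions gives numbers in the thousands even for a sizeable effect. The mortality figures below are hypothetical, chosen only to show the scale; this is textbook frequentist sizing, not REMAP-CAP's own (Bayesian) design.

from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.90):
    """Patients needed per arm to detect a drop in event rate from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Assumed 30% mortality on usual care vs 25% on a new drug, 5% significance, 90% power.
print(n_per_arm(0.30, 0.25))   # roughly 1,700 patients per arm, far beyond a small study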

Indeed, finding that a drug does not work, or causes a negative effect, can be equally beneficial. ‘The anti-virals and the anti-coagulants have not just been ineffective, but may result in worse outcomes for patients,’ said Prof. Webb, about recent preliminary findings. ‘It is just as important to identify treatments that are harmful.’

It is difficult for a physician to know whether a drug is helping or hindering their patients. ‘For an individual doctor at a patient’s bedside, it can be hard to tease out if giving someone something helped or if they would have gotten better anyway,’ explained Prof. Nichol. Severely ill patients usually receive multiple interventions, which makes knowing which treatment was effective tougher.

Combinations

The REMAP-CAP trial was created to be multifactorial, meaning that the effects of many different treatment combinations on patients can show up in the data. Bayesian statistics, which continually updates the probability that a treatment works as evidence accumulates, makes sense of all the data to answer crucial questions: when a treatment has reached a positive result, when it has been shown not to work, and whether there are positive interactions between drugs. Such a strategy with multiple treatments had been used for cancer, but never in a global ICU study.
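
A minimal sketch of that Bayesian monitoring idea, assuming a simple Beta-Binomial model and made-up interim numbers; the trial's real statistical model is far richer, covering many domains and their interactions at once.

import numpy as np

rng = np.random.default_rng(1)

def prob_treatment_better(surv_t, n_t, surv_c, n_c, draws=100_000):
    """P(survival rate on treatment > control) under uniform Beta(1, 1) priors."""
    p_t = rng.beta(1 + surv_t, 1 + n_t - surv_t, draws)
    p_c = rng.beta(1 + surv_c, 1 + n_c - surv_c, draws)
    return float((p_t > p_c).mean())

# Hypothetical interim data: 160/200 survivors on treatment vs 140/200 on control.
# A domain could stop for efficacy once this probability crosses a pre-set threshold
# (say 0.99), or for futility if it stays low as data accumulate.
print(f"P(treatment better) = {prob_treatment_better(160, 200, 140, 200):.3f}")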

At the heart of these trials is a simple fact: it is impossible to guess which treatments will work. Small studies can be biased and inconclusive, which is why large randomised trials are needed.

‘Years ago, I used to think that there were things that definitely would work, and then they didn’t,’ said Prof. Webb. ‘I’ve been doing clinical trials for so long now that the only thing I am confident of is that, when you do trials, you will get surprises.’

Dr Derde praises the European Union’s actions in 2014 ‘because they funded the European part of what is now REMAP-CAP, with the vision that they needed to set up an infrastructure for pandemic research.’ Other funders around the world, from Australia to New Zealand to Canada and the US, subsequently joined the effort.

REMAP-CAP is an international effort now, with 300 hospitals taking part in about 20 countries, testing out 31 different interventions. The trial will continue to prove which treatments work in severely ill COVID-19 patients. As the virus surges in many countries and until the vaccine rollout can control the pandemic, proven drugs are needed to save lives.




Scientists focus on cone targets to enhance temperature of electron beams

Intense short-pulse laser-driven production of bright high-energy sources, such as X-rays, neutrons and protons, has been shown to be an invaluable tool in the study of high energy density science.

In an effort to address some of the most challenging applications, such as X-ray radiography of high areal density objects for industrial and national security applications, both the yield and energy of the sources must be increased beyond what has currently been achieved by state-of-the-art high-intensity laser systems.

This image shows the intensification of the laser in simulations and the electrons being accelerated.

A team of scientists from Lawrence Livermore National Laboratory (LLNL), the University of Texas at Austin and General Atomics took on this challenge. Specifically, the team conducted experimental measurements of hot electron production using a short-pulse, high-contrast laser on cone and planar targets.

The cone geometry is a Compound Parabolic Concentrator (CPC) designed to focus the laser to the tip. The cone geometry shows higher hot electron temperatures than planar foils. Simulations identified that the primary source of this temperature enhancement is the intensity increase caused by the CPC.
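
A common back-of-the-envelope way to see why higher intensity means hotter electrons is the ponderomotive (Wilks) scaling. The snippet below uses that scaling with assumed numbers for the laser intensity and the CPC's intensification factor; it is not the modelling from the paper itself.

from math import sqrt

def hot_electron_temp_mev(intensity_w_cm2, wavelength_um):
    """Ponderomotive (Wilks) scaling: T_hot ~ 0.511 * (sqrt(1 + I*lambda^2 / 1.37e18) - 1) MeV."""
    return 0.511 * (sqrt(1.0 + intensity_w_cm2 * wavelength_um**2 / 1.37e18) - 1.0)

# Assumed values for illustration: a 1.057-micron laser at 1e21 W/cm^2, and a CPC that
# boosts the on-target intensity threefold.
flat = hot_electron_temp_mev(1e21, 1.057)
cpc = hot_electron_temp_mev(3e21, 1.057)
print(f"planar target: ~{flat:.0f} MeV; with CPC intensification: ~{cpc:.0f} MeV")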

Led by LLNL postdoctoral appointee Dean Rusby, the research findings are featured in Physical Review E.

“We were able to enhance the temperature of the electron beam from our high-intensity laser interactions by shooting into a focusing cone target,” Rusby said. “It shows that we understand how the compound parabolic concentrator works under these laser conditions.”

Rusby said increasing the coupling into high-energy electrons in these interactions is crucial for developing applications from laser-plasma interactions.

“It is very encouraging to see significant enhancements are possible using the CPC target platform on a petawatt 100 fs class laser system, which is already capable of near diffraction limited operation,” said Andrew MacPhee, co-author of the paper. “Non-imaging optics applied to laser target interactions are redefining the parameter space accessible to the community.”

The team used the Texas Petawatt laser system at the University of Texas at Austin during a six-week period; the laser’s short pulse and high contrast allowed the experiment to work. The target is a CPC specifically designed to focus more laser energy toward the tip and increase the intensity.

“The increase in electron temperature strongly agreed with the increase we would expect when using the CPC,” Rusby said.

The Department of Energy’s Office of Science supported the LaserNetUS initiative at the Texas Petawatt, and LLNL’s Laboratory Directed Research and Development program funded the team and the crucially important target development from General Atomics.

The team has been awarded additional time through LaserNetUS at the Texas Petawatt to continue research on CPC targets. This time, the team will concentrate on the acceleration of protons from the rear surface and the enhancement that the CPCs provide.

This image shows the experimental setup displaying the target, laser and electron spectrometer. A 3D drawing of the CPC, tantalum substrate and the incoming laser also is shown.

Andrew Mackinnon, a co-author of the paper and a principal investigator for a Strategic Initiative Laboratory Directed Research & Development project, is using these CPC targets for that project.

“These experiments showed that miniature plasma mirror targets do improve coupling of petawatt-class lasers to MeV (mega-electronvolt) electrons, which benefits potential applications such as laser-based MeV radiography,” he said.

 

In addition to Rusby, MacPhee and Mackinnon, co-authors include Paul King, Arthur Pak, N. Lemos, Shaun Kerr, Ginny Cochran, Anthony Link, Andreas Kemp, Scott Wilks, George Williams, Felicie Albert, Maurice Aufderheide, H. Chen and Craig Siders from LLNL; I. Pagano, A. Hannasch, H. Quevedo, M. Spinks and M. Donovan from the University of Texas at Austin; and M. J.-E. Manuel, Z. Gavin and A. Haid from General Atomics.

Source: LLNL





Disaster Response and Mitigation in an AI World

PNNL combines AI and cloud computing with a damage assessment tool to predict the path of wildfires and evaluate the impact of natural disasters

After the destructive California wildfires of 2019, the U.S. government put together a White House Executive forum to develop better ways of protecting the nation and key infrastructure, such as the power grid, from wildfires and other disasters. In 2020 alone, more than 10.3 million acres burned across the United States, roughly three times the 10-year average for 1990–2000. Between fire suppression and direct and indirect costs, wildfires in 2020 cost the United States upwards of $170 billion. Add in floods, hurricanes, and other natural disasters, and the toll of disasters on the livelihoods of Americans is astronomical.

Andre Coleman and his team of researchers at Pacific Northwest National Laboratory (PNNL) are part of the First Five Consortium, a group of government, industry, and academia experts committed to lessening the impact of natural disasters using technology. Coleman and team are expanding PNNL’s operational Rapid Analytics for Disaster Response (RADR) image analytics and modeling suite to mitigate damage to key energy infrastructure. Using a combination of image capturing technology (satellite, airborne, and drone images), artificial intelligence (AI), and cloud computing, Coleman and the team work to not only assess damage but predict it as well.

Image assessed by RADR from the Mammoth Fire south of Panguitch, UT. The bright red areas indicate active fire fronts while violet indicates smoldering areas. (Image: Pacific Northwest National Laboratory)

Accurately forecasting the movement of natural disasters—wildfires, floods, hurricanes, windstorms, tornados, and earthquakes—gives first responders a jump, allowing them to take measures to reduce damage, conduct advanced resource planning, and speed infrastructure restoration. For example, should a fire reach an electrical substation or other grid infrastructure, an entire community—homes, businesses, and schools—would experience a power outage that could take days to restore.

“This is an exciting and timely effort to apply artificial intelligence to reduce the impact of wildfires, protect energy infrastructure, and ultimately save lives,” said Pamela Isom, acting director of the U.S. Department of Energy (DOE) Artificial Intelligence and Technology Office. “The work has the potential to make a difference in what we expect will be a very challenging wildfire season. This has been a very productive collaboration among several partners, including our colleagues at the Department of Defense’s Joint Artificial Intelligence Center, Department of Homeland Security, and at PNNL.”

Since 2014, Coleman and team have been working with these technologies. The project originally started with the creation of a change-detection algorithm, which analyzes different types of satellite imagery and determines what changed in the landscape after a storm. Authorities use the tool to rapidly assess the physical damage impact of natural disasters, often before ground teams can get in. The first iteration of the tool was used during the 2016 hurricane season to evaluate hurricane damage and determine if energy infrastructure—electric grid, petroleum, and gas facilities—was damaged or at risk.
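
The article does not detail the algorithm, but the basic change-detection idea can be sketched in a few lines: difference two co-registered images of the same area and flag pixels whose change stands out from the scene-wide noise. Everything below, including the "damage" footprint, is made up for illustration and is not RADR's actual code.

import numpy as np

def change_map(before, after, threshold=2.0):
    """Flag pixels whose brightness change is large relative to the scene-wide spread."""
    diff = after.astype(float) - before.astype(float)
    z = (diff - diff.mean()) / (diff.std() + 1e-9)   # standardise the difference image
    return np.abs(z) > threshold

# Toy data: a 100x100 scene in which a 20x20 block darkens after the storm.
before = np.full((100, 100), 120.0)
after = before.copy()
after[40:60, 40:60] -= 50.0
print("changed pixels:", int(change_map(before, after).sum()))   # 400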

Overall, RADR analytical products bring value, but Coleman and the team recognize opportunities to expand the functionality of the tools and seek to improve RADR response time, damage assessment, visibility, prediction capability, and data accessibility.

To improve timeliness and ground-level assessments, the team incorporated new and different image sources. RADR can pull in images from a variety of satellites with different sensing capabilities, including domestic and international government satellites that are offered as open data as well as commercial satellites that are available through the International Disasters Charter. Having multiple sources of overhead imagery improves response time to just a few hours with the key limitation being the latency of overhead imagery, or the time between images being collected and being available for analysis. Once imagery is received, the RADR software can generate an analysis in just over 10 minutes.

To peer through wildfire smoke and cloud cover, the team added infrared imagery to RADR. The new capability provides a clearer view of the landscape that was previously unavailable, giving responders information such as damage to key infrastructure or a safe location to set up relief efforts that responders may not have otherwise been privy to.

Imagery assessed by RADR indicates where thermal hot spots are through cloud and smoke coverage. The imagery and assessment aids first responders in their fight against wildfires. (Image: Pacific Northwest National Laboratory)

The team is also integrating publicly available and crowdsourced images from social media. Often in a disaster, social media networks like Twitter, Flickr, and Instagram offer a wealth of real-time data as users post pictures of what is going on around them. By pairing overhead imagery with on-the-ground images, the team can provide a more complete assessment. Satellite images, for example, may show damage to a generation resource, power lines, or the electric grid; however, ground images may indicate otherwise. The tool takes all these images, removes the redundant ones, and stitches the rest together to provide a more accurate view of changing conditions.

As with any computational model, it’s only as good as the data. The added imagery sources provide additional data for RADR to interpret, improving accuracy. To predict possible outcomes of a wildfire, the team is combining the imagery analysis with weather, fuel, and forecast data. For example, wind, vegetation, and anything a fire can consume all factor into the size of a fire and the direction it takes. By marrying imagery with fuel data and wildfire models, the team hopes to be able to accurately predict the path a fire takes.
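
As a toy illustration of how fuel and wind feed a spread forecast, consider the generic cellular-automaton sketch below. It is not RADR's model; the grid, fuel values, wind direction and spread probabilities are all assumptions made purely for demonstration.

import numpy as np

rng = np.random.default_rng(0)
N = 50
fuel = rng.uniform(0.2, 1.0, size=(N, N))      # hypothetical fuel density per cell
wind = (0, 1)                                   # assumed wind blowing toward +x

burning = np.zeros((N, N), dtype=bool)
burning[N // 2, N // 2] = True                  # ignition point
burned = np.zeros_like(burning)

def step(burning, burned):
    """One time step: fire spreads to neighbours with probability set by fuel and wind."""
    newly_lit = np.zeros_like(burning)
    for y, x in zip(*np.where(burning)):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= ny < N and 0 <= nx < N):
                    continue
                if burned[ny, nx] or burning[ny, nx]:
                    continue
                bonus = 0.3 if (dy, dx) == wind else 0.0    # wind-aligned spread is likelier
                if rng.random() < fuel[ny, nx] * (0.3 + bonus):
                    newly_lit[ny, nx] = True
    return newly_lit, burned | burning

for _ in range(20):
    burning, burned = step(burning, burned)
print("cells burned after 20 steps:", int(burned.sum()))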

Of course, the assessments need to get in the right hands. Coordinating a response requires local, regional, and national resources, each in different locations but needing the data as quickly as possible in a format that can be readily accessed and interpreted, particularly in a data communication constrained environment. A cloud-based system provides an end-to-end pipeline for retrieving available imagery, processing the analytics, and disseminating data to be used directly in a user’s own software, through desktop web browsers, and/or via mobile applications. Added visual analytics produce images and datasets that can be easily discernable to a wide audience of responders.

Recent years have brought an increase in the frequency and severity of wildfires, floods, and other extreme weather events. Coleman and team hope that at the very least the added capabilities of RADR will give responders information that can be used to make informed decisions, reduce or plan for damage to key energy infrastructure, plan relief efforts, and save lives.

Source: PNNL





Study Looks More Closely at Mars’ Underground Water Signals

In 2018, scientists working with data from ESA’s (the European Space Agency’s) Mars Express orbiter announced a surprising discovery: Signals from a radar instrument reflected off the Red Planet’s south pole appeared to reveal a liquid subsurface lake. Several more such reflections have been announced since then.

The bright white region of this image shows the icy cap that covers Mars’ south pole, composed of frozen water and frozen carbon dioxide. ESA’s Mars Express imaged this area of Mars on Dec. 17, 2012, in infrared, green and blue light, using its High Resolution Stereo Camera. Credit: ESA/DLR/FU Berlin/Bill Dunford

In a new paper published in the journal Geophysical Research Letters, two scientists at NASA’s Jet Propulsion Laboratory in Southern California describe finding dozens of similar radar reflections around the south pole after analyzing a broader set of Mars Express data, but many are in areas that should be too cold for water to remain liquid.

“We’re not certain whether these signals are liquid water or not, but they appear to be much more widespread than what the original paper found,” said Jeffrey Plaut of JPL, co-principal investigator of the orbiter’s MARSIS (Mars Advanced Radar for Subsurface and Ionospheric Sounding) instrument, which was built jointly by the Italian Space Agency and JPL. “Either liquid water is common beneath Mars’ south pole or these signals are indicative of something else.”

Frozen Time Capsule

The radar signals originally interpreted as liquid water were found in a region of Mars known as the South Polar Layered Deposits, named for the alternating layers of water ice, dry ice (frozen carbon dioxide), and dust that have settled there over millions of years. These layers are believed to provide a record of how the tilt in Mars’ axis has shifted over time, just as changes in Earth’s tilt have created ice ages and warmer periods throughout our planet’s history. When Mars had a lower axial tilt, snowfall and layers of dust accumulated in the region and eventually formed the thick layered ice sheet found there today.

By beaming radio waves at the surface, scientists can peer below these icy layers, mapping them in detail. Radio waves lose energy when they pass through material in the subsurface; as they reflect back to the spacecraft, they usually have a weaker signal. But in some cases, signals returning from this region’s subsurface were brighter than those at the surface. Some scientists have interpreted these signals to imply the presence of liquid water, which strongly reflects radio waves.
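
A rough way to see why liquid water stands out to radar is the normal-incidence Fresnel reflection at a boundary between two materials. The dielectric constants below are typical textbook values assumed for illustration, not numbers from the study.

from math import sqrt

def power_reflectance(eps_upper, eps_lower):
    """Fraction of radar power reflected at a boundary between two media (normal incidence)."""
    r = (sqrt(eps_upper) - sqrt(eps_lower)) / (sqrt(eps_upper) + sqrt(eps_lower))
    return r ** 2

# Assumed real dielectric constants: water ice ~3.1, liquid water ~80, basaltic rock ~8.
print(f"ice over liquid water: {power_reflectance(3.1, 80):.2f}")   # ~0.45 (very bright)
print(f"ice over rock:         {power_reflectance(3.1, 8):.2f}")    # ~0.05 (much dimmer)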

Plaut and Aditya Khuller, a doctoral student at Arizona State University who worked on the paper while interning at JPL, aren’t sure what the signals indicate. The areas hypothesized to contain liquid water span about 6 to 12 miles (10 to 20 kilometers) in a relatively small region of the Martian south pole. Khuller and Plaut expanded the search for similar strong radio signals to 44,000 measurements spread across 15 years of MARSIS data over the entirety of the Martian south polar region.

Unexpected ‘Lakes’

The analysis revealed dozens of additional bright radar reflections over a far greater range of area and depth than ever before. In some places, they were less than a mile from the surface, where temperatures are estimated to be minus 81 degrees Fahrenheit (minus 63 degrees Celsius) – so cold that water would be frozen, even if it contained salty minerals known as perchlorates, which can lower the freezing point of water.

Khuller noted a 2019 paper in which researchers calculated the heat needed to melt subsurface ice in this region, finding that only recent volcanism under the surface could explain the potential presence of liquid water under the south pole.

“They found that it would take double the estimated Martian geothermal heat flow to keep this water liquid,” Khuller said. “One possible way to get this amount of heat is through volcanism. However, we haven’t really seen any strong evidence for recent volcanism at the south pole, so it seems unlikely that volcanic activity would allow subsurface liquid water to be present throughout this region.”

What explains the bright reflections if they’re not liquid water? The authors can’t say for sure. But their paper does offer scientists a detailed map of the region that contains clues to the climate history of Mars, including the role of water in its various forms.

“Our mapping gets us a few steps closer to understanding both the extent and the cause of these puzzling radar reflections,” said Plaut.

Source: JPL




NASA’s Webb Will Use Quasars to Unlock the Secrets of the Early Universe

Quasars are very bright, distant and active supermassive black holes that are millions to billions of times the mass of the Sun. Typically located at the centers of galaxies, they feed on infalling matter and unleash fantastic torrents of radiation. Among the brightest objects in the universe, a quasar’s light outshines that of all the stars in its host galaxy combined, and its jets and winds shape the galaxy in which it resides.

This is an artist’s concept of a galaxy with a brilliant quasar at its center. A quasar is a very bright, distant and active supermassive black hole that is millions to billions of times the mass of the Sun. Among the brightest objects in the universe, a quasar’s light outshines that of all the stars in its host galaxy combined. Quasars feed on infalling matter and unleash torrents of winds and radiation, shaping the galaxies in which they reside. Using the unique capabilities of Webb, scientists will study six of the most distant and luminous quasars in the universe.
Credits: NASA, ESA and J. Olmsted (STScI)

Shortly after its launch later this year, a team of scientists will train NASA’s James Webb Space Telescope on six of the most distant and luminous quasars. They will study the properties of these quasars and their host galaxies, and how they are interconnected during the first stages of galaxy evolution in the very early universe. The team will also use the quasars to examine the gas in the space between galaxies, particularly during the period of cosmic reionization, which ended when the universe was very young. They will accomplish this using Webb’s extreme sensitivity to low levels of light and its superb angular resolution.

Webb: Visiting the Young Universe

As Webb peers deep into the universe, it will actually look back in time. Light from these distant quasars began its journey to Webb when the universe was very young and took billions of years to arrive. We will see things as they were long ago, not as they are today.

“All these quasars we are studying existed very early, when the universe was less than 800 million years old, or less than 6 percent of its current age. So these observations give us the opportunity to study galaxy evolution and supermassive black hole formation and evolution at these very early times,” explained team member Santiago Arribas, a research professor at the Department of Astrophysics of the Center for Astrobiology in Madrid, Spain. Arribas is also a member of Webb’s Near-Infrared Spectrograph (NIRSpec) Instrument Science Team.

The light from these very distant objects has been stretched by the expansion of space. This is known as cosmological redshift. The farther the light has to travel, the more it is redshifted. In fact, the visible light emitted in the early universe is stretched so dramatically that it is shifted into the infrared by the time it arrives at us. With its suite of infrared-tuned instruments, Webb is uniquely suited to studying this kind of light.
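
The arithmetic of cosmological redshift is simple: the observed wavelength is the emitted one multiplied by (1 + z). Assuming a quasar at roughly z = 7, which corresponds approximately to the epoch the team describes, ultraviolet light lands in Webb's near-infrared range.

z = 7.0                                  # assumed redshift for illustration
lyman_alpha_nm = 121.6                   # ultraviolet line emitted by hydrogen near the quasar
observed_nm = lyman_alpha_nm * (1 + z)   # lambda_observed = lambda_emitted * (1 + z)
print(f"observed at about {observed_nm:.0f} nm, i.e. ~{observed_nm / 1000:.2f} micrometres (near-infrared)")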

Studying Quasars, Their Host Galaxies and Environments, and Their Powerful Outflows

The quasars the team will study are not only among the most distant in the universe, but also among the brightest. These quasars typically have the highest black hole masses, and they also have the highest accretion rates — the rates at which material falls into the black holes.

“We’re interested in observing the most luminous quasars because the very high amount of energy that they’re generating down at their cores should lead to the largest impact on the host galaxy by the mechanisms such as quasar outflow and heating,” said Chris Willott, a research scientist at the Herzberg Astronomy and Astrophysics Research Centre of the National Research Council of Canada (NRC) in Victoria, British Columbia. Willott is also the Canadian Space Agency’s Webb project scientist. “We want to observe these quasars at the moment when they’re having the largest impact on their host galaxies.”

An enormous amount of energy is liberated when matter is accreted by the supermassive black hole. This energy heats and pushes the surrounding gas outward, generating strong outflows that tear across interstellar space like a tsunami, wreaking havoc on the host galaxy.

Outflows play an important role in galaxy evolution. Gas fuels the formation of stars, so when gas is removed due to outflows, the star-formation rate decreases. In some cases, outflows are so powerful and expel such large amounts of gas that they can completely halt star formation within the host galaxy. Scientists also think that outflows are the main mechanism by which gas, dust and elements are redistributed over large distances within the galaxy or can even be expelled into the space between galaxies – the intergalactic medium. This may provoke fundamental changes in the properties of both the host galaxy and the intergalactic medium.

Examining Properties of Intergalactic Space During the Era of Reionization

More than 13 billion years ago, when the universe was very young, the view was far from clear. Neutral gas between galaxies made the universe opaque to some types of light. Over hundreds of millions of years, the neutral gas in the intergalactic medium became charged or ionized, making it transparent to ultraviolet light. This period is called the Era of Reionization. But what led to the reionization that created the “clear” conditions detected in much of the universe today? Webb will peer deep into space to gather more information about this major transition in the history of the universe. The observations will help us understand the Era of Reionization, which is one of the key frontiers in astrophysics.

The team will use quasars as background light sources to study the gas between us and the quasar. That gas absorbs the quasar’s light at specific wavelengths. Through a technique called imaging spectroscopy, they will look for absorption lines in the intervening gas. The brighter the quasar is, the stronger those absorption line features will be in the spectrum. By determining whether the gas is neutral or ionized, scientists will learn how neutral the universe is and how much of this reionization process has occurred at that particular point in time.
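
In spirit, the absorption-line search boils down to looking for wavelengths where the quasar's continuum-normalised flux dips well below one. The sketch below is purely illustrative, with made-up dips; the real analysis fits known atomic transitions in the NIRSpec spectra.

import numpy as np

wavelength_um = np.linspace(0.9, 1.3, 400)        # hypothetical near-infrared spectrum
flux = np.ones_like(wavelength_um)                # flux normalised by the quasar continuum
flux[[80, 81, 200, 201, 202]] = 0.3               # made-up dips from intervening gas

def absorption_lines(wl, f, depth=0.5):
    """Return wavelengths where the flux drops well below the continuum level of 1."""
    return wl[f < depth]

print(absorption_lines(wavelength_um, flux))       # wavelengths where absorbing gas sits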

“If you want to study the universe, you need very bright background sources. A quasar is the perfect object in the distant universe, because it’s luminous enough that we can see it very well,” said team member Camilla Pacifici, who is affiliated with the Canadian Space Agency but works as an instrument scientist at the Space Telescope Science Institute in Baltimore. “We want to study the early universe because the universe evolves, and we want to know how it got started.”

The team will analyze the light coming from the quasars with NIRSpec to look for what astronomers call “metals,” which are elements heavier than hydrogen and helium. These elements were formed in the first stars and the first galaxies and expelled by outflows. The gas moves out of the galaxies it was originally in and into the intergalactic medium. The team plans to measure the generation of these first “metals,” as well as the way they’re being pushed out into the intergalactic medium by these early outflows.

The Power of Webb

Webb is an extremely sensitive telescope able to detect very low levels of light. This is important, because even though the quasars are intrinsically very bright, the ones this team is going to observe are among the most distant objects in the universe. In fact, they are so distant that the signals Webb will receive are very, very low. Only with Webb’s exquisite sensitivity can this science be accomplished. Webb also provides excellent angular resolution, making it possible to disentangle the light of the quasar from its host galaxy.

The quasar programs described here are Guaranteed Time Observations involving the spectroscopic capabilities of NIRSpec.

The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021. Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.

Source: NASA





Scientists obtain magnetic nanopowder for 6G technology

Material scientists have developed a fast method for producing epsilon iron oxide and demonstrated its promise for next-generation communications devices. Its outstanding magnetic properties make it one of the most coveted materials for applications such as the upcoming 6G generation of communication devices and durable magnetic recording. The work was published in the Journal of Materials Chemistry C, a journal of the Royal Society of Chemistry.

Illustration of the change in absorption frequency for particles of different sizes. Image credit: Journal of Materials Chemistry C.

Iron(III) oxide is one of the most widespread oxides on Earth. It is mostly found as the mineral hematite (or alpha iron oxide, α-Fe2O3). Another stable and common modification is maghemite (or gamma modification, γ-Fe2O3). The former is widely used in industry as a red pigment, and the latter as a magnetic recording medium. The two modifications differ not only in crystalline structure (alpha iron oxide is hexagonal, gamma iron oxide cubic) but also in magnetic properties.

Crystalline structures of iron(III) oxide polymorphs. Image credit: Evgeny Gorbachev.

In addition to these forms of iron(III) oxide, there are more exotic modifications such as epsilon, beta, zeta and even a glassy form. The most attractive phase is epsilon iron oxide, ε-Fe2O3. This modification has an extremely high coercive force (the ability of the material to resist an external magnetic field), reaching 20 kOe at room temperature, which is comparable to the parameters of magnets based on expensive rare-earth elements. Furthermore, the material absorbs electromagnetic radiation in the sub-terahertz frequency range (100-300 GHz) through the effect of natural ferromagnetic resonance. The frequency of such resonance is one of the criteria for the use of materials in wireless communications devices – the 4G standard uses megahertz and 5G uses tens of gigahertz. There are plans to use the sub-terahertz range as a working range in sixth-generation (6G) wireless technology, which is expected to enter everyday use from the early 2030s.

The resulting material is suitable for the production of conversion units or absorber circuits at these frequencies. For example, composite ε-Fe2O3 nanopowders could be used to make paints that absorb electromagnetic waves and thus shield rooms from extraneous signals, as well as protect signals from interception from the outside. The ε-Fe2O3 itself can also be used in 6G reception devices.

Epsilon iron oxide is an extremely rare form of iron oxide that is difficult to obtain. Today, it is produced in very small quantities, with the process itself taking up to a month. This, of course, rules out its widespread application. The authors of the study developed a method for accelerated synthesis of epsilon iron oxide that reduces the synthesis time to one day (that is, a full cycle runs more than 30 times faster) and increases the quantity of the resulting product. The technique is simple to reproduce, cheap and can be easily implemented in industry, and the materials required for the synthesis – iron and silicon – are among the most abundant elements on Earth.

“Although the epsilon iron oxide phase was obtained in pure form relatively long ago, in 2004, it still has not found industrial application, for example as a medium for magnetic recording, due to the complexity of its synthesis. We have managed to simplify the technology considerably,” says Evgeny Gorbachev, a PhD student in the Department of Materials Sciences at Moscow State University and the first author of the work.

The authors of the experiment, Liudmila Alyabyeva and Evgeny Gorbachev, at the MIPT Laboratory of Terahertz Spectroscopy. Image credit: MIPT.

The key to successful application of materials with record-breaking characteristics is research into their fundamental physical properties. Without in-depth study, the material may be undeservedly forgotten for many years, as has happened more than once in the history of science. It was the tandem of materials scientists at Moscow State University, who synthesised the compound, and physicists at MIPT, who studied it in detail, that made the development a success.

“Materials with such high ferromagnetic resonance frequencies have enormous potential for practical applications. Today, terahertz technology is booming: it is the Internet of Things, it is ultra-fast communications, it is more narrowly focused scientific devices, and it is next-generation medical technology. While the 5G standard, which was very popular last year, operates at frequencies in the tens of gigahertz, our materials are opening the door to significantly higher frequencies (hundreds of gigahertz), which means that we are already dealing with 6G standards and higher. Now it’s up to engineers, we are happy to share the information with them and look forward to being able to hold a 6G phone in our hands,” says Dr. Liudmila Alyabyeva, Ph.D., senior researcher at the MIPT Laboratory of Terahertz Spectroscopy, where the terahertz research was carried out.

The editors considered the development so important that they put an illustration of the article on the cover of the 19th issue of the Royal Society of Chemistry’s Journal of Materials Chemistry C. The figure shows the change in absorption frequency for particles of different sizes. Credit: Journal of Materials Chemistry C.

Source: MIPT





New treatment demonstrated for people with vaccine clots

A new lifesaving treatment for people suffering from vaccine-related blood clots has been demonstrated by scientists in McMaster University’s Faculty of Health Sciences.


Researchers at the McMaster Platelet Immunology Laboratory (MPIL) are recommending a combination of two treatments, anti-clotting drugs and high doses of intravenous immunoglobulin, to combat vaccine-induced immune thrombotic thrombocytopenia (VITT).

The treatment’s effectiveness was described in a report on three Canadian patients who received the AstraZeneca vaccine and subsequently developed VITT. Two suffered clotting in their legs and the third had clots blocking arteries and veins inside their brain.

“If you were a patient with VITT, I’d be telling you we know of a treatment approach. We can diagnose it accurately with our tests, treat it and we know exactly how the treatment works,” said Ishac Nazy, scientific director of the lab and associate professor of medicine.

“Our job is to understand this disease mechanism so we can improve diagnosis and patient management. This study brings together successful lab diagnostics and patient care. It’s a true translational medicine approach, which is really our forte, bench-side to bedside.”

VITT occurs when antibodies attack a blood protein, called platelet factor 4 (PF4), which results in activation of platelets in the blood, causing them to clump together and form clots. Blood samples taken from the patients after treatment showed reduced antibody-mediated platelet activation in all cases.

While the study patients were older, many VITT cases have affected younger people. However, Nazy and his MPIL colleagues said VITT is a rare disorder, regardless of people’s age.

The lab’s scientists include professors of medicine Donald Arnold and John Kelton and professor of pathology and molecular medicine Ted Warkentin. Together they devised an effective VITT test and treatment by building on their previous investigations of heparin-induced thrombocytopenia (HIT).

While the two conditions are similar, using a standard HIT antibody test to detect VITT can yield false negative results.

This led the scientists to modify the HIT test to detect VITT-specific antibodies that are found, albeit rarely, in patients who had a COVID-19 vaccine.

Subsequent lab tests on patient blood samples showed how high doses of immunoglobulin coupled with blood-thinner medications shut down platelet activation and stopped clot formation.

“We now understand the mechanism that leads to platelet activation and clotting,” said Nazy.

The study was published in The New England Journal of Medicine. External funding for the study was provided by the Canadian Institutes of Health Research.

Source: McMaster University





Productive 3D bioprinter could help speed up drug development

A 3D printer that rapidly produces large batches of custom biological tissues could help make drug development faster and less costly. U.S. National Science Foundation-funded nanoengineers at the University of California San Diego developed the high-throughput bioprinting technology, which 3D prints with record speed — it can produce a 96-well array of living human tissue samples within 30 minutes.

The ability to rapidly produce such samples could accelerate high-throughput preclinical drug screening and disease modeling, the scientists said.

Examples of the geometries the high-throughput 3D bioprinter can rapidly produce. Image credit: UCSD

“This research has the potential to facilitate the improved investigation of diseases, eventually leading to novel therapies and successful treatments,” said Nora Savage, a program director in NSF’s Directorate for Engineering.

The process for a pharmaceutical company to develop a new drug can take up to 15 years and cost up to $2.6 billion. It generally begins with screening tens of thousands of drug candidates in test tubes. Successful candidates then get tested in animals, and any that pass this stage move on to clinical trials. With luck, one of these candidates will make it into the market as an FDA-approved drug.

The high-throughput 3D bioprinting technology could accelerate the first steps of this process. It would enable drug developers to rapidly build up large quantities of human tissues on which they could test and weed out drug candidates much earlier.

“With human tissues, you can get better data — real human data — on how a drug will work,” said Shaochen Chen, a nanoengineer at UC San Diego. “Our technology can create these tissues with high-throughput capability, high reproducibility and high precision. This could really help the pharmaceutical industry quickly identify and focus on the most promising drugs.”

The work was published in the journal Biofabrication. The researchers note that while their technology might not eliminate animal testing, it could minimize failures encountered during that stage.

Source: NSF





Worrying insights into the chemicals in plastics

ETH Zurich researchers examined chemicals in plastics worldwide. They found an unexpectedly high number of substances of potential concern intentionally used in everyday plastic products. A lack of transparency limits management of these chemicals.

Plastic is practical, cheap and incredibly popular. Every year, more than 350 million tonnes are produced worldwide. These plastics contain a huge variety of chemicals that may be released during their lifecycles – including substances that pose a significant risk to people and the environment. However, only a small proportion of the chemicals contained in plastic are publicly known or have been extensively studied.

A team of researchers led by Stefanie Hellweg, ETH Zurich Professor of Ecological Systems Design, has for the first time compiled a comprehensive database of plastic monomers, additives and processing aids used in the production and processing of plastics on the world market, and systematically categorized them on the basis of usage patterns and hazard potential.

The study, just published in the scientific journal Environmental Science & Technology, provides an enlightening but worrying insight into the world of chemicals that are intentionally added to plastics.

A high level of chemical diversity

The team identified around 10,500 chemicals in plastic. Many are used in packaging (2,489), textiles (2,429) and food-contact applications (2,109); some are for toys (522) and medical devices, including masks (247).

Of the 10,500 substances identified, the researchers categorized 2,480 substances (24 percent) as substances of potential concern.

“This means that almost a quarter of all the chemicals used in plastic are either highly stable, accumulate in organisms or are toxic. These substances are often toxic to aquatic life, cause cancer or damage specific organs,” explains Helene Wiesinger, doctoral student at the Chair of Ecological Systems Design and lead author of the study. About half are chemicals with high production volumes in the EU or the US.

“It is particularly striking that many of the questionable substances are barely regulated or are ambiguously described,” continues Wiesinger. In fact, 53 percent of all the substances of potential concern are not regulated in the US, the EU or Japan. More surprisingly, 901 hazardous substances are approved for use in food contact plastics in these regions. Finally, scientific studies are lacking for about 10 percent of the identified substances of potential concern.

Plastic monomers, additives and processing aids

Plastics are made of organic polymers built up from repeating monomer units. A wide variety of additives, such as antioxidants, plasticisers and flame retardants, give the polymer matrix the desired properties. Catalysts, solvents and other chemicals are also used as processing aids in production.

“Until now, research, industry and regulators have mainly concentrated on a limited number of dangerous chemicals known to be present in plastics,” says Wiesinger. Today, plastic packaging is seen as a main source of organic contamination in food, while phthalate plasticisers and brominated flame retardants are detectable in house dust and indoor air. Earlier studies have already indicated that significantly more plastic chemicals used worldwide are potentially hazardous.

Nevertheless, the results of the inventory came as an unpleasant surprise to the researchers. “The unexpectedly high number of substances of potential concern is worrying,” says Zhanyun Wang, senior scientist in Hellweg’s group. Exposure to such substances can have a negative impact on the health of consumers and workers and on polluted ecosystems. Problematic chemicals can also affect recycling processes and the safety and quality of recycled plastics.

Wang stresses that even more chemicals in plastics could be problematic. “Recorded hazard data are often limited and scattered. For 4,100, or 39 percent, of all the substances we identified, we were not able to categorize them due to a lack of hazard classifications,” he says.

A lack of data and transparency

The two researchers identified the lack of transparency in chemicals in plastics and dispersed data silos as the main problem. In over two and a half years of detective work, they combed through more than 190 publicly accessible data sources from research, industry and authorities and identified 60 sources with sufficient information about intentionally added substances in plastics. “We found multiple critical knowledge and data gaps, in particular for the substances and their actual uses. This ultimately hinders consumers’ choice of safe plastic products”, they say.

Wiesinger and Wang are pursuing the goal of a sustainable circular plastic economy. They see an acute need for effective global chemicals management; such a system would have to be transparent and independent and oversee all hazardous substances in full. The two researchers say that open and easy access to reliable information is crucial.

Source: ETH Zurich





Exploring links between segregation and cardiovascular diseases

A University of Texas at Arlington researcher is examining how historic segregation in the United States may contribute to cardiovascular disease among individuals from minority or low-income groups.

“If this research does find that neighborhood factors, like racial segregation and income, impact cardiovascular disease in minority and low-income people, then we can begin to approach those issues from a local policies perspective and hopefully reduce cardiovascular disease for the entire community,” said Yeonwoo Kim, assistant professor in the Department of Kinesiology in UTA’s College of Nursing and Health Innovation.

Cardiovascular disease remains the leading cause of death in the United States, but its impact varies by race and socioeconomic status. Black Americans and people of low socioeconomic status have earlier onsets of heart disease and a greater risk of dying compared to white Americans.

In a separate study, Kim is exploring the effects of built and social environments—such as health care resources, food accessibility, socioeconomic status, crime prevention and recreational facilities—on the health disparities found in people with cardiovascular disease.

Both projects use data from the Health and Retirement Study, which collected comprehensive information on health, health behaviors and socioeconomic status from 20,000 adults over age 50 biennially between 2004 and 2014.

This research is pivotal to addressing cardiovascular disease at the public health level, Kim said.

“If we change the neighborhood factors, then we could reduce the whole population’s risk. The magnitude of that impact would be great,” she said.

Source: University of Texas at Arlington



