
New way found of monitoring volcanic ash cloud

The eruption of the Icelandic volcano Eyjafjallajökull in April this year resulted in a giant ash cloud, which – at one point covering most of Europe – brought international aviation to a temporary standstill and caused travel chaos for tens of thousands of travellers.
New research, to be published today, Friday 10 December, in IOP Publishing’s Environmental Research Letters, shows that lightning could be used as part of an integrated approach to estimate volcanic plume properties.
The scientists found that during many of the periods of significant volcanic activity, the ash plume was sufficiently electrified to generate lightning, which was measured by the UK Met Office’s long range lightning location network (ATDnet), operating in the Very Low Frequency radio spectrum.
The measurements suggest a general correlation between lightning frequency and plume height. The method has the advantage that lightning can be detected many thousands of kilometers away, by day or night and in all weather conditions.

As the researchers write, “When a plume becomes sufficiently electrified to produce lightning, the rate of lightning generation provides a method of remotely monitoring the plume height, offering clear benefits to the volcanic monitoring community.”
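To make the idea concrete, here is a minimal sketch of how a plume-height estimate from lightning counts might look, assuming a purely hypothetical linear calibration between strike rate and plume height; the actual ATDnet analysis and the calibration used by the researchers are not given in this summary.

```python
# Illustrative sketch only: estimate plume height from a lightning strike rate,
# assuming a hypothetical linear calibration. The real ATDnet-based analysis and
# the researchers' calibration are not reproduced here.
from datetime import datetime, timedelta

def strikes_per_hour(strike_times, window_end, window_hours=1.0):
    """Count located strikes in the trailing window and convert to a rate."""
    window_start = window_end - timedelta(hours=window_hours)
    n = sum(window_start <= t <= window_end for t in strike_times)
    return n / window_hours

def plume_height_km(rate_per_hour, base_km=2.0, km_per_strike_rate=0.05):
    """Hypothetical calibration: estimated height grows linearly with strike rate."""
    return base_km + km_per_strike_rate * rate_per_hour

now = datetime(2010, 4, 17, 12, 0)
detected = [now - timedelta(minutes=m) for m in range(0, 60, 2)]  # 30 strikes in the past hour
print(f"estimated plume height: {plume_height_km(strikes_per_hour(detected, now)):.1f} km")
```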
 
Note: This story has been adapted from a news release issued by the Institute of Physics

Using chaos to model geophysical phenomena

Geophysical phenomena such as the dynamics of the atmosphere and ocean circulation are typically modeled mathematically by tracking the motion of air or water particles. These mathematical models define velocity fields that, given (i) a position in three-dimensional space and (ii) a time instant, provide a speed and direction for a particle at that position and time.
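As a concrete illustration of the kind of object being described, the toy function below plays the role of such a velocity field: given a position and a time, it returns a velocity vector. The formula is invented purely for illustration and is not one of the models discussed here.

```python
# Toy velocity field in the sense described above: a function of position and time
# that returns a velocity vector (speed and direction). Invented for illustration;
# not a real atmospheric or ocean model.
import numpy as np

def velocity_field(position, t):
    x, y, z = position
    swirl = np.cos(2 * np.pi * t / 86400.0)          # made-up daily modulation
    return np.array([-y * swirl, x * swirl, 0.01 * np.sin(z)])

v = velocity_field((1.0, 0.5, 2.0), t=3600.0)        # velocity at one point, one hour in
```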
“Geophysical phenomena are still not fully understood, especially in turbulent regimes,” explains Gary Froyland at the School of Mathematics and Statistics and the Australian Research Council Centre of Excellence for Mathematics and Statistics of Complex Systems (MASCOS) at the University of New South Wales in Australia.

“Nevertheless, it is very important that scientists can quantify the ‘transport’ properties of these geophysical systems: Put very simply, how does a packet of air or water get from A to B, and how large are these packets? An example of one of these packets is the Antarctic polar vortex, a rotating mass of air in the stratosphere above Antarctica that traps chemicals such as ozone and chlorofluorocarbons (CFCs), exacerbating the effect of the CFCs on the ozone hole,” Froyland says.

 
In the American Institute of Physics’ journal CHAOS, Froyland and his research team, including colleague Adam Monahan from the School of Earth and Ocean Sciences at the University of Victoria in Canada, describe how they developed the first direct approach for identifying these packets, called “coherent sets” due to their nondispersive properties.
This technique is based on so-called “transfer operators,” which represent a complete description of the ensemble evolution of the fluid. The transfer operator approach is very simple to implement, they say, requiring only singular vector computations of a matrix of transitions induced by the dynamics.
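The following is a minimal sketch of the general recipe described in that quote: discretize the domain into boxes, estimate a transition matrix from particle motions over one time step, and read off coherent sets from its singular vectors. It uses random box indices rather than real trajectory data, and it simplifies the normalization choices made in the paper.

```python
# Minimal sketch of the transfer operator idea: discretize the domain into boxes,
# estimate a transition matrix from observed particle motions, and use its singular
# vectors to split the domain into two coherent (nondispersive) sets.
# Box indices here are random stand-ins, not real trajectory data.
import numpy as np

def transition_matrix(boxes_t0, boxes_t1, n_boxes):
    """Row-stochastic matrix of box-to-box transitions observed for the particles."""
    P = np.zeros((n_boxes, n_boxes))
    for i, j in zip(boxes_t0, boxes_t1):
        P[i, j] += 1.0
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

def coherent_sets(P):
    """Sign structure of the second singular vectors gives a two-set partition."""
    U, s, Vt = np.linalg.svd(P)
    return U[:, 1] > 0, Vt[1, :] > 0   # set at the initial time, and its image later

rng = np.random.default_rng(0)
b0 = rng.integers(0, 100, size=5000)   # box index of each particle at time t
b1 = rng.integers(0, 100, size=5000)   # box index of the same particle at t + tau
set_initial, set_final = coherent_sets(transition_matrix(b0, b1, 100))
```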
 
When tested using European Centre for Medium-Range Weather Forecasts (ECMWF) data, they found that their new methodology was significantly better than existing techniques at identifying the location and transport properties of the vortex.
 
The transfer operator methodology has myriad applications in atmospheric science and physical oceanography to discover the main transport pathways in the atmosphere and oceans, and to quantify the transport. “As atmosphere-ocean models continue to increase in resolution with improved computing power, the analysis and understanding of these models with techniques such as transfer operators must be undertaken beyond pure simulation,” says Froyland.
 
Their next application will be the Agulhas rings off the South African coast, because the rings are responsible for a significant amount of transport of warm water and salt between the Indian and Atlantic Oceans.
 
Note: This story has been adapted from a news release issued by the American Institute of Physics

New research shows rivers cut deep notches in the Alps’ broad glacial valleys

For years, geologists have argued about the processes that formed steep inner gorges in the broad glacial valleys of the Swiss Alps.
 
The U-shaped valleys were created by slow-moving glaciers that behaved something like road graders, eroding the bedrock over hundreds or thousands of years. When the glaciers receded, rivers carved V-shaped notches, or inner gorges, into the floors of the glacial valleys. But scientists disagreed about whether those notches were erased by subsequent glaciers and then formed all over again as the second round of glaciers receded.
New research led by a University of Washington scientist indicates that the notches endure, at least in part, from one glacial episode to the next. The glaciers appear to fill the gorges with ice and rock, protecting them from being scoured away as the glaciers move.
When the glaciers receded, the resulting rivers returned to the gorges and easily cleared out the debris deposited there, said David Montgomery, a UW professor of Earth and space sciences.

“The alpine inner gorges appear to lay low and endure glacial attack. They are topographic survivors,” Montgomery said.

“The answer is not so simple that the glaciers always win. The river valleys can hide under the glaciers and when the glaciers melt the rivers can go back to work.”
 
Montgomery is lead author of a paper describing the research, published online Dec. 5 in Nature Geoscience. Co-author is Oliver Korup of the University of Potsdam in Germany, who did the work while with the Swiss Federal Research Institutes in Davos, Switzerland.
 
The researchers used topographic data taken from laser-based (LIDAR) measurements to determine that, if the gorges were erased with each glacial episode, the rivers would have had to erode the bedrock from one-third to three-quarters of an inch per year since the last glacial period to get gorges as deep as they are today.
“That is screamingly fast. It’s really too fast for the processes,” Montgomery said. Such erosion rates would exceed those in all areas of the world except the most tectonically active regions, the researchers said, and they would have to maintain those rates for 1,000 years.
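For a rough sense of the numbers, the sketch below reproduces the kind of back-of-envelope rate calculation involved, using assumed illustrative values (a 200-meter-deep gorge re-cut over roughly 15,000 years since deglaciation) rather than figures taken from the paper.

```python
# Back-of-envelope check of the required incision rate, using illustrative numbers
# (a 200 m deep gorge re-cut over ~15,000 years since the last deglaciation);
# these are assumptions for scale, not figures from the Nature Geoscience paper.
gorge_depth_m = 200.0
years_since_deglaciation = 15_000

rate_mm_per_yr = gorge_depth_m * 1000 / years_since_deglaciation   # ~13 mm/yr
rate_inch_per_yr = rate_mm_per_yr / 25.4                           # ~0.5 inch/yr

print(f"required incision rate: {rate_mm_per_yr:.1f} mm/yr "
      f"({rate_inch_per_yr:.2f} inch/yr)")
# ~0.5 inch/yr falls in the one-third to three-quarters of an inch per year range
# quoted above, rates matched only in the most tectonically active regions.
```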
 
Montgomery and Korup found other telltale evidence, sediment from much higher elevations and older than the last glacial deposits, at the bottom of the river gorges. That material likely was pushed into the gorges as glaciers moved down the valleys, indicating the gorges formed before the last glaciers.

“That means the glaciers aren’t cutting down the bedrock as fast as the rivers do. If the glaciers were keeping up, each time they’d be able to erase the notch left by the river,” Montgomery said.
 
“They’re locked in this dance, working together to tear the mountains down.”
The work raises questions about how common the preservation of gorges might be in other mountainous regions of the world.

“It shows that inner gorges can persist, and so the question is, ‘How typical is that?’ I don’t think every inner gorge in the world survives multiple glaciations like that, but the Swiss Alps are a classic case. That’s where mountain glaciation was first discovered.”
 
Note: This story has been adapted from a news release issued by the University of Washington

SCEC’s ‘M8’ earthquake simulation breaks computational records, promises better quake models

A multi-disciplinary team of researchers has presented the world’s most advanced earthquake shaking simulation at the Supercomputing 2010 (SC10) conference held this week in New Orleans. The research was selected as a finalist for the Gordon Bell prize, awarded at the annual conference for outstanding achievement in high-performance computing applications.

The “M8” simulation models how a magnitude 8.0 earthquake on the southern San Andreas Fault would shake the region, covering a larger area in greater detail than was previously possible. Perhaps most importantly, the development of the M8 simulation advances the state of the art in the speed and efficiency at which such calculations can be performed.

The Southern California Earthquake Center (SCEC) at the University of Southern California (USC) was the lead coordinator in the project. San Diego Supercomputer Center (SDSC) researchers provided the high-performance computing and scientific visualization expertise for the simulation. Scientific details of the earthquake were developed by scientists at San Diego State University (SDSU). Ohio State University (OSU) researchers were also part of the collaborative effort to improve the efficiency of the software involved.
While this specific earthquake has a low probability of occurrence, the improvements in technology required to produce this simulation will now allow scientists to simulate other, more likely earthquake scenarios in much less time than previously required. Because such simulations are among the most important and widespread applications of high-performance computing for seismic hazard estimation currently in use, the SCEC team has focused on optimizing the technologies and codes needed to create them.
The M8 simulation was funded through a number of National Science Foundation (NSF) grants and was performed using supercomputing resources including NSF’s Kraken supercomputer at the National Institute for Computational Sciences (NICS) and the Department of Energy (DOE) Jaguar supercomputer at the National Center for Computational Sciences. The SCEC M8 simulation represents the latest in earthquake science and in computations at the petascale level, which refers to supercomputers capable of more than one quadrillion floating point operations (calculations) per second.
“Petascale simulations such as this one are needed to understand the rupture and wave dynamics of the largest earthquakes, at shaking frequencies required to engineer safe structures,” said Thomas Jordan, director of SCEC and Principal Investigator for the project. Previous simulations were useful only for modeling how tall structures will behave in earthquakes, but the new simulation can be used to understand how a broader range of buildings will respond.
“The scientific results of this massive simulation are very interesting, and its level of detail has allowed us to observe things that we were not able to see in the past,” said Kim Olsen, professor of geological sciences at SDSU and lead seismologist of the study.
However, given the massive number of calculations required, only the most advanced supercomputers are capable of producing such simulations in a reasonable time period. “This M8 simulation represents a milestone calculation, a breakthrough in seismology both in terms of computational size and scalability,” said Yifeng Cui, a computational scientist at SDSC. “It’s also the largest and most detailed simulation of a major earthquake ever performed in terms of floating point operations, and opens up new territory for earthquake science and engineering with the goal of reducing the potential for loss of life and property.”
Specifically, the M8 simulation is the largest to date in terms of the duration of shaking modeled (six minutes) and the geographical area covered – a rectangular volume approximately 500 miles (810 km) long by 250 miles (405 km) wide by 50 miles (85 km) deep. The team’s latest research also set a new record in the number of computer processor cores used, with 223,074 cores sustaining performance of 220 trillion calculations per second for 24 hours on the Jaguar Cray XT5 supercomputer at Oak Ridge National Laboratory (ORNL) in Tennessee.
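A quick back-of-envelope tally, using only the sustained rate and run time quoted above, gives a sense of the total computational work involved.

```python
# Rough tally of the floating-point work implied by the figures quoted above:
# 220 trillion calculations per second sustained for 24 hours.
sustained_rate = 220e12            # operations per second
run_seconds = 24 * 3600            # one full day on Jaguar

total_ops = sustained_rate * run_seconds
print(f"total operations: {total_ops:.2e}")   # ~1.9e19 floating point operations
```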
“We have come a long way in just six years, doubling the seismic frequencies modeled by our simulations every two to three years, from 0.5 Hertz (or cycles per second) in the TeraShake simulations, to 1.0 Hertz in the ShakeOut simulations, and now to 2.0 Hertz in this latest project,” said Phil Maechling, SCEC’s associate director for Information Technology.
In terms of earthquake science, these simulations can be used to study issues of how earthquake waves travel through structures in the earth’s crust and to improve three-dimensional models of such structures.
“Based on our calculations, we are finding that deep sedimentary basins, such as those in the Los Angeles area, are getting larger shaking than is predicted by the standard methods,” Jordan said. “By improving the predictions, making them more realistic, we can help engineers make new buildings safer.” The simulations are also useful in developing better seismic hazard policies and for improving scenarios used in emergency planning.
Note: This story has been adapted from a news release issued by the University of Southern California

Scientists look deeper for coal ash hazards

As the U.S. Environmental Protection Agency weighs whether to define coal ash as hazardous waste, a Duke University study identifies new monitoring protocols and insights that can help investigators more accurately measure and predict the ecological impacts of coal ash contaminants.

“The take-away lesson is we need to change how and where we look for coal ash contaminants,” says Avner Vengosh, professor of geochemistry and water quality at Duke’s Nicholas School of the Environment. “Risks to water quality and aquatic life don’t end with surface water contamination, but much of our current monitoring does.”

The study, published online this week in the peer-reviewed journal Environmental Science and Technology, documents contaminant levels in aquatic ecosystems over an 18-month period following a massive coal sludge spill in 2008 at a Tennessee Valley Authority power plant in Kingston, Tenn.

By analyzing more than 220 water samples collected over the 18-month period, the Duke team found that high concentrations of arsenic from the TVA coal ash remained in pore water — water trapped within river-bottom sediment — long after contaminant levels in surface waters dropped back below safe thresholds.
Samples extracted from 10 centimeters to half a meter below the surface of sediment in downstream rivers contained arsenic levels of up to 2,000 parts per billion – well above the EPA’s thresholds of 10 parts per billion for safe drinking water and 150 parts per billion for protection of aquatic life.

“It’s like cleaning your house,” Vengosh says of the finding. “Everything may look clean, but if you look under the rugs, that’s where you find the dirt.”
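To put the reported concentrations in perspective, the short calculation below compares the measured pore-water arsenic level with the two EPA thresholds quoted above.

```python
# Compare the reported pore-water arsenic concentration with the EPA thresholds
# quoted above (all values in parts per billion).
arsenic_ppb = 2000
drinking_water_limit_ppb = 10
aquatic_life_limit_ppb = 150

print(f"{arsenic_ppb / drinking_water_limit_ppb:.0f}x the drinking-water threshold")  # 200x
print(f"{arsenic_ppb / aquatic_life_limit_ppb:.0f}x the aquatic-life threshold")      # ~13x
```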

The potential impacts of pore water contamination extend far beyond the river bottom, he explains, because “this is where the biological food chain begins, so any bioaccumulation of toxins will start here.”

The research team, which included two graduate students from Duke’s Nicholas School of the Environment and Pratt School of Engineering, also found that acidity and the loss or gain of oxygen in water play key roles in controlling how arsenic, selenium and other coal ash contaminants leach into the environment. Knowing this will help scientists better predict the fate and migration of contaminants derived from coal ash residues, particularly those stored in holding ponds and landfills, as well as any potential leakage into lakes, rivers and other aquatic systems.

The study comes as the EPA is considering whether to define ash from coal-burning power plants as hazardous waste. The deadline for public comment to the EPA was Nov. 19; a final ruling — what Vengosh calls “a defining moment” — is expected in coming months.

“At more than 3.7 million cubic meters, the scope of the TVA spill is unprecedented, but similar processes are taking place in holding ponds, landfills and other coal ash storage facilities across the nation,” he says. “As long as coal ash isn’t regulated as hazardous waste, there is no way to prevent discharges of contaminants from these facilities and protect the environment.”

Note: This story has been adapted from a news release issued by Duke University
