
Cave reveals Southwest’s abrupt climate swings during Ice Age

Sarah Truebe, a geosciences doctoral student at the University of Arizona, checks on an experiment that measures how fast cave formations grow in Arizona’s Cave of the Bells. – Copyright 2010 Stella Cousins.
Ice Age climate records from an Arizona stalagmite link the Southwest’s winter precipitation to temperatures in the North Atlantic, according to new research.
 
The finding is the first to document that the abrupt changes in Ice Age climate known from Greenland also occurred in the southwestern U.S., said co-author Julia E. Cole of the University of Arizona in Tucson.
“It’s a new picture of the climate in the Southwest during the last Ice Age,” said Cole, a UA professor of geosciences. “When it was cold in Greenland, it was wet here, and when it was warm in Greenland, it was dry here.”

The researchers tapped into the natural climate archives recorded in a stalagmite from a limestone cave in southern Arizona. Stalagmites grow up from cave floors.
The stalagmite yielded an almost continuous, century-by-century climate record spanning 55,000 to 11,000 years ago. During that time ice sheets covered much of North America, and the Southwest was cooler and wetter than it is now.

Cole and her colleagues found the Southwest flip-flopped between wet and dry periods during the period studied.

Each climate regime lasted from a few hundred years to more than one thousand years, she said. In many cases, the transition from wet to dry or vice versa took less than 200 years.

“These changes are part of a global pattern of abrupt changes that were first documented in Greenland ice cores,” she said. “No one had documented those changes in the Southwest before.”

Scientists suggest that changes in the northern Atlantic Ocean’s circulation drove the changes in Greenland’s Ice Age climate, Cole said. “Those changes resulted in atmospheric changes that pushed around the Southwest’s climate.”
 
She added that observations from the 20th and 21st centuries link modern-day alterations in the North Atlantic’s temperature with changes in the storm track that controls the Southwest’s winter precipitation.

“Also, changes in the storm track are the kinds of changes we expect to see in a warming world,” she said. “When you warm the North Atlantic, you move the storm track north.”
The team’s paper, “Moisture Variability in the Southwestern U.S. Linked to Abrupt Glacial Climate Change,” is scheduled for publication in the February issue of Nature Geoscience.

Cole’s UA co-authors are Jennifer D. M. Wagner, J. Warren Beck, P. Jonathan Patchett and Heidi R. Barnett. Co-author Gideon M. Henderson is from the University of Oxford, U.K.
 
Cole became interested in studying cave formations as natural climate archives about 10 years ago. At the suggestion of some local cave specialists, she and her students began working in the Cave of the Bells, an active limestone cave in the Santa Rita Mountains.

In such a cave, mineral-rich water percolates through the soil into the cave below and onto its floor. As the water loses carbon dioxide, the mineral known as calcium carbonate is left behind. As the calcium carbonate accumulates in the same spot on the cave floor over thousands of years, it forms a stalagmite.

The researchers chose the particular stalagmite for study because it was deep enough in the cave that the humidity was always high, an important condition for preservation of climate records, Cole said. Following established cave conservation protocols, the researchers removed the formation, which was less than 18 inches tall.

For laboratory analyses, first author Wagner took a core about one inch in diameter from the center of the stalagmite. The scientists then returned the formation to the cave, glued it back into its previous location with special epoxy and capped it with a limestone plug.

To read the climate record preserved in the stalagmite, Wagner sliced the core lengthwise several times for several different analyses.

On one slice, she shaved off more than 1,200 hair-thin samples, each 100 microns thick, and measured the oxygen isotopes each one contained.

A rare form of oxygen, oxygen-18, is more common in the calcium carbonate deposited during dry years. By seeing how much oxygen-18 was present in each layer, the scientists could reconstruct the region’s pattern of wet and dry climate.

To assign dates to each wet and dry period, Wagner used another slice of the core for an analysis called uranium-thorium dating.

The radioactive element uranium is present in minute amounts in the water dripping onto a stalagmite. The uranium then becomes part of the formation. Uranium decays into the element thorium at a steady and known rate, so the amount of thorium that has accumulated can be used to construct a timeline of a stalagmite’s growth.
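As a rough illustration of the dating principle, here is a minimal sketch of the simplest uranium-thorium age equation, assuming the calcite started with no thorium and that uranium-234 is in equilibrium with uranium-238; real stalagmite dating corrects for both, and the half-life used is approximate, so this is an outline of the idea rather than the team’s procedure.

```python
import math

# Approximate half-life of thorium-230; the decay constant follows from it.
TH230_HALF_LIFE_YEARS = 75_000
LAMBDA_TH230 = math.log(2) / TH230_HALF_LIFE_YEARS

def u_th_age(th230_u238_activity_ratio):
    """Age in years from a measured 230Th/238U activity ratio, assuming no
    initial thorium and secular equilibrium of the uranium isotopes."""
    # Thorium grows in toward equilibrium: R(t) = 1 - exp(-lambda * t)
    return -math.log(1.0 - th230_u238_activity_ratio) / LAMBDA_TH230

# Example: a layer whose activity ratio has grown to 0.35 dates to ~47,000 years.
print(round(u_th_age(0.35)))
```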

By matching the stalagmite’s growth timeline with the sequence of wet and dry periods revealed by the oxygen analyses, the researchers could tell in century-by-century detail when the Southwest was wet and when it was dry.

“This work shows the promise of caves for providing climate records for the Southwest. It’s a new kind of climate record for this region,” Cole said.

She and her colleagues are now expanding their efforts by sampling other cave formations in the region.
 
Note: This story has been adapted from a news release issued by the University of Arizona

Earth’s hot past: Prologue to future climate?

If carbon dioxide emissions continue on their current trajectory, Earth may someday return to an ancient, hotter climate when the Antarctic ice sheet didn’t exist. – NOAA
The magnitude of climate change during Earth’s deep past suggests that future temperatures may eventually rise far more than projected if society continues its pace of emitting greenhouse gases, a new analysis concludes.
 
The study, by National Center for Atmospheric Research (NCAR) scientist Jeffrey Kiehl, will appear as a “Perspectives” article in this week’s issue of the journal Science.
The work was funded by the National Science Foundation (NSF), NCAR’s sponsor.

Building on recent research, the study examines the relationship between global temperatures and high levels of carbon dioxide in the atmosphere tens of millions of years ago.

It warns that, if carbon dioxide emissions continue at their current rate through the end of this century, atmospheric concentrations of the greenhouse gas will reach levels that existed about 30 million to 100 million years ago.

Global temperatures then averaged about 29 degrees Fahrenheit (16 degrees Celsius) above pre-industrial levels.

Kiehl said that global temperatures may take centuries or millennia to fully adjust in response to the higher carbon dioxide levels.

According to the study, and based on recent computer model studies of geochemical processes, elevated levels of carbon dioxide may remain in the atmosphere for tens of thousands of years.

The study also indicates that the planet’s climate system, over long periods of time, may be at least twice as sensitive to carbon dioxide as currently projected by computer models, which have generally focused on shorter-term warming trends.

This is largely because even sophisticated computer models have not yet been able to incorporate critical processes, such as the loss of ice sheets, that take place over centuries or millennia and amplify the initial warming effects of carbon dioxide.

“If we don’t start seriously working toward a reduction of carbon emissions, we are putting our planet on a trajectory that the human species has never experienced,” says Kiehl, a climate scientist who specializes in studying global climate in Earth’s geologic past.

“We will have committed human civilization to living in a different world for multiple generations.”

The Perspectives article pulls together several recent studies that look at various aspects of the climate system, while adding a mathematical approach by Kiehl to estimate average global temperatures in the distant past.

Its analysis of the climate system’s response to elevated levels of carbon dioxide is supported by previous studies that Kiehl cites.

“This research shows that squaring the evidence of environmental change in the geologic record with mathematical models of future climate is crucial,” says David Verardo, Director of NSF’s Paleoclimate Program. “Perhaps Shakespeare’s words that ‘what’s past is prologue’ also apply to climate.”

Kiehl focused on a fundamental question: when was the last time Earth’s atmosphere contained as much carbon dioxide as it may by the end of this century?

If society continues its current pace of increasing the burning of fossil fuels, atmospheric levels of carbon dioxide are expected to reach about 900 to 1,000 parts per million by the end of this century.
 
That compares with current levels of about 390 parts per million, and pre-industrial levels of about 280 parts per million.

Since carbon dioxide is a greenhouse gas that traps heat in Earth’s atmosphere, it is critical for regulating Earth’s climate.

Without carbon dioxide, the planet would freeze over.
 
But as atmospheric levels of the gas rise, which has happened at times in the geologic past, global temperatures increase dramatically and additional greenhouse gases, such as water vapor and methane, enter the atmosphere through processes related to evaporation and thawing.

This leads to further heating.

Kiehl drew on recently published research that, by analyzing molecular structures in fossilized organic materials, showed that carbon dioxide levels likely reached 900 to 1,000 parts per million about 35 million years ago.

At that time, temperatures worldwide were substantially warmer than at present, especially in polar regions–even though the Sun’s energy output was slightly weaker.

The high levels of carbon dioxide in the ancient atmosphere kept the tropics at about 9-18 F (5-10 C) above present-day temperatures.

The polar regions were some 27-36 F (15-20 C) above present-day temperatures.

Kiehl applied mathematical formulas to calculate that Earth’s average annual temperature 30 to 40 million years ago was about 88 F (31 C)–substantially higher than the pre-industrial average temperature of about 59 F (15 C).

The study also found that carbon dioxide may have at least twice the effect on global temperatures that computer models of global climate currently project.

The world’s leading computer models generally project that a doubling of carbon dioxide in the atmosphere would have a heating impact in the range of 0.5 to 1.0 degrees Celsius per watt per square meter. (The unit is a measure of the sensitivity of Earth’s climate to changes in greenhouse gases.)

However, the published data show that the comparable impact of carbon dioxide 35 million years ago amounted to about 2 degrees Celsius per watt per square meter.
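A back-of-envelope reading of those figures, using the standard approximation that a doubling of carbon dioxide adds roughly 3.7 watts per square meter of radiative forcing (an assumption of this sketch, not a number from Kiehl’s paper):

```python
import math

# Standard logarithmic forcing approximation for CO2: dF = 5.35 * ln(C/C0).
forcing_2xco2 = 5.35 * math.log(2)   # ~3.7 W/m^2 for a doubling

# Sensitivities quoted above, in degrees Celsius per watt per square meter.
for sensitivity in (0.5, 1.0, 2.0):
    warming = sensitivity * forcing_2xco2
    print(f"{sensitivity} C per W/m^2 -> ~{warming:.1f} C per CO2 doubling")

# Model-range sensitivities (0.5-1.0) give roughly 1.9-3.7 C of warming per
# doubling, while the paleoclimate-derived ~2 gives roughly 7.4 C.
```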

Computer models successfully capture the short-term effects of increasing carbon dioxide in the atmosphere.

But the record from Earth’s geologic past also encompasses longer-term effects, which accounts for the discrepancy in findings.

The eventual melting of ice sheets, for example, leads to additional heating because exposed dark surfaces of land or water absorb more heat than ice sheets.

“This analysis shows that on longer time scales, our planet may be much more sensitive to greenhouse gases than we thought,” Kiehl says.

Climate scientists are currently adding more sophisticated depictions of ice sheets and other factors to computer models.

As these improvements come on-line, Kiehl believes that the computer models and the paleoclimate record will be in closer agreement, showing that the impacts of carbon dioxide on climate over time will likely be far more substantial than recent research has indicated.

Because carbon dioxide is being pumped into the atmosphere at a rate that has never been experienced, Kiehl could not estimate how long it would take for the planet to fully heat up.
However, a rapid warm-up would make it especially difficult for societies and ecosystems to adapt, he says.

If emissions continue on their current trajectory, “the human species and global ecosystems will be placed in a climate state never before experienced in human history,” the paper states.
 
 
Note: This story has been adapted from a news release issued by the National Science Foundation

Hydrocarbons in the deep Earth?

The oil and gas that fuel our homes and cars started out as living organisms that died, were compressed, and heated under heavy layers of sediments in the Earth’s crust. Scientists have debated for years whether some of these hydrocarbons could also have been created deeper in the Earth and formed without organic matter. Now, for the first time, scientists have found that ethane and heavier hydrocarbons can be synthesized under the pressure-temperature conditions of the upper mantle, the layer of Earth between the crust and the core. The research was conducted by scientists at the Carnegie Institution’s Geophysical Laboratory, with colleagues from Russia and Sweden, and is published in the July 26 advance online issue of Nature Geoscience.
Methane (CH4) is the main constituent of natural gas, while ethane (C2H6) is used as a petrochemical feedstock. Both of these hydrocarbons, and others associated with fuel, are called saturated hydrocarbons because they have simple, single bonds and are saturated with hydrogen. Using a diamond anvil cell and a laser heat source, the scientists first subjected methane to pressures exceeding 20 thousand times the atmospheric pressure at sea level and temperatures ranging from 1,300°F to over 2,240°F. These conditions mimic those found 40 to 95 miles deep inside the Earth. The methane reacted and formed ethane, propane, butane, molecular hydrogen, and graphite. The scientists then subjected ethane to the same conditions and it produced methane. The transformations suggest heavier hydrocarbons could exist deep down. The reversibility implies that the synthesis of saturated hydrocarbons is thermodynamically controlled and does not require organic matter.
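As a hedged sanity check on the conditions described, the sketch below converts the reported pressure and temperatures and estimates the equivalent depth with a simple hydrostatic relation; the rock density and the conversion itself are generic textbook values, not figures from the study.

```python
ATM_PA = 101_325        # pascals per standard atmosphere
RHO_MANTLE = 3300.0     # kg/m^3, assumed average density of upper-mantle rock
G = 9.81                # m/s^2

pressure_pa = 20_000 * ATM_PA                  # "20 thousand times" sea-level pressure
print(f"{pressure_pa / 1e9:.1f} GPa")          # ~2.0 gigapascals

# Hydrostatic depth estimate: P = rho * g * h  ->  h = P / (rho * g)
depth_m = pressure_pa / (RHO_MANTLE * G)
print(f"~{depth_m / 1000:.0f} km (~{depth_m / 1609:.0f} miles)")
# ~63 km, or ~39 miles, near the low end of the 40-95 mile range quoted above
# (the experimental pressures "exceeded" 20,000 atmospheres).

def f_to_c(fahrenheit):
    return (fahrenheit - 32) * 5 / 9

print(f"{f_to_c(1300):.0f} C to {f_to_c(2240):.0f} C")   # roughly 700 C to 1,230 C
```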

The scientists ruled out the possibility that catalysts used as part of the experimental apparatus were at work, but they acknowledge that catalysts could be involved in the deep Earth with its mix of compounds.

“We were intrigued by previous experiments and theoretical predictions,” remarked Carnegie’s Alexander Goncharov, a coauthor. “Experiments reported some years ago subjected methane to high pressures and temperatures and found that heavier hydrocarbons formed from methane under very similar pressure and temperature conditions. However, the molecules could not be identified and a distribution was likely. We overcame this problem with our improved laser-heating technique where we could cook larger volumes more uniformly. And we found that methane can be produced from ethane.”

 
The hydrocarbon products did not change for many hours, but the tell-tale chemical signatures began to fade after a few days.

Professor Kutcherov, a coauthor, put the finding into context: “The notion that hydrocarbons generated in the mantle migrate into the Earth’s crust and contribute to oil-and-gas reservoirs was promoted in Russia and Ukraine many years ago. The synthesis and stability of the compounds studied here as well as heavier hydrocarbons over the full range of conditions within the Earth’s mantle now need to be explored. In addition, the extent to which this ‘reduced’ carbon survives migration into the crust needs to be established (e.g., without being oxidized to CO2). These and related questions demonstrate the need for a new experimental and theoretical program to study the fate of carbon in the deep Earth.”
 
Note: This story has been adapted from a news release issued by the Carnegie Institution

Mountain glacier melt to contribute 12 centimeters to world sea-level increases by 2100

A huge piece of ice breaking off the 80m high Glaciar Perito Moreno, El Calafate, Argentina
Melt off from small mountain glaciers and ice caps will contribute about 12 centimetres to world sea-level increases by 2100, according to UBC research published this week in Nature Geoscience.
The largest contributors to projected global sea-level increases are glaciers in Arctic Canada, Alaska and landmass-bound glaciers in the Antarctic. Glaciers in the European Alps, New Zealand, the Caucasus, Western Canada and the Western United States–though small absolute contributors to global sea-level increases–are projected to lose more than 50 per cent of their current ice volume.

The study modelled volume loss and melt off from 120,000 mountain glaciers and ice caps, and is one of the first to provide detailed projections by region. Currently, melt from smaller mountain glaciers and ice caps is responsible for a disproportionally large portion of sea level increases, even though they contain less than one per cent of all water on Earth bound in glacier ice.

“There is a lot of focus on the large ice sheets but very few global scale studies quantifying how much melt to expect from these smaller glaciers that make up about 40 percent of the entire sea-level rise that we observe right now,” says Valentina Radic, a postdoctoral researcher with the Department of Earth and Ocean Sciences and lead author of the study.
Increases in sea levels caused by the melting of the Greenland and Antarctic ice sheets, and the thermal expansion of water, are excluded from the results.
Radic and colleague Regine Hock at the University of Alaska, Fairbanks, modeled future glacier melt based on temperature and precipitation projections from 10 global climate models used by the Intergovernmental Panel on Climate Change.
 

“While the overall sea level increase projections in our study are on par with IPCC studies, our results are more detailed and regionally resolved,” says Radic. “This allows us to get a better picture of projected regional ice volume change and potential impacts on local water supplies, and changes in glacier size distribution.”

Global projections of sea level rise from mountain glacier and ice cap melt from the IPCC range between seven and 17 centimetres by 2100. Radic’s projections are only slightly higher, in the range of seven to 18 centimetres.

Radic’s projections don’t include glacier calving–the production of icebergs. Calving of tide-water glaciers may account for 30 per cent to 40 per cent of their total mass loss.

“Incorporating calving into the models of glacier mass changes on regional and global scale is still a challenge and a major task for future work,” says Radic.

However, the new projections include detailed projection of melt off from small glaciers surrounding the Greenland and Antarctic ice sheets, which have so far been excluded from, or only estimated in, global assessments.
 
Note: This story has been adapted from a news release issued by the University of British Columbia

Sulphur proves important in the formation of gold mines

 
Collaborating with an international research team, an economic geologist from The University of Western Ontario has discovered how gold-rich magma is produced, unveiling an all-important step in the formation of gold mines.
The findings were published in the December issue of Nature Geoscience.
Robert Linnen, the Robert Hodder Chair in Economic Geology in Western’s Department of Earth Sciences, conducts research near Kirkland Lake, Ontario, and says the results of the study could lead to a breakthrough in choosing geographic targets for gold exploration and making exploration more successful.

Noble metals, like gold, are transported by magma from deep within the mantle (below the surface) of the Earth to the shallow crust (the surface), where they form deposits. Through a series of experiments, Linnen and his colleagues from the University of Hannover (Germany), the University of Potsdam (Germany) and Laurentian University found that gold-rich magma can be generated in mantle that also contains high amounts of sulphur.

“Sulphur wasn’t recognized as being that important, but we found it actually enhances gold solubility and solubility is a very important step in forming a gold deposit,” explains Linnen. “In some cases, we were detecting eight times the amount of gold if sulphur was also present.”
Citing the World Gold Council, Linnen says the best estimates available suggest the total volume of gold mined up to the end of 2009 was approximately 165,600 tonnes. Approximately 65 per cent of that total has been mined since 1950.
“All the easy stuff has been found,” offers Linnen. “So when you project to the future, we’re going to have to come up with different ways, different technologies and different philosophies for finding more resources because the demand for resources is ever-increasing.”
Note: This story has been adapted from a news release issued by the University of Western Ontario

Widespread ancient ocean ‘dead zones’ challenged early life

The oceans became oxygen-rich, as they are today, about 600 million years ago, during Earth’s Late Ediacaran Period. Until recently, most scientists believed that for the preceding four billion years the ancient oceans had been relatively oxygen-poor.
Now biogeochemists at the University of California-Riverside (UCR) have found evidence that the oceans went back to being “anoxic,” or oxygen-poor, around 499 million years ago, soon after the first appearance of animals on the planet.
They remained anoxic for two to four million years.
The researchers suggest that such anoxic conditions may have been commonplace over a much broader interval of time.

“This work is important at many levels, from the steady growth of atmospheric oxygen in the last 600 million years, to the potential impact of oxygen level fluctuations on early evolution and diversification of life,” said Enriqueta Barrera, program director in the National Science Foundation (NSF)’s Division of Earth Sciences, which funded the research.

The researchers argue that such fluctuations in the oceans’ oxygen levels are the most likely explanation for what drove the explosive diversification of life forms and rapid evolutionary turnover that marked the Cambrian Period some 540 to 488 million years ago.
They report in this week’s issue of the journal Nature that the transition from a generally oxygen-rich ocean during the Cambrian to the fully oxygenated ocean we have today was not a simple turn of the switch, as has been widely accepted until now.
“Our research shows that the ocean fluctuated between oxygenation states 499 million years ago,” said Timothy Lyons, a UCR biogeochemist and co-author of the paper.
“Such fluctuations played a major, perhaps dominant, role in shaping the early evolution of animals on the planet by driving extinction and clearing the way for new organisms to take their place.”
Oxygen is necessary for animal survival, but not for the many bacteria that thrive in, and even require, oxygen-free conditions.

Understanding how the environment changed over the course of Earth’s history can give scientists clues to how life evolved and flourished during the critical, very early stages of animal evolution.

“Life and the environment in which it lives are intimately linked,” said Benjamin Gill, the first author of the paper, a biogeochemist at UCR, and currently a postdoctoral researcher at Harvard University.

When the ocean’s oxygenation states changed rapidly in Earth’s history, some organisms were not able to cope.
Oceanic oxygen affects cycles of other biologically important elements such as iron, phosphorus and nitrogen.
“Disruption of these cycles is another way to drive biological crises,” Gill said. “A switch to an oxygen-poor state of the ocean can cause major extinction of species.”
The researchers are now working to find an explanation for why the oceans became oxygen-poor about 499 million years ago.
“We have the ‘effect,’ but not the ‘cause,’” said Gill.
“The oxygen-poor state likely persisted until the enhanced burial of organic matter, originally derived from oxygen-producing photosynthesis, resulted in the accumulation of more oxygen in the atmosphere and ocean.
“As a kind of negative feedback, the abundant burial of organic material facilitated by anoxia may have bounced the ocean to a more oxygen-rich state.”
Understanding past events in Earth’s distant history can help refine our view of changes happening on the planet now, said Gill.
“Today, some sections of the world’s oceans are becoming oxygen-poor–the Chesapeake Bay (surrounded by Maryland and Virginia) and the so-called ‘dead zone’ in the Gulf of Mexico are just two examples,” he said.
“We know the Earth went through similar scenarios in the past. Understanding the ancient causes and consequences can provide essential clues to what the future has in store for our oceans.”

The team examined the carbon, sulfur and molybdenum contents of rocks they collected from localities in the United States, Sweden, and Australia.

Combined, these analyses allowed the scientists to infer the amount of oxygen present in the ocean at the time the limestones and shales were deposited.
By looking at successive rock layers, they were able to compile the biogeochemical history of the ocean.
Note: This story has been adapted from a news release issued by the National Science Foundation

Hot stuff: Magma at shallow depth under Hawaii

Ohio State University researchers have found a new way to gauge the depth of the magma chamber that forms the Hawaiian Island volcanic chain, and determined that the magma lies much closer to the surface than previously thought.
The finding could help scientists predict when Hawaiian volcanoes are going to erupt. It also suggests that Hawaii holds great potential for thermal energy.
Julie Ditkof, an honors undergraduate student in earth sciences at Ohio State, described the study at the American Geophysical Union Meeting in San Francisco on Tuesday, December 14.
For her honors thesis, Ditkof took a technique that her advisor Michael Barton, professor of earth sciences, developed to study magma in Iceland, and applied it to Hawaii.
She discovered that magma lies an average of 3 to 4 kilometers (about 1.9 to 2.5 miles) beneath the surface of Hawaii.

“Hawaii was already unique among volcanic systems, because it has such an extensive plumbing system, and the magma that erupts has a unique and variable chemical composition,” Ditkof explained. “Now we know the chamber is at a shallow depth not seen anywhere else in the world.”

For example, Barton determined that magma chambers beneath Iceland lie at an average depth of 20 kilometers.
While that means the crust beneath Hawaii is much thinner than the crust beneath Iceland, Hawaiians have nothing to fear.
“The crust in Hawaii has been solidifying from eruptions for more than 300,000 years now. The crust doesn’t get consumed by the magma chamber. It floats on top,” Ditkof explained.
The results could help settle two scientific debates, however.
Researchers have wondered whether more than one magma chamber was responsible for the varying chemical compositions, even though seismological studies indicated only one chamber was present.
Meanwhile, those same seismological studies pegged the depth as shallow, while petrologic studies – studies of rock composition – pegged it deeper.
There has never been a way to prove who was right, until now.
“We suspected that the depth was actually shallow, but we wanted to confirm or deny all those other studies with hard data,” Barton said.
He and Ditkof determined that there is one large magma chamber just beneath the entire island chain that feeds the Hawaiian volcanoes through many different conduits.
They came to this conclusion after Ditkof analyzed the chemical composition of nearly 1,000 magma samples. From the ratio of some elements to others – aluminum to calcium, for example, or calcium to magnesium – she was able to calculate the pressure at which the magma had crystallized.
For his studies of Iceland, Barton created a methodology for converting those pressure calculations to depth. When Ditkof applied that methodology, she obtained an average depth of 3 to 4 kilometers.
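The release does not spell out Barton’s conversion, so the sketch below uses a generic hydrostatic relation, depth = pressure / (density x gravity), with an assumed crustal density, purely to illustrate the kind of calculation involved.

```python
RHO_CRUST = 2900.0      # kg/m^3, assumed average density of Hawaiian crust
G = 9.81                # m/s^2

def depth_km_from_pressure(pressure_mpa):
    """Depth (km) corresponding to a magma crystallization pressure, using a
    simple hydrostatic column rather than Barton's actual calibration."""
    return pressure_mpa * 1e6 / (RHO_CRUST * G) / 1000

# A crystallization pressure of about 100 MPa (1 kilobar) maps to roughly
# 3.5 km, within the 3-4 km average depth the study reports.
print(round(depth_km_from_pressure(100), 1))
```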
Researchers could use this technique to regularly monitor pressures inside the chamber and make more precise estimates of when eruptions are going to occur.
Barton said that, ultimately, the finding might be more important in terms of energy.
“Hawaii has huge geothermal resources that haven’t been tapped fully,” he said, and quickly added that scientists would have to determine whether tapping that energy was practical – or safe.
“You’d have to drill some test bore holes. That’s dangerous on an active volcano, because then the lava could flow down and wipe out your drilling rig.”
Note: This story has been adapted from a news release issued by the Ohio State University

Ancient raindrops reveal a wave of mountains sent south by sinking Farallon plate

About 50 million years ago, mountains began popping up in southern British Columbia. Over the next 22 million years, a wave of mountain building swept (geologically speaking) down western North America as far south as Mexico and as far east as Nebraska, according to Stanford geochemists. Their findings help put to rest the idea that the mountains mostly developed from a vast, Tibet-like plateau that rose up across most of the western U.S. roughly simultaneously and then subsequently collapsed and eroded into what we see today.
The data providing the insight into the mountains – so popularly renowned for durability – came from one of the most ephemeral of sources: raindrops. Or more specifically, the isotopic residue – fingerprints, effectively – of ancient precipitation that rained down upon the American west between 65 and 28 million years ago.
Atoms of the same element but with different numbers of neutrons in their nucleus are called isotopes. More neutrons make for a heavier atom and as a cloud rises, the water molecules that contain the heavier isotopes of hydrogen and oxygen tend to fall first. By measuring the ratio of heavy to light isotopes in the long-ago rainwater, researchers can infer the elevation of the land when the raindrops fell.
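To make the isotope bookkeeping concrete, the sketch below shows the standard delta notation and an illustrative elevation inference. The isotopic “lapse rate” used is a commonly cited global average of roughly -2.8 per mil of delta-18O per kilometer of elevation; it is an assumption of this example, not a value taken from the study.

```python
# 18O/16O ratio of the ocean-water standard (VSMOW).
VSMOW_18O_16O = 0.0020052

def delta_18o(sample_ratio):
    """Delta notation: per-mil deviation of a sample's 18O/16O from the standard."""
    return (sample_ratio / VSMOW_18O_16O - 1.0) * 1000.0

# Assumed isotopic lapse rate: change in delta-18O per kilometer of elevation.
DELTA_PER_KM = -2.8

def implied_uplift_km(delta_shift_permil):
    """Elevation change implied by a shift in the delta-18O of ancient rainfall."""
    return delta_shift_permil / DELTA_PER_KM

# A shift of -4 per mil between soil-carbonate layers would imply roughly
# 1.4 km of surface uplift under these assumptions.
print(round(implied_uplift_km(-4.0), 1))
```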
 
The water becomes incorporated into clays and carbonate minerals on the surface, or in volcanic glass, which are then preserved for the ages in the sediments.
 
Hari Mix, a PhD candidate in Environmental Earth System Science at Stanford, worked with the analyses of about 2,800 samples – several hundred that he and his colleagues collected, the rest from published studies – and used the isotopic ratios to calculate the composition of the ancient rain. Most of the samples were from carbonate deposits in ancient soils and lake sediments, taken from dozens of basins around the western U.S.
 
Using the elevation trends revealed in the data, Mix was able to decipher the history of the mountains. “Where we got a huge jump in isotopic ratios, we interpret that as a big uplift,” he said.

“We saw a major isotopic shift at around 49 million years ago, in southwest Montana,” he said. “And another one at 39 mya, in northern Nevada” as the uplift moved southward. Previous work by Chamberlain’s group had found evidence for these shifts in data from two basins, but Mix’s work with the larger data set demonstrated that the pattern of uplift held across the entire western U.S.
 
The uplift is generally agreed to have begun when the Farallon plate, a tectonic plate that was being shoved under the North American plate, slowly began peeling away from the underside of the continent.

“The peeling plate looked sort of like a tongue curling down,” said Page Chamberlain, a professor in environmental Earth system science who is Mix’s advisor.
 
As hot material from the underlying mantle flowed into the gap between the peeling plates, the heat and buoyancy of the material caused the overlying land to rise in elevation. The peeling tongue continued to fall off, and hot mantle continued to flow in behind it, sending a slow-motion wave of mountain-building coursing southward.
“We knew that the Farallon plate fell away, but the geometry of how that happened and the topographic response to it is what has been debated,” Mix said.
 
Mix and Chamberlain estimate that the topographic wave would have been at least one to two kilometers higher than the landscape it rolled across and would have produced mountains with elevations up to a little over 4 kilometers (about 14,000 feet), comparable to the elevations existing today.
 
Mix said their isotopic data corresponds well with other types of evidence that have been documented.

“There was a big north to south sweep of volcanism through the western U.S. at the exact same time,” he said.
There was also a simultaneous extension of the Earth’s crust, which results when the crust is heated from below, as it would have been by the flow of hot mantle material under the North American plate.

“The pattern of topographic uplift we found matches what has been documented by other people in terms of the volcanology and extension,” Mix said.

“Those three things together, those patterns, all point to something going on with the Farallon plate as being responsible for the construction of the western mountain ranges, the Cordillera.”
Chamberlain said that while there was certainly elevated ground, it was not like Tibet.
“It was not an average elevation of 15,000 feet. It was something much more subdued,” he said.

“The main implication of this work is that it was not a plateau that collapsed, but rather something that happened in the mantle, that was causing this mountain growth,” Chamberlain said.
 
Note: This story has been adapted from a news release issued by Stanford University

First measurement of magnetic field in Earth’s core

A University of California, Berkeley, geophysicist has made the first-ever measurement of the strength of the magnetic field inside Earth’s core, 1,800 miles underground.

The magnetic field strength is 25 Gauss, or 50 times stronger than the magnetic field at the surface that makes compass needles align north-south. Though this number is in the middle of the range geophysicists predict, it puts constraints on the identity of the heat sources in the core that keep the internal dynamo running to maintain this magnetic field.


“This is the first really good number we’ve had based on observations, not inference,” said author Bruce A. Buffett, professor of earth and planetary science at UC Berkeley. “The result is not controversial, but it does rule out a very weak magnetic field and argues against a very strong field.”

The results are published in the Dec. 16 issue of the journal Nature.
 
A strong magnetic field inside the outer core means there is a lot of convection and thus a lot of heat being produced, which scientists would need to account for, Buffett said. The presumed sources of energy are the residual heat from 4 billion years ago when the planet was hot and molten, release of gravitational energy as heavy elements sink to the bottom of the liquid core, and radioactive decay of long-lived elements such as potassium, uranium and thorium.
 
A weak field – 5 Gauss, for example – would imply that little heat is being supplied by radioactive decay, while a strong field, on the order of 100 Gauss, would imply a large contribution from radioactive decay.
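For readers who prefer SI units, a small unit check on the field strengths quoted here (conversion only; nothing from Buffett’s analysis itself):

```python
GAUSS_TO_TESLA = 1e-4

core_field_gauss = 25.0
surface_field_gauss = core_field_gauss / 50.0   # "50 times stronger than at the surface"

print(core_field_gauss * GAUSS_TO_TESLA)        # 0.0025 T, i.e. 2.5 millitesla
print(surface_field_gauss)                      # ~0.5 gauss at Earth's surface

# The weak and strong scenarios discussed, 5 and 100 gauss, correspond to
# 0.5 and 10 millitesla respectively.
```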

“A measurement of the magnetic field tells us what the energy requirements are and what the sources of heat are,” Buffett said.
 
About 60 percent of the power generated inside the earth likely comes from the exclusion of light elements from the solid inner core as it freezes and grows, he said. This constantly builds up crud in the outer core.
 
The Earth’s magnetic field is produced in the outer two-thirds of the planet’s iron/nickel core. This outer core, about 1,400 miles thick, is liquid, while the inner core is a frozen iron and nickel wrecking ball with a radius of about 800 miles – roughly the size of the moon. The core is surrounded by a hot, gooey mantle and a rigid surface crust.
 
The cooling Earth originally captured its magnetic field from the planetary disk in which the solar system formed. That field would have disappeared within 10,000 years if not for the planet’s internal dynamo, which regenerates the field thanks to heat produced inside the planet. The heat makes the liquid outer core boil, or “convect,” and as the conducting metals rise and then sink through the existing magnetic field, they create electrical currents that maintain the magnetic field. This roiling dynamo produces a slowly shifting magnetic field at the surface.

“You get changes in the surface magnetic field that look a lot like gyres and flows in the oceans and the atmosphere, but these are being driven by fluid flow in the outer core,” Buffett said.
 
Buffett is a theoretician who uses observations to improve computer models of the earth’s internal dynamo. Now at work on a second generation model, he admits that a lack of information about conditions in the earth’s interior has been a big hindrance to making accurate models.
 
 
He realized, however, that the tug of the moon on the tilt of the earth’s spin axis could provide information about the magnetic field inside. This tug would make the inner core precess – that is, make the spin axis slowly rotate in the opposite direction – which would produce magnetic changes in the outer core that damp the precession. Radio observations of distant quasars – extremely bright, active galaxies – provide very precise measurements of the changes in the earth’s rotation axis needed to calculate this damping.

“The moon is continually forcing the rotation axis of the core to precess, and we’re looking at the response of the fluid outer core to the precession of the inner core,” he said.
 
By calculating the effect of the moon on the spinning inner core, Buffett discovered that the precession makes the slightly out-of-round inner core generate shear waves in the liquid outer core. These waves of molten iron and nickel move within a tight cone only 30 to 40 meters thick, interacting with the magnetic field to produce an electric current that heats the liquid. This serves to damp the precession of the rotation axis. The damping causes the precession to lag behind the moon as it orbits the earth. A measurement of the lag allowed Buffett to calculate the magnitude of the damping and thus of the magnetic field inside the outer core.
 
Buffett noted that the calculated field – 25 Gauss – is an average over the entire outer core. The field is expected to vary with position.
“I still find it remarkable that we can look to distant quasars to get insights into the deep interior of our planet,” Buffett said.
 
Note: This story has been adapted from a news release issued by the University of California – Berkeley

New way found of monitoring volcanic ash cloud

The eruption of the Icelandic volcano Eyjafjallajökull in April this year produced a giant ash cloud that, at one point covering most of Europe, brought international aviation to a temporary standstill and caused travel chaos for tens of thousands of travellers.
New research, to be published today, Friday 10 December, in IOP Publishing’s Environmental Research Letters, shows that lightning could be used as part of an integrated approach to estimate volcanic plume properties.
The scientists found that during many of the periods of significant volcanic activity, the ash plume was sufficiently electrified to generate lightning, which was measured by the UK Met Office’s long range lightning location network (ATDnet), operating in the Very Low Frequency radio spectrum.
The measurements suggest a general correlation between lightning frequency and plume height and the method has the advantage of being detectable many thousands of kilometers away, in both day and night as well as in all weather conditions.

As the researchers write, “When a plume becomes sufficiently electrified to produce lightning, the rate of lightning generation provides a method of remotely monitoring the plume height, offering clear benefits to the volcanic monitoring community.”
 
Note: This story has been adapted from a news release issued by the Institute of Physics

Using chaos to model geophysical phenomena

Geophysical phenomena such as the dynamics of the atmosphere and ocean circulation are typically modeled mathematically by tracking the motion of air or water particles. These mathematical models define velocity fields that, given (i) a position in three-dimensional space and (ii) a time instant, provide a speed and direction for a particle at that position and time instant.
“Geophysical phenomena are still not fully understood, especially in turbulent regimes,” explains Gary Froyland at the School of Mathematics and Statistics and the Australian Research Council Centre of Excellence for Mathematics and Statistics of Complex Systems (MASCOS) at the University of New South Wales in Australia.

“Nevertheless, it is very important that scientists can quantify the ‘transport’ properties of these geophysical systems: Put very simply, how does a packet of air or water get from A to B, and how large are these packets? An example of one of these packets is the Antarctic polar vortex, a rotating mass of air in the stratosphere above Antarctica that traps chemicals such as ozone and chlorofluorocarbons (CFCs), exacerbating the effect of the CFCs on the ozone hole,” Froyland says.

 
In the American Institute of Physics’ journal CHAOS, Froyland and his research team, including colleague Adam Monahan from the School of Earth and Ocean Sciences at the University of Victoria in Canada, describe how they developed the first direct approach for identifying these packets, called “coherent sets” due to their nondispersive properties.
This technique is based on so-called “transfer operators,” which represent a complete description of the ensemble evolution of the fluid. The transfer operator approach is very simple to implement, they say, requiring only singular vector computations of a matrix of transitions induced by the dynamics.
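A minimal sketch of that kind of computation is given below, assuming particle positions from a flow model have already been binned into boxes at an initial and a final time; the construction is generic (build the box-to-box transition matrix, then take its leading singular vectors) and is not the authors’ actual code.

```python
import numpy as np

def transition_matrix(start_boxes, end_boxes, n_boxes):
    """Row-stochastic matrix P[i, j]: fraction of particles starting in box i
    that end up in box j after the flow acts for some time interval."""
    P = np.zeros((n_boxes, n_boxes))
    for i, j in zip(start_boxes, end_boxes):
        P[i, j] += 1.0
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# Toy example: 1,000 particles in 50 boxes, with random box labels standing in
# for real trajectories from an atmosphere or ocean model.
rng = np.random.default_rng(0)
start = rng.integers(0, 50, size=1000)
end = rng.integers(0, 50, size=1000)

P = transition_matrix(start, end, 50)
U, s, Vt = np.linalg.svd(P)

# The second singular vectors split the boxes into two groups: strongly
# positive versus strongly negative entries flag candidate coherent sets at
# the start time (U[:, 1]) and at the end time (Vt[1, :]).
coherent_start = np.where(U[:, 1] > 0)[0]
coherent_end = np.where(Vt[1, :] > 0)[0]
print(len(coherent_start), len(coherent_end))
```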
 
When tested using European Centre for Medium Range Weather Forecasting (ECMWF) data, they found that their new methodology was significantly better than existing technologies for identifying the location and transport properties of the vortex.
 
The transfer operator methodology has myriad applications in atmospheric science and physical oceanography to discover the main transport pathways in the atmosphere and oceans, and to quantify the transport. “As atmosphere-ocean models continue to increase in resolution with improved computing power, the analysis and understanding of these models with techniques such as transfer operators must be undertaken beyond pure simulation,” says Froyland.
 
Their next application will be the Agulhas rings off the South African coast, because the rings are responsible for a significant amount of transport of warm water and salt between the Indian and Atlantic Oceans.
 
Note: This story has been adapted from a news release issued by the American Institute of Physics

New research shows rivers cut deep notches in the Alps’ broad glacial valleys

For years, geologists have argued about the processes that formed steep inner gorges in the broad glacial valleys of the Swiss Alps.
 
The U-shaped valleys were created by slow-moving glaciers that behaved something like road graders, eroding the bedrock over hundreds or thousands of years. When the glaciers receded, rivers carved V-shaped notches, or inner gorges, into the floors of the glacial valleys. But scientists disagreed about whether those notches were erased by subsequent glaciers and then formed all over again as the second round of glaciers receded.
New research led by a University of Washington scientist indicates that the notches endure, at least in part, from one glacial episode to the next. The glaciers appear to fill the gorges with ice and rock, protecting them from being scoured away as the glaciers move.
When the glaciers receded, the resulting rivers returned to the gorges and easily cleared out the debris deposited there, said David Montgomery, a UW professor of Earth and space sciences.

“The alpine inner gorges appear to lay low and endure glacial attack. They are topographic survivors,” Montgomery said.

“The answer is not so simple that the glaciers always win. The river valleys can hide under the glaciers and when the glaciers melt the rivers can go back to work.”
 
Montgomery is lead author of a paper describing the research, published online Dec. 5 in Nature Geoscience. Co-author is Oliver Korup of the University of Potsdam in Germany, who did the work while with the Swiss Federal Research Institutes in Davos, Switzerland.
 
The researchers used topographic data taken from laser-based (LIDAR) measurements to determine that, if the gorges were erased with each glacial episode, the rivers would have had to erode the bedrock from one-third to three-quarters of an inch per year since the last glacial period to get gorges as deep as they are today.
“That is screamingly fast. It’s really too fast for the processes,” Montgomery said. Such erosion rates would exceed those in all areas of the world except the most tectonically active regions, the researchers said, and they would have to maintain those rates for 1,000 years.
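For reference, converting the quoted incision rates to metric units (simple arithmetic on the figures above):

```python
INCH_MM = 25.4

low = INCH_MM / 3        # one-third of an inch per year
high = INCH_MM * 0.75    # three-quarters of an inch per year
print(f"{low:.0f} to {high:.0f} mm per year")   # roughly 8 to 19 mm per year
```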
 
Montgomery and Korup found other telltale evidence, sediment from much higher elevations and older than the last glacial deposits, at the bottom of the river gorges. That material likely was pushed into the gorges as glaciers moved down the valleys, indicating the gorges formed before the last glaciers.

“That means the glaciers aren’t cutting down the bedrock as fast as the rivers do. If the glaciers were keeping up, each time they’d be able to erase the notch left by the river,” Montgomery said.
 
“They’re locked in this dance, working together to tear the mountains down.”
The work raises questions about how common the preservation of gorges might be in other mountainous regions of the world.

“It shows that inner gorges can persist, and so the question is, ‘How typical is that?’ I don’t think every inner gorge in the world survives multiple glaciations like that, but the Swiss Alps are a classic case. That’s where mountain glaciation was first discovered.”
 
Note: This story has been adapted from a news release issued by the University of Washington

SCEC’s ‘M8’ earthquake simulation breaks computational records, promises better quake models

A multi-disciplinary team of researchers has presented the world’s most advanced earthquake shaking simulation at the Supercomputing 2010 (SC10) conference held this week in New Orleans. The research was selected as a finalist for the Gordon Bell prize, awarded at the annual conference for outstanding achievement in high-performance computing applications.

The “M8” simulation models how a magnitude 8.0 earthquake on the southern San Andreas Fault would shake the region, covering a larger area in greater detail than previously possible. Perhaps most importantly, the development of the M8 simulation advances the state of the art in the speed and efficiency with which such calculations can be performed.

The Southern California Earthquake Center (SCEC) at the University of Southern California (USC) was the lead coordinator in the project. San Diego Supercomputer Center (SDSC) researchers provided the high-performance computing and scientific visualization expertise for the simulation. Scientific details of the earthquake were developed by scientists at San Diego State University (SDSU). Ohio State University (OSU) researchers were also part of the collaborative effort to improve the efficiency of the software involved.
While this specific earthquake has a low probability of occurrence, the improvements in technology required to produce this simulation will now allow scientists to simulate other, more likely earthquake scenarios in much less time than previously required. Because such simulations are the most important and widespread applications of high performance computing for seismic hazard estimation currently in use, the SCEC team has been focused on optimizing the technologies and codes needed to create them.
The M8 simulation was funded through a number of National Science Foundation (NSF) grants and it was performed using supercomputer resources including NSF’s Kraken supercomputer at the National Institute for Computational Sciences (NICS) and the Department of Energy (DOE) Jaguar supercomputer at the National Center for Computational Sciences. The SCEC M8 simulation represents the latest in earthquake science and in computations at the petascale level, which refers to supercomputers capable of more than one quadrillion floating point operations (calculations) per second.
“Petascale simulations such as this one are needed to understand the rupture and wave dynamics of the largest earthquakes, at shaking frequencies required to engineer safe structures,” said Thomas Jordan, director of SCEC and Principal Investigator for the project. Previous simulations were useful only for modeling how tall structures will behave in earthquakes, but the new simulation can be used to understand how a broader range of buildings will respond.
“The scientific results of this massive simulation are very interesting, and its level of detail has allowed us to observe things that we were not able to see in the past,” said Kim Olsen, professor of geological sciences at SDSU and lead seismologist of the study.
However, given the massive number of calculations required, only the most advanced supercomputers are capable of producing such simulations in a reasonable time period. “This M8 simulation represents a milestone calculation, a breakthrough in seismology both in terms of computational size and scalability,” said Yifeng Cui, a computational scientist at SDSC. “It’s also the largest and most detailed simulation of a major earthquake ever performed in terms of floating point operations, and opens up new territory for earthquake science and engineering with the goal of reducing the potential for loss of life and property.”
Specifically, the M8 simulation is the largest in terms of the duration of shaking modeled (six minutes) and the geographical area covered – a rectangular volume approximately 500 miles (810 km) long by 250 miles (405 km) wide by 50 miles (85 km) deep. The team’s latest research also set a new record in the number of computer processor cores used, with 223,074 cores sustaining performance of 220 trillion calculations per second for 24 hours on the Jaguar Cray XT5 supercomputer at the Oak Ridge National Laboratory (ORNL) in Tennessee.
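Some back-of-envelope arithmetic on those performance figures, derived only from the numbers quoted above:

```python
sustained_flops = 220e12     # 220 trillion calculations per second
run_seconds = 24 * 3600      # the 24-hour run on Jaguar

total_ops = sustained_flops * run_seconds
print(f"{total_ops:.1e} floating point operations in total")   # ~1.9e19

cores = 223_074
print(f"~{sustained_flops / cores / 1e9:.1f} gigaflops per core on average")  # ~1.0
```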
“We have come a long way in just six years, doubling the seismic frequencies modeled by our simulations every two to three years, from 0.5 Hertz (or cycles per second) in the TeraShake simulations, to 1.0 Hertz in the ShakeOut simulations, and now to 2.0 Hertz in this latest project,” said Phil Maechling, SCEC’s associate director for Information Technology.
In terms of earthquake science, these simulations can be used to study issues of how earthquake waves travel through structures in the earth’s crust and to improve three-dimensional models of such structures.
“Based on our calculations, we are finding that deep sedimentary basins, such as those in the Los Angeles area, are getting larger shaking than is predicted by the standard methods,” Jordan said. “By improving the predictions, making them more realistic, we can help engineers make new buildings safer.” The simulations are also useful in developing better seismic hazard policies and for improving scenarios used in emergency planning.
Note: This story has been adapted from a news release issued by the University of Southern California

Scientists look deeper for coal ash hazards

As the U.S. Environmental Protection Agency weighs whether to define coal ash as hazardous waste, a Duke University study identifies new monitoring protocols and insights that can help investigators more accurately measure and predict the ecological impacts of coal ash contaminants.

“The take-away lesson is we need to change how and where we look for coal ash contaminants,” says Avner Vengosh, professor of geochemistry and water quality at Duke’s Nicholas School of the Environment. “Risks to water quality and aquatic life don’t end with surface water contamination, but much of our current monitoring does.”

The study, published online this week in the peer-reviewed journal Environmental Science and Technology, documents contaminant levels in aquatic ecosystems over an 18-month period following a massive coal sludge spill in 2008 at a Tennessee Valley Authority power plant in Kingston, Tenn.

By analyzing more than 220 water samples collected over the 18-month period, the Duke team found that high concentrations of arsenic from the TVA coal ash remained in pore water — water trapped within river-bottom sediment — long after contaminant levels in surface waters dropped back below safe thresholds.
Samples extracted from 10 centimeters to half a meter below the surface of sediment in downstream rivers contained arsenic levels of up to 2,000 parts per billion – well above the EPA’s thresholds of 10 parts per billion for safe drinking water and 150 parts per billion for protection of aquatic life.

“It’s like cleaning your house,” Vengosh says of the finding. “Everything may look clean, but if you look under the rugs, that’s where you find the dirt.”

The potential impacts of pore water contamination extend far beyond the river bottom, he explains, because “this is where the biological food chain begins, so any bioaccumulation of toxins will start here.”

The research team, which included two graduate students from Duke’s Nicholas School of the Environment and Pratt School of Engineering, also found that acidity and the loss or gain of oxygen in water play key roles in controlling how arsenic, selenium and other coal ash contaminants leach into the environment. Knowing this will help scientists better predict the fate and migration of contaminants derived from coal ash residues, particularly those stored in holding ponds and landfills, as well as any potential leakage into lakes, rivers and other aquatic systems.

The study comes as the EPA is considering whether to define ash from coal-burning power plants as hazardous waste. The deadline for public comment to the EPA was Nov. 19; a final ruling — what Vengosh calls “a defining moment” — is expected in coming months.

“At more than 3.7 million cubic meters, the scope of the TVA spill is unprecedented, but similar processes are taking place in holding ponds, landfills and other coal ash storage facilities across the nation,” he says. “As long as coal ash isn’t regulated as hazardous waste, there is no way to prevent discharges of contaminants from these facilities and protect the environment.”

Note: This story has been adapted from a news release issued by Duke University
