
Scientists see deeper Yellowstone magma

A new University of Utah study in the journal Science provides the first complete view of the plumbing system that supplies hot and partly molten rock from the Yellowstone hotspot to the Yellowstone supervolcano. The study revealed a gigantic magma reservoir beneath the previously known magma chamber. This cross-section illustration, cutting southwest to northeast under Yellowstone, depicts the view revealed by seismic imaging. Seismologists say new techniques have provided a better view of Yellowstone’s plumbing system, and that it hasn’t grown larger or closer to erupting. They estimate the annual chance of a Yellowstone supervolcano eruption is 1 in 700,000. Credit: Hsin-Hua Huang, University of Utah

University of Utah seismologists discovered and made images of a reservoir of hot, partly molten rock 12 to 28 miles beneath the Yellowstone supervolcano, and it is 4.4 times larger than the shallower, long-known magma chamber.

The hot rock in the newly discovered, deeper magma reservoir would fill the 1,000-cubic-mile Grand Canyon 11.2 times, while the previously known magma chamber would fill the Grand Canyon 2.5 times, says postdoctoral researcher Jamie Farrell, a co-author of the study published online today in the journal Science.

“For the first time, we have imaged the continuous volcanic plumbing system under Yellowstone,” says first author Hsin-Hua Huang, also a postdoctoral researcher in geology and geophysics. “That includes the upper crustal magma chamber we have seen previously plus a lower crustal magma reservoir that has never been imaged before and that connects the upper chamber to the Yellowstone hotspot plume below.”

Contrary to popular perception, the magma chamber and magma reservoir are not full of molten rock. Instead, the rock is hot, mostly solid and spongelike, with pockets of molten rock within it. Huang says the new study indicates the upper magma chamber averages about 9 percent molten rock — consistent with earlier estimates of 5 percent to 15 percent melt — and the lower magma reservoir is about 2 percent melt.

So either body, the magma chamber or the much larger magma reservoir, contains only about one-quarter of a Grand Canyon’s worth of molten rock within its total volume, Farrell says.
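The volume figures quoted above are easy to sanity-check with a few lines of arithmetic. A minimal sketch, using only the article’s numbers (the ~9 percent and ~2 percent melt fractions and the 1,000-cubic-mile Grand Canyon figure):

```python
# Back-of-the-envelope check of the melt volumes quoted above.
# All volumes in cubic miles; melt fractions are the study's estimates.
GRAND_CANYON = 1_000               # article's figure for the Grand Canyon

chamber = 2_500                    # upper magma chamber (fills the canyon 2.5x)
reservoir = 11_200                 # deeper magma reservoir (fills it 11.2x)

chamber_melt = chamber * 0.09      # ~9 percent melt
reservoir_melt = reservoir * 0.02  # ~2 percent melt

print(chamber_melt / GRAND_CANYON)    # ~0.225 of a Grand Canyon
print(reservoir_melt / GRAND_CANYON)  # ~0.224 of a Grand Canyon
```

Both bodies work out to roughly a quarter of a Grand Canyon of actual melt, which is the comparison Farrell makes.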

No increase in the danger

The researchers emphasize that Yellowstone’s plumbing system is no larger — nor closer to erupting — than before, only that they now have used advanced techniques to make a complete image of the system that carries hot and partly molten rock upward from the top of the Yellowstone hotspot plume — about 40 miles beneath the surface — to the magma reservoir and the magma chamber above it.

“The magma chamber and reservoir are not getting any bigger than they have been, it’s just that we can see them better now using new techniques,” Farrell says.

Study co-author Fan-Chi Lin, an assistant professor of geology and geophysics, says: “It gives us a better understanding of the Yellowstone magmatic system. We can now use these new models to better estimate the potential seismic and volcanic hazards.”

The researchers point out that the previously known upper magma chamber was the immediate source of three cataclysmic eruptions of the Yellowstone caldera 2 million, 1.2 million and 640,000 years ago, and that isn’t changed by discovery of the underlying magma reservoir that supplies the magma chamber.

“The actual hazard is the same, but now we have a much better understanding of the complete crustal magma system,” says study co-author Robert B. Smith, a research and emeritus professor of geology and geophysics at the University of Utah.

The three supervolcano eruptions at Yellowstone — on the Wyoming-Idaho-Montana border — covered much of North America in volcanic ash. A supervolcano eruption today would be cataclysmic, but Smith says the annual chance is 1 in 700,000.
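Smith’s 1-in-700,000 annual figure can be put in longer-term perspective with a line of arithmetic. A quick sketch, where the 100-year window is an illustrative choice rather than a figure from the study:

```python
# Compound the study's 1-in-700,000 annual eruption chance over a century.
annual_p = 1 / 700_000
years = 100  # illustrative window; not a figure from the study

# Probability of at least one eruption across `years` independent years:
century_p = 1 - (1 - annual_p) ** years
print(round(century_p, 6))  # ~0.000143, roughly 1 chance in 7,000 per century
```

Even over a century, the compounded probability stays near one hundredth of one percent, which is why the researchers stress that the hazard is unchanged.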

Before the new discovery, researchers had envisioned partly molten rock moving upward from the Yellowstone hotspot plume via a series of vertical and horizontal cracks, known as dikes and sills, or as blobs. They still believe such cracks move hot rock from the plume head to the magma reservoir and from there to the shallow magma chamber.

Anatomy of a supervolcano

The study in Science is titled, “The Yellowstone magmatic system from the mantle plume to the upper crust.” Huang, Lin, Farrell and Smith conducted the research with Brandon Schmandt at the University of New Mexico and Victor Tsai at the California Institute of Technology. Funding came from the University of Utah, National Science Foundation, Brinson Foundation and William Carrico.

Yellowstone is among the world’s largest supervolcanoes, with frequent earthquakes and Earth’s most vigorous continental geothermal system.

The three ancient Yellowstone supervolcano eruptions were only the latest in a series of more than 140 as the North American plate of Earth’s crust and upper mantle moved southwest over the Yellowstone hotspot, starting 17 million years ago at the Oregon-Idaho-Nevada border. The hotspot eruptions progressed northeast before reaching Yellowstone 2 million years ago.

Here is how the new study depicts the Yellowstone system, from bottom to top:

  • Previous research has shown the Yellowstone hotspot plume rises from a depth of at least 440 miles in Earth’s mantle. Some researchers suspect it originates 1,800 miles deep at Earth’s core. The plume rises from the depths northwest of Yellowstone. The plume conduit is roughly 50 miles wide as it rises through Earth’s mantle and then spreads out like a pancake as it hits the uppermost mantle about 40 miles deep. Earlier Utah studies indicated the plume head was 300 miles wide. The new study suggests it may be smaller, but the data aren’t good enough to know for sure.
  • Hot and partly molten rock rises in dikes from the top of the plume at 40 miles depth up to the bottom of the 11,200-cubic-mile magma reservoir, about 28 miles deep. The top of this newly discovered blob-shaped magma reservoir is about 12 miles deep, Huang says. The reservoir measures 30 miles northwest to southeast and 44 miles southwest to northeast. “Having this lower magma body resolved the missing link of how the plume connects to the magma chamber in the upper crust,” Lin says.
  • The 2,500-cubic-mile upper magma chamber sits beneath Yellowstone’s 40-by-25-mile caldera, or giant crater. Farrell says it is shaped like a gigantic frying pan about 3 to 9 miles beneath the surface, with a “handle” rising to the northeast. The chamber is about 19 miles from northwest to southeast and 55 miles southwest to northeast. The handle is the shallowest, long part of the chamber that extends 10 miles northeast of the caldera.

Scientists once thought the shallow magma chamber was 1,000 cubic miles. But at science meetings and in a published paper this past year, Farrell and Smith showed the chamber was 2.5 times bigger than once thought. That has not changed in the new study.

Discovery of the magma reservoir below the magma chamber solves a longstanding mystery: why Yellowstone’s soil and geothermal features emit more carbon dioxide than can be explained by gases from the magma chamber alone, Huang says. Farrell says a deeper magma reservoir had been hypothesized because of the excess carbon dioxide, which comes from molten and partly molten rock.

A better, deeper look at Yellowstone

As with past studies that made images of Yellowstone’s volcanic plumbing, the new study used seismic imaging, which is somewhat like a medical CT scan but uses earthquake waves instead of X-rays to distinguish rock of various densities. Quake waves go faster through cold rock, and slower through hot and molten rock.
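The CT-scan analogy comes down to travel times: waves crossing hot rock arrive late, and many such delays are combined into an image. A toy sketch of the principle, where the path length and wave speeds are illustrative values rather than numbers from the study:

```python
# A ray crossing hot, partly molten rock arrives late relative to the same
# path through cold rock; tomography turns many such delays into an image.
path_km = 100.0  # illustrative ray-path length
v_cold = 6.0     # km/s, an assumed P-wave speed in cold crust
v_hot = 5.4      # km/s, ~10 percent slower through hot, partly molten rock

t_cold = path_km / v_cold  # ~16.7 s through cold rock
t_hot = path_km / v_hot    # ~18.5 s through hot rock
delay = t_hot - t_cold     # ~1.9 s late arrival flags a slow (hot) anomaly
```

Real tomography inverts thousands of such crossing-path delays simultaneously to locate the slow, hot regions in three dimensions.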

For the new study, Huang developed a technique to combine two kinds of seismic information: Data from local quakes detected in Utah, Idaho, the Teton Range and Yellowstone by the University of Utah Seismograph Stations and data from more distant quakes detected by the National Science Foundation-funded EarthScope array of seismometers, which was used to map the underground structure of the lower 48 states.

The Utah seismic network has closely spaced seismometers that are better at making images of the shallower crust beneath Yellowstone, while EarthScope’s seismometers are better at making images of deeper structures.

“It’s a technique combining local and distant earthquake data to get a better look at this lower crustal magma reservoir,” Huang says.

Reference:
Hsin-Hua Huang, Fan-Chi Lin, Brandon Schmandt, Jamie Farrell, Robert B. Smith, Victor C. Tsai. The Yellowstone magmatic system from the mantle plume to the upper crust. Science, 2015 DOI: 10.1126/science.aaa5648

Note: The above story is based on materials provided by University of Utah.

Researchers map genomes of woolly mammoths, raising possibility of bringing them back

Mammoth remains found on the Taimyr Peninsula in Siberia. An international team of researchers has sequenced the genome of the mammoth, offering new information on what may have led to its extinction at the end of the last Ice Age. Photo by Debi Poinar

An international team of researchers has sequenced the nearly complete genome of two Siberian woolly mammoths — revealing the most complete picture to date — including new information about the species’ evolutionary history and the conditions that led to its mass extinction at the end of the Ice Age.

“This discovery means that recreating extinct species is a much more real possibility, one we could in theory realize within decades,” says evolutionary geneticist Hendrik Poinar, director of the Ancient DNA Centre at McMaster University and a researcher at the Institute for Infectious Disease Research, the senior Canadian scientist on the project.

“With a complete genome and this kind of data, we can now begin to understand what made a mammoth a mammoth — when compared to an elephant — and some of the underlying causes of their extinction which is an exceptionally difficult and complex puzzle to solve,” he says.

While scientists have long argued that climate change and human hunting were major factors behind the mammoth’s extinction, the new data suggests multiple factors were at play over their long evolutionary history.

Researchers from McMaster, Harvard Medical School, the Swedish Museum of Natural History, Stockholm University and others produced high-quality genomes from specimens taken from the remains of two male woolly mammoths, which lived about 40,000 years apart.

One had lived in northeastern Siberia and is estimated to be nearly 45,000 years old. The other, believed to be from one of the last surviving mammoth populations, lived approximately 4,300 years ago on Russia’s Wrangel Island, located in the Arctic Ocean.

“We found that the genome from one of the world’s last mammoths displayed low genetic variation and a signature consistent with inbreeding, likely due to the small number of mammoths that managed to survive on Wrangel Island during the last 5,000 years of the species’ existence,” says Love Dalén, an associate professor of Bioinformatics and Genetics at the Swedish Museum of Natural History.

Scientists used sophisticated technology to tease bits and pieces of highly fragmented DNA from the ancient specimens, which they then used to sequence the genomes. Through careful analysis, they determined the animal populations had suffered and recovered from a significant setback roughly 250,000 to 300,000 years ago. However, say researchers, another severe decline occurred in the final days of the Ice Age, marking the end.

“The dates on these current samples suggest that when Egyptians were building pyramids, there were still mammoths living on these islands,” says Poinar. “Having this quality of data can help with our understanding of the evolutionary dynamics of elephants in general and possible efforts at de-extinction.”

The latest research is the continuation of the pioneering work Poinar and his team began in 2006, when they first mapped a partial mammoth genome, using DNA extracted from carcasses found in permafrost in the Yukon and Siberia.

The study is published online in the Cell Press journal Current Biology.

Video

Reference:
Palkopoulou et al. Complete genomes reveal signatures of demographic and genetic declines in the woolly mammoth. Current Biology, 2015 DOI: 10.1016/j.cub.2015.04.007

Note: The above story is based on materials provided by McMaster University. The original article was written by Michelle Donovan.

Looking to fossils to predict tooth evolution in rodents

This is the skull of a Laotian rock rat. Over evolutionary time, rodent molars have become taller. Credit: Vagan Tapaltsyan and Ophir Klein

Fifty million years ago, all rodents had short, stubby molars–teeth similar to those found in the back of the human mouth, used for grinding food. Over time, rodent teeth progressively evolved to become taller, and some rodent species even evolved continuously growing molar teeth. A new study publishing April 23 in the journal Cell Reports predicts that most rodent species will have ever-growing molars in the far distant future.

“Our analyses and simulations point towards a gradual evolution of taller teeth, and in our future studies we will explore whether tinkering with the genetic mechanisms of tooth formation in lab mice–which have short molar teeth–will replicate the evolution of taller teeth,” says co-senior author Ophir Klein, an associate professor at the University of California San Francisco School of Dentistry.

For their research, Dr. Klein and his colleagues used fossil data from thousands of extinct rodent species to study the evolution of dental stem cells, which are required for continuous tooth growth. They found evidence that most of the species possess the potential for acquiring dental stem cells, and that the final developmental step on the path toward continuously growing teeth may be quite small. “Just studying how molars become taller should tell us about the first steps in the arrival of stem cells,” Klein says.

The team’s computer simulations predict that rodents with continuously growing teeth and active stem cell reserves will eventually outcompete all other rodent species, whose teeth have a finite length. This won’t likely apply to people, however.

“As we humans have short teeth, evolutionarily speaking we would have to go through multiple steps that would take millions of years before we could acquire continuously growing teeth. Obviously, this is not something that would happen as long as we cook our food and don’t wear down our teeth,” says co-senior author Jukka Jernvall, an evolutionary biologist at the University of Helsinki, in Finland. “However, regarding rodents, it will be interesting to resolve the regional and taxonomic details of the 50 million year trend.”

Reference:
Mushegyan et al. Continuously growing rodent molars result from a predictable quantitative evolutionary change over 50 million years. Cell Reports, 2015 DOI: 10.1016/j.celrep.2015.03.064

Note: The above story is based on materials provided by Cell Press.

Catalina Island’s slow sink—and potential tsunami hazard

Underwater beach terraces around Santa Catalina Island suggest that the island is sinking. Credit: Chris Castillo, Stanford University

New images of ancient, underwater beach terraces around Santa Catalina Island suggest that the island is sinking, probably as a result of changes in the active fault systems around the island. At the rate that can be calculated so far, the island could disappear within three million years, as it is sinking approximately one foot every thousand years.

Chris Castillo of Stanford University and colleagues used a sort of underwater “ultrasound” called seismic reflection profiling to map out the traces of submerged marine terraces around the island, which correspond to beaches cut around the island at different sea levels. Although sea level around the island hasn’t dropped below 130 meters over the last 1 million years, the lowest level terraces are found at more than 350 meters, suggesting the island itself has sunk about 220 meters since these deepest terraces were formed.
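The subsidence estimate in that paragraph is simple subtraction, and the three-million-year projection follows from the quoted rate. A sketch using the article’s figures (the unit conversion is the only addition):

```python
# Subsidence inferred from terrace depth versus the lowest possible sea level.
deepest_terrace_m = 350.0   # depth of the lowest terraces observed
lowest_sea_level_m = 130.0  # sea level never fell below this in the last 1 Myr
subsidence_m = deepest_terrace_m - lowest_sea_level_m  # 220 m of sinking

# The quoted sinking rate: about one foot every thousand years.
foot_in_m = 0.3048
sunk_over_3_myr_m = foot_in_m * 3_000  # 3,000 kyr -> ~914 m of subsidence
```

At that rate, three million years is enough to submerge the island’s present relief, which is the basis of the “could disappear” projection.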

There are several faults around the island, dominantly strike-slip to the northeast and a transtensional fault to the island’s southwest side. Within the last one million years, the researchers say, changes have happened within this fault system so that one portion of the system has “shut off” while another has become more active and allowed the island to subside. Knowing more about the exact timing and placement of this change could help scientists understand more about the potential seismic risks across fault systems in southern California. The researchers also found evidence of old landslides along a fault on the island that points toward Los Angeles, according to Castillo, which could indicate that the island may pose a small tsunami risk for the city if such landslides occur again.

Castillo will present his research at the annual meeting of the Seismological Society of America (SSA) in Pasadena, Calif.

Note: The above story is based on materials provided by Seismological Society of America.

Calbuco Volcano Eruption

The Calbuco volcano erupted Wednesday for the first time in over 40 years, billowing a huge ash cloud over a sparsely populated, mountainous area in southern Chile. Authorities ordered the evacuation of the 1,500 inhabitants of the nearby town of Ensenada, along with residents of two smaller communities.

 

Video Provided by: Rodrigo Barrera

Spectacular Calbuco volcano eruption in Chile


Video Provided by: RT

Sexing Stegosaurus “Stegosaurus plates may have differed between male, female”

Some Stegosaurus had wide plates, some had tall, with the wide plates being up to 45 percent larger overall than the tall plates. According to a new study by Evan Saitta, a student at the University of Bristol, UK, the tall-plated Stegosaurus and the wide-plated Stegosaurus were not two distinct species, nor were they individuals of different ages: they were actually males and females. This is the first convincing evidence for sexual differences in a species of dinosaur. Credit: Copyright Evan Saitta

Stegosaurus, a large, herbivorous dinosaur with two staggered rows of bony plates along its back and two pairs of spikes at the end of its tail, lived roughly 150 million years ago during the Late Jurassic in the western United States.

Some individuals had wide plates, some had tall, with the wide plates being up to 45 per cent larger overall than the tall plates. According to the new study, the tall-plated Stegosaurus and the wide-plated Stegosaurus were not two distinct species, nor were they individuals of different ages: they were actually males and females.

Professor Michael Benton, Director of the Masters in Palaeobiology at the University of Bristol, said: “Evan made this discovery while he was completing his undergraduate thesis at Princeton University. It’s very impressive when an undergraduate makes such a major scientific discovery.”

Sexual dimorphism (a term used to describe distinct anatomical differences between males and females of the same species) is common in living animals — think of the manes of lions or the antlers of deer — yet is surprisingly difficult to determine in extinct species.

Despite many previous claims of sexual dimorphism in dinosaurs, current researchers find them to be inconclusive because they do not rule out other possible explanations for why differences in anatomy might be present between fossil specimens. For example, two individuals that differ in anatomy might be two separate species, a young and an old individual, or a male and a female individual.

Having spent six summers in central Montana as part of an excavation crew digging up the first ever Stegosaurus ‘graveyard’, Evan Saitta was able to test these alternative explanations and others in the species Stegosaurus mjosi.

The group of dinosaurs excavated in Montana demonstrated the coexistence of individuals that only varied in their plates. Other skeletal differences indicating separation of ecological niches would have been expected if the two were different species.

The study also found that the two varieties were not a result of growth. CT scanning at Billings Clinic in Montana, as well as thin sections sampled from the plates for microscope analysis, showed that the bone tissues had ceased growing in both varieties. Neither type of plate was in the process of growing into the other.

With other possibilities ruled out, the best explanation for the two varieties of plates is that one type belonged to males and the other, females.

Speculating about which is which, Evan Saitta said: “As males typically invest more in their ornamentation, the larger, wide plates likely came from males. These broad plates would have provided a great display surface to attract mates. The tall plates might have functioned as prickly predator deterrents in females.”

Stegosaurus may not have been the only dinosaur to exhibit sexual dimorphism. Other species showed extra-large crests or nose horns, which were potentially sexual features. Male animals often fight or display for mates, just like red deer or peacocks today.

Not only does Saitta’s work show that dinosaurs exhibited sexual dimorphism, it suggests that the ornamentation of at least some species was used for sexual display.

The presence of sexual dimorphism in an extinct species can provide scientists with a much clearer picture of its behaviour than would otherwise be possible.

Reference:
Saitta ET. Evidence for Sexual Dimorphism in the Plated Dinosaur Stegosaurus mjosi (Ornithischia, Stegosauria) from the Morrison Formation (Upper Jurassic) of Western USA. PLoS ONE, 2015. DOI: 10.1371/journal.pone.0123503

Note: The above story is based on materials provided by University of Bristol.

Magma intrusion is likely source of Colombia-Ecuador border quake swarms

Credit: R. Corredor Torres

The “seismic crisis” around the region of the Chiles and Cerro Negro de Mayasquer volcanoes near the Colombia-Ecuador border is likely caused by intruding magma, according to a report by R. Corredor Torres of the Servicio Geológico Colombiano and colleagues presented at the annual meeting of the Seismological Society of America (SSA).

The intruding magma appears to be interacting with the regional tectonics to spawn micro-earthquakes, which at their peak of activity numbered in the thousands each day. Most of the earthquakes were less than magnitude 3, although the largest to date, a magnitude 5.6, struck in October 2014. When the earthquake swarms began in 2013, Colombia’s Servicio Geológico Colombiano and Ecuador’s Instituto Geofísico of the Escuela Politécnica Nacional collaborated to set up a monitoring system to observe the swarms and judge the risk of volcanic eruption for the surrounding population.

The largest perceived threat of eruption came in the fall of 2014, when the activity level was changed from yellow to orange, meaning a probable occurrence of eruption in days to weeks. Due to the occurrence of a magnitude 5.6 earthquake and subsequent aftershocks, some houses in the area were damaged and local residents decided to sleep in tents to feel safe, accepting support from the Colombian Disaster Prevention Office, said Torres.

Data collected by the new monitoring stations suggest that most of the earthquakes in the area are volcano-tectonic quakes, which occur when the movement of magma, and the fluids and gases it releases, creates pressure changes in the rocks above. Based on the seismic activity in the area, the researchers infer that millions of cubic meters of magma have moved into the area deep under the Chiles and Cerro Negro volcanoes. However, both volcanoes appear to have been dormant for at least 10,000 years, and the tectonic stress in the region is compressive, both of which may be holding the magma back from erupting to the surface. So far, there have been no signs of ground swelling or outgassing at the surface, and the rate of earthquakes has slowed considerably this year from its peak of 7,000 to 8,000 micro-quakes per day in the fall of 2014.

Note: The above story is based on materials provided by Seismological Society of America.

More Americans at risk from strong earthquakes, says new report

More than 143 million Americans living in the 48 contiguous states are exposed to potentially damaging ground shaking from earthquakes, with as many as 28 million people likely to experience strong shaking during their lifetime, according to research discussed at the annual meeting of the Seismological Society of America. The report puts the average long-term value of building losses from earthquakes at $4.5 billion per year, with roughly 80 percent of losses attributed to California, Oregon and Washington.

“This analysis of data from the new National Seismic Hazard Maps reveals that significantly more Americans are exposed to earthquake shaking, reflecting both the movement of the population to higher risk areas on the west coast and a change in hazard assessments,” said co-author Bill Leith, senior science advisor at USGS. By comparison, FEMA estimated in 1994 that 75 million Americans in 39 states were at risk from earthquakes.

Kishor Jaiswal, a research contractor with the U.S. Geological Survey (USGS), presented the research conducted with colleagues from USGS, FEMA and California Geological Survey. They analyzed the 2014 National Seismic Hazard Maps and the latest data on infrastructure and population from LandScan, a product of Oak Ridge National Laboratory.

The report focuses on the 48 contiguous states, where more than 143 million people are exposed to ground motions from earthquakes, but Leith noted that nearly half the U.S. population, or nearly 150 million Americans, are at risk of shaking from earthquakes when Alaska, Puerto Rico and Hawaii are also considered.

In the highest hazard zones, where 28 million Americans will experience strong shaking during their lifetime, key infrastructure could also experience a shaking intensity sufficient to cause moderate to extensive damage. The analysis identified more than 6,000 fire stations, more than 800 hospitals and nearly 20,000 public and private schools that may be exposed to strong ground motion from earthquakes.

Using 2010 Census data, 2012 replacement cost values for buildings and FEMA’s Hazus program, researchers systematically calculated the losses that could occur in any given year, ranging from no losses to a very high value of loss. However, the long-term average loss to buildings in the contiguous U.S. is $4.5 billion per year, with most financial losses occurring in California, Oregon and Washington.

“Earthquakes remain an important threat to our economy,” said Jaiswal. “While the west coast may carry the larger burden of potential losses and the greatest threat from the strongest shaking, this report shows that the threat from earthquakes is widespread.”

Note: The above story is based on materials provided by Seismological Society of America.

Earthquake potential where there is no earthquake history

It may seem unlikely that a large earthquake would take place hundreds of kilometers away from a tectonic plate boundary, in areas with low levels of strain on the crust from tectonic motion. But major earthquakes such as the Mw 7.9 Wenchuan quake that struck China in 2008 and New Zealand’s Mw 6.3 Christchurch quake in 2011 have shown that large earthquakes do occur and can cause significant infrastructure damage and loss of life. So what should seismologists look for if they want to identify where an earthquake might happen despite the absence of historical seismic activity?

Roger Bilham of the University of Colorado shows that some of these regions had underlying features that could have been used to identify that the region was not as “aseismic” as previously thought. Some of these warning signs include debris deposits from past tsunamis or landslides, ancient mid-continent rifts that mark the scars of earlier tectonic boundaries, or old fault scarps worn down by hundreds or thousands of years of erosion.

Earth’s populated area where there is no written history makes for an enormous “search grid” for earthquakes. For example, the Caribbean coast of northern Colombia resembles a classic subduction zone with the potential for tsunamigenic M>8 earthquakes at millennial time scales, but the absence of a large earthquake since 1492 is cause for complacency among local populations.

These areas are not only restricted to the Americas. Bilham notes that in many parts of Asia, where huge populations now reside and critical facilities exist or are planned, a similar historical silence exists. Parts of the Himalaya and central and western India that have not had any major earthquake in more than 500 years could experience shaking at levels and durations that are unprecedented in their written histories.

Note: The above story is based on materials provided by Seismological Society of America.

Race to unravel Oklahoma’s artificial quakes

It’s the first thing that geologist Todd Halihan asks on a sunny spring afternoon at Oklahoma State University in Stillwater: “Did you feel the earthquake? My mother-in-law just called to complain that the house was shaking.”

Halihan’s mother-in-law has been calling a lot lately. Fifteen quakes of magnitude 4 or greater struck in 2014 — packing more than a century’s worth of normal seismic activity for the state into a single year. Oklahoma had twice as many earthquakes last year as California — a seismic hotspot — and researchers are racing to understand why before the next major one strikes.

Whatever they learn will apply to seismic hazards worldwide. Oklahoma’s quakes have been linked to underground wells where oil and gas operations dispose of waste water, but mining, geothermal energy and other underground explorations have triggered earthquakes from South Africa to Switzerland. This week, at a meeting of the Seismological Society of America in Pasadena, California, scientists will discuss how the risk from human-induced quakes differs from that of natural quakes — and how society can prepare for it.

In Oklahoma, the earthquakes have unleashed a frenzy of finger-pointing, with angry residents suing oil and gas companies over damage to their homes. The industry and politicians are locked in fierce debates about whether the quakes are induced, but the unprecedented shaking across central and northern parts of the state matches almost exactly with the activity of water-disposal wells. “There are some who will argue that it is purely natural,” says Halihan. “But by now it’s pretty clear it’s not.”

Companies drill into the ground to extract oil and gas mixed with salt water, essentially the brine from a long-fossilized sea. They separate out the fuels and then inject the salt water into deep disposal wells (there are more than 4,600 in Oklahoma). State regulations require that the salt water be disposed of in rock layers below those that hold drinking water.

Stress fracture

Much of the liquid ends up in a rock formation called the Arbuckle, which underlies much of Oklahoma and is known for its ability to absorb huge volumes of water. But in many places the Arbuckle rests on brittle, ancient basement rocks, which can fracture along major faults under stress. “The deeper you inject, the more likely it is that the injected brine is going to make its way into a seismogenic fault zone, prone to producing earthquakes,” says Arthur McGarr, who leads research on induced quakes at the US Geological Survey (USGS) in Menlo Park, California.

Oil and gas companies operate disposal wells across the central United States, and although Oklahoma stands out for the sheer volume of waste water, other states may also be experiencing triggered earthquakes. A report in Nature Communications this week, for example, links brine injection to a series of quakes that began in November 2013 near Azle, Texas.

The basic physics of the process has been understood since the 1970s, when scientists from the USGS pumped water down a well in Rangely, Colorado, and recorded how earthquake activity rose and petered out as they varied the amount of fluid. The question now is which faults are likely to rupture in Oklahoma, and how large an earthquake they might produce.

Whether a fault breaks in an earthquake depends on how it sits in relation to the stresses that compress Earth’s crust. The movement of tectonic plates is squeezing Oklahoma from east to west, so most of the earthquakes are happening along faults oriented northwest to southeast, or northeast to southwest. Other faults are less likely to rupture, says McGarr.

The biggest earthquake ever recorded in Oklahoma was a magnitude-5.6 event near the town of Prague in November 2011, and many seismologists think that it was induced by nearby disposal wells. Theoretical work suggests that the potential size of a quake grows with the volume of fluid injected into the ground. The biggest disposal wells in Oklahoma inject more than 60 million litres of waste water each month.
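The volume-to-magnitude relationship mentioned above can be sketched with a McGarr-style bound, in which the maximum seismic moment scales with the shear modulus times the net injected volume. The shear modulus value and the assumed five-year injection duration below are illustrative assumptions, not figures from the article:

```python
import math

def max_magnitude_mcgarr(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Upper bound on moment magnitude for a given cumulative injected volume.

    Follows the McGarr-style scaling in which the maximum seismic moment
    M0 (in N*m) is bounded by shear modulus times net injected volume.
    """
    m0 = shear_modulus_pa * injected_volume_m3
    # Standard moment-magnitude conversion: Mw = (2/3) * (log10(M0) - 9.1)
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A large Oklahoma disposal well: 60 million litres/month = 60,000 m^3/month.
monthly_m3 = 60e6 / 1000.0
cumulative_m3 = monthly_m3 * 12 * 5  # five years of injection (assumed duration)
print(round(max_magnitude_mcgarr(cumulative_m3), 2))  # about Mw 5.3
```

Under these assumptions the bound lands near the magnitude-5.6 Prague event, which is consistent with the theoretical work the article describes, though real maximum magnitudes depend on fault geometry and pre-existing stress as well.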

Austin Holland, the state seismologist at the Oklahoma Geological Survey in Norman, estimates that the chance of another earthquake of magnitude 5 or greater striking the state in the next year is about 30%. “That is not the kind of lottery we want to win,” he says.

Oklahoma has designated buffer zones, requiring extra scrutiny for disposal wells within 10 kilometres of sites of earthquake swarms or quakes of magnitude 4 or greater. As of 18 April, operators must also prove they are not injecting into or near basement rocks, or must cut their disposal volumes by half.
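As a rough sketch of how a 10-kilometre buffer rule like this could be screened in practice, the snippet below flags wells within a given great-circle distance of known epicenters using the standard haversine formula. The well and epicenter coordinates are hypothetical, chosen only for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def wells_in_buffer(wells, epicenters, radius_km=10.0):
    """Return the wells lying within radius_km of any listed epicenter."""
    return [w for w in wells
            if any(haversine_km(w[1], w[2], e[0], e[1]) <= radius_km
                   for e in epicenters)]

# Hypothetical coordinates for illustration only.
wells = [("well-A", 35.84, -96.69), ("well-B", 36.50, -98.00)]
epicenters = [(35.87, -96.76)]  # near Prague, OK
print(wells_in_buffer(wells, epicenters))  # well-A lies about 7 km away, inside the buffer
```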

Yet oil and gas companies hold great political power in Oklahoma, and regulators continue to emphasize what they call uncertainty in linking injection wells to quakes. “We felt a big quake one Friday night and I knew we had permitted a brand-new Arbuckle disposal well not three miles from my house,” said Tim Baker, director of the oil and gas division of the Oklahoma Corporation Commission, which regulates drilling, at a town-hall meeting in suburban Oklahoma City this month. “I drove to that well to inspect it on Saturday morning, and it wasn’t even turned on. That’s how complex this issue is.”

The related — and controversial — technique of hydraulic fracturing, in which water is injected into rock to open cracks so oil and gas can flow more easily, has also been linked to earthquakes, but to a much lesser extent. Fracking involves injecting less water over shorter periods of time, and has not been tied to any earthquake greater than magnitude 4.

Seismic survey

One group of geologists wants to explore exactly how disposal wells might cause earthquakes. The team hopes to find a remote corner of Oklahoma and inject fluids deep underground while monitoring seismicity, in a modern analogue to the 1970s experiments in Colorado. “It’s a very ambitious goal, but we want to do a controlled field-scale experiment,” says Ze’ev Reches, a geophysicist at the University of Oklahoma in Norman and a co-leader of the project. But with Oklahomans already on edge, it is not clear whether the team could pull off such an experiment. So far, it remains hypothetical.

For now, seismologists are just trying to keep up with the quakes. The state geological survey recently gave up naming earthquake swarms, because the quakes simply never stopped, says Amberlee Darold, an agency seismologist. (The survey used to name swarms after nearby towns; it now identifies huge swathes of continuous activity by county.)

In the 15-storey brick Earth sciences building on the University of Oklahoma campus in Norman, statues celebrate the state’s ‘wildcatters’ who made it big in oil and gas, and a well-manicured garden nearby is dedicated to their achievements. Holland and Darold labour in the building’s dark basement, compiling a database of Oklahoma’s faults and trying to make sure that every earthquake is documented.

Many scientists are worried that the state’s buildings are not constructed to standards that consider seismic risk, and are concerned about how old brick-and-mortar structures would hold up in a large earthquake. The USGS issues national seismic-hazard maps every few years, but has never included the risk from induced quakes. This year, for the first time, the agency is developing induced-seismicity hazard maps for Oklahoma and surrounding states. The first of these is likely to be out by the end of 2015, says McGarr.

In Cushing, almost 60 kilometres north of Prague, crude-oil pipelines from across the continent meet. Fences topped with razor wire are meant to protect huge oil-storage tanks from a terrorist attack, but will not help if a major earthquake strikes, says Halihan.

In the meantime, he sits and waits to hear about the next quake. If he does not want to rely on his mother-in-law, Halihan can track the tremors by watching the movement of a small brass marker pinned to his office wall. It used to shake about once a week. Now it does so almost every day.

Note: The above story is based on materials provided by Nature.

Look mom, no eardrums! “Studying evolution beyond the fossil record”

(Left) Generalized schematic showing the relative position of the first pharyngeal pouch and elements in the first pharyngeal arch. The upper (green) and lower (blue) elements in the first pharyngeal arch form the primary jaw joint (red circle). (Top) Reptiles/birds. The schematic on the left shows the eardrum forming between the first pharyngeal pouch and the ear canal. It is connected to one middle ear bone and the quadrate bone (green), which is part of the upper jaw. The schematic on the right shows these bones in a lizard with the primary jaw joint located between the upper and lower jaws. (Bottom) Mammals. The schematic on the left shows the eardrum forming between the first pharyngeal pouch and the ear canal. It is connected to the hammer bone (blue), which along with the anvil (green) make up two of the three middle ear bones. The schematic on the right shows the middle ear in a human embryo, with the primary jaw joint located within the middle ear, and the hammer (blue) connected to the lower jaw. Credit: RIKEN

Researchers at the RIKEN Evolutionary Morphology Laboratory and the University of Tokyo in Japan have determined that the eardrum evolved independently in mammals and diapsids–the taxonomic group that includes reptiles and birds. Published in Nature Communications, the work shows that the mammalian eardrum depends on lower jaw formation, while that of diapsids develops from the upper jaw. Significantly, the researchers used techniques borrowed from developmental biology to answer a question that has intrigued paleontologists for years.

The evolution of the eardrum and the middle ear is what has allowed mammals, reptiles, and birds to hear through the air. Their eardrums all look similar, are formed when the ear canal reaches the first pharyngeal pouch, and function similarly. However, the fossil record shows that the middle ears in these two lineages are fundamentally different, with two of the bones that make up the mammalian middle ear–the hammer and the anvil–being homologous with parts of diapsid jawbones–the articular and quadrate. In both lineages, these bones connect at what is called the primary jaw joint.

Although scientists have suspected that the eardrum–and thus hearing–developed independently in mammals and diapsids, no hard evidence has been found in the fossil record because the eardrum is never fossilized. To overcome this difficulty, the research team and their collaborators turned to evolutionary developmental biology–or “evo-devo.” They noted that in mammals, the eardrum attaches to the tympanic ring–a bone derived from the lower jaw, but that in diapsids it attaches to the quadrate–an upper jawbone. Hypothesizing that eardrum evolution was related to these different jawbones, they performed a series of experiments that manipulated lower jaw development in mice and chickens.

First they examined eardrum development in mice that lacked the Ednra receptor, a condition known to inhibit lower jaw development. They found that these mice also lacked eardrums and ear canals, showing that their development was contingent on lower jaw formation.

Next, they used an Ednra-receptor antagonist to block proper development of the lower jaw in chickens. Rather than losing the eardrum, this manipulation created duplicate eardrums and ear canals, with the additional set forming from upper jaw components that had developed within the malformed lower jaw.

To understand how the eardrum evolved twice and why it is associated with different jaw components, the researchers looked at expression of Bapx1–a marker for the primary jaw joint–and its position relative to the first pharyngeal pouch. They found that in mouse embryos, Bapx1 was expressed in cells slightly below the first pharyngeal pouch and that in chickens it was expressed considerably lower. This difference forces the eardrum to develop below the primary jaw joint in mammals, necessitating an association with the lower jaw, and above the joint in diapsids, necessitating an association with the upper jaw.

While scientists still do not know how or why the primary jaw joint shifted upwards in mammals, the study shows that the eardrum developed after this shift, and its evolution must therefore have occurred independently after the mammal and diapsid lineages diverged from their common ancestor. Emphasizing the importance of this evo-devo approach, Chief Scientist Shigeru Kuratani notes that, “convergent evolution can often result in structures that resemble each other so much that they appear to be homologous. But, developmental analyses can often reveal their different origins.”

For structures like the eardrum that do not fossilize, the evo-devo approach is even more important. Lead author Masaki Takechi speculates that, “this approach to studying middle ear evolution could help us understand other related evolutionary changes in mammals, including the ability to detect higher toned sounds and even our greater metabolic efficiency.”

Reference:
Kitazawa T, Takechi M, Hirasawa T, Adachi N, Narboux-Neme N, Kume H, Maeda K, Hirai T, Miyagawa-Tomita S, Kurihara Y, Hitomi J, Levi G, Kuratani S, and Kurihara H. (2015) Developmental Genetic Bases behind the Independent Origin of the Tympanic Membrane in Mammals and Diapsids. Nature Communications. DOI:10.1038/ncomms7853

Note: The above story is based on materials provided by RIKEN.

Combination of gas field fluid injection and removal is most likely cause of 2013-14 earthquakes

Several natural and man-made factors can influence the subsurface stress regime, resulting in earthquakes. Natural ones include intraplate stress changes related to plate tectonics and natural water-table or lake-level variations caused by changing weather or drainage patterns over time, or the advance or retreat of glaciers. Man-made factors include human-generated changes to the water table, including dam construction, and industrial activities involving the injection or removal of fluids from the subsurface. Credit: Nature Communications/SMU

A seismology team led by Southern Methodist University (SMU), Dallas, finds that high volumes of wastewater injection combined with saltwater (brine) extraction from natural gas wells is the most likely cause of earthquakes occurring near Azle, Texas, from late 2013 through spring 2014.

In an area where the seismology team identified two intersecting faults, they developed a sophisticated 3D model to assess the changing fluid pressure within a rock formation in the affected area. They used the model to estimate stress changes induced in the area by two wastewater injection wells and the more than 70 production wells that remove both natural gas and significant volumes of salty water known as brine.

Conclusions from the modeling study integrate a broad range of estimates for uncertain subsurface conditions. Ultimately, better information on fluid volumes, flow parameters, and subsurface pressures in the region will provide more accurate estimates of the fluid pressure along this fault.

“The model shows that a pressure differential develops along one of the faults as a combined result of high fluid injection rates to the west and high water removal rates to the east,” said Matthew Hornbach, SMU associate professor of geophysics. “When we ran the model over a 10-year period through a wide range of parameters, it predicted pressure changes significant enough to trigger earthquakes on faults that are already stressed.”
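A back-of-the-envelope check on whether a decade of injection can plausibly perturb pressure at a fault kilometres away uses the characteristic diffusion length, L = sqrt(4Dt). This is a crude scaling, not the SMU team's 3D model, and the hydraulic diffusivity values below are illustrative assumptions:

```python
import math

def diffusion_length_km(hydraulic_diffusivity_m2_s, years):
    """Characteristic distance a pore-pressure perturbation spreads in a
    given time, L = sqrt(4 * D * t), for hydraulic diffusivity D."""
    seconds = years * 365.25 * 24 * 3600.0
    return math.sqrt(4.0 * hydraulic_diffusivity_m2_s * seconds) / 1000.0

# D of roughly 0.1 to 1 m^2/s is a plausible range for a permeable
# formation (an assumption for illustration, not a value from the study).
for d in (0.1, 1.0):
    print(round(diffusion_length_km(d, 10), 1))  # 11.2 then 35.5 km
```

Even at the low end, pressure changes over ten years can reach faults many kilometres from a well, which is why models of this kind have to consider injection and extraction sites on both sides of a fault.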

Model-predicted stress changes on the fault were typically tens to thousands of times larger than stress changes associated with water level fluctuations caused by the recent Texas drought.

“What we refer to as induced seismicity — earthquakes caused by something other than strictly natural forces — is often associated with subsurface pressure changes,” said Heather DeShon, SMU associate professor of geophysics. “We can rule out stress changes induced by local water table changes. While some uncertainties remain, it is unlikely that natural increases to tectonic stresses led to these events.”

DeShon explained that some ancient faults in the region are more susceptible to movement — “near critically stressed” — due to their orientation and direction. “In other words, surprisingly small changes in stress can reactivate certain faults in the region and cause earthquakes,” DeShon said.

The study, “Causal Factors for Seismicity near Azle, Texas,” has been published online in the journal Nature Communications.

The study was produced by a team of scientists from SMU’s Department of Earth Sciences in Dedman College of Humanities and Sciences, the U.S. Geological Survey, the University of Texas Institute for Geophysics and the University of Texas Department of Petroleum and Geosystems Engineering. SMU scientists Hornbach and DeShon are the lead authors.

SMU seismologists have been studying earthquakes in North Texas since 2008, when the first series of felt tremors hit near DFW International Airport between Oct. 30, 2008, and May 16, 2009. Next came a series of quakes in Cleburne between June 2009 and June 2010, and this third series in the Azle-Reno area northwest of Fort Worth occurred between November 2013 and January 2014. The SMU team also is studying an ongoing series of earthquakes in the Irving-Dallas area that began in April 2014.

In both the DFW sequence and the Cleburne sequence, the operation of injection wells used in the disposal of natural gas production fluids was listed as a possible cause of the seismicity. The Azle study is the first to introduce fluid-pressure modeling of both industry activity and water-table fluctuations, which allowed the SMU team to move beyond assessing possible causes to identifying the most likely cause in this report.

Prior to the DFW Airport earthquakes in 2008, an earthquake large enough to be felt had not been reported in the North Texas area since 1950. The North Texas earthquakes of the last seven years have all occurred in areas developed for natural gas extraction from a geologic formation known as the Barnett Shale. The Texas Railroad Commission reports that production in the Barnett Shale grew exponentially from 216 million cubic feet a day in 2000, to 4.4 billion cubic feet a day in 2008, to a peak of 5.74 billion cubic feet of gas a day in 2012.

While the SMU Azle study adds to the growing body of evidence connecting some injection wells and, to a lesser extent, some oil and gas production to induced earthquakes, SMU’s team notes that there are many thousands of injection and/or production wells that are not associated with earthquakes.

The area of study addressed in the report is in the Newark East Gas Field (NEGF), north and east of Azle. In this field, hydraulic fracturing is applied to loosen and extract gas trapped in the Barnett Shale, a sedimentary rock formation formed approximately 350 million years ago. The report explains that along with natural gas, production wells in the Azle area of the NEGF can also bring to the surface significant volumes of water from the highly permeable Ellenburger Formation — both naturally occurring brine as well as fluids that were introduced during the fracking process.

Subsurface fluid pressures are known to play a key role in causing seismicity. A primer produced by the U.S. Department of Energy explains the interplay of fluids and faults:

“The fluid pressure in the pores and fractures of the rocks is called the ‘pore pressure.’ The pore pressure acts against the weight of the rock and the forces holding the rock together (stresses due to tectonic forces). If the pore pressures are low (especially compared to the forces holding the rock together), then only the imbalance of natural in situ earth stresses will cause an occasional earthquake. If, however, pore pressures increase, then it would take less of an imbalance of in situ stresses to cause an earthquake, thus accelerating earthquake activity. This type of failure…is called shear failure. Injecting fluids into the subsurface is one way of increasing the pore pressure and causing faults and fractures to “fail” more easily, thus inducing an earthquake. Thus, induced seismicity can be caused by injecting fluid into the subsurface or by extracting fluids at a rate that causes subsidence and/or slippage along planes of weakness in the earth.”
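The primer's shear-failure logic reduces to the Mohr-Coulomb criterion: slip occurs once shear stress exceeds friction times the effective normal stress (normal stress minus pore pressure), plus any cohesion. A minimal sketch, with illustrative stress values and a commonly used friction coefficient of 0.6:

```python
def coulomb_failure(shear_stress_mpa, normal_stress_mpa, pore_pressure_mpa,
                    friction=0.6, cohesion_mpa=0.0):
    """Return True if the Mohr-Coulomb criterion predicts slip on a fault.

    Slip occurs when the shear stress on the fault exceeds the frictional
    resistance computed from the effective normal stress, i.e. the normal
    stress minus the pore pressure.
    """
    effective_normal_mpa = normal_stress_mpa - pore_pressure_mpa
    return shear_stress_mpa >= friction * effective_normal_mpa + cohesion_mpa

# Illustrative values for a fault loaded close to failure (not from the primer).
tau_mpa, sigma_n_mpa = 34.0, 60.0
print(coulomb_failure(tau_mpa, sigma_n_mpa, pore_pressure_mpa=0.0))  # False: stable
print(coulomb_failure(tau_mpa, sigma_n_mpa, pore_pressure_mpa=5.0))  # True: slip
```

The example makes the primer's point concrete: the fault holds under natural conditions, but a modest injection-driven pressure increase lowers the effective normal stress enough to trigger slip.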

All seismic waveform data used in the compilation of the report are publicly available at the IRIS Data Management Center. Wastewater injection, brine production and surface injection pressure data are publicly available at the Texas Railroad Commission (TRC). Craig Pearson at the TRC, Bob Patterson from the Upper Trinity Groundwater Conservation District; scientists at XTO Energy, ExxonMobil, MorningStar Partners and EnerVest provided valuable discussions and, in some instances, data used in the completion of the report.

“This report points to the need for even more study in connection with earthquakes in North Texas,” said Brian Stump, SMU’s Albritton Chair in Earth Sciences. “Industry is an important source for key data, and the scope of the research needed to understand these earthquakes requires government support at multiple levels.”

Reference:
Matthew J. Hornbach, Heather R. DeShon, William L. Ellsworth, Brian W. Stump, Chris Hayward, Cliff Frohlich, Harrison R. Oldham, Jon E. Olson, M. Beatrice Magnani, Casey Brokaw, James H. Luetgert. Causal factors for seismicity near Azle, Texas. Nature Communications, 2015; 6: 6728 DOI: 10.1038/ncomms7728

Note: The above story is based on materials provided by Southern Methodist University.

Proteins for anxiety in humans and moulting in insects have common origin

This is a sea urchin. Credit: MR Elphick/QMUL

Neuropeptides are small proteins in the brains of all animals that bind to receptor proteins and cause activity in cells. The researchers at Queen Mary University of London, led by Professor Maurice Elphick, were investigating whether a particular sea urchin neuropeptide was an evolutionary link between neuropeptides in humans and insects.

The last common ancestor of humans, sea urchins and insects probably lived over 600 million years ago. We will almost certainly never know what it looked like, or find an example of it in the fossil record, but we can tell a lot about it by looking at genes and proteins in its evolutionary descendants.

Neuropeptide molecules are difficult to study in this way because they are small, often only a few amino acids long and much shorter than most proteins, so evolutionary patterns can be difficult to identify.

Dean Semmens, a PhD student at QMUL and first author of the paper said:

“The remarkable process of evolution means that molecules that once had the same function can, over hundreds of millions of years, change to control such different processes as anxiety in humans and moulting in insects.

“Despite their alien looking shape sea urchins are comparatively close relatives of humans, certainly much closer than insects. For this reason, as with this discovery, they can help us determine the evolutionary history and origins of important molecules in our brain.”

Reference:
‘Discovery of sea urchin NGFFFamide receptor unites a bilaterian neuropeptide family’ by Semmens DC, Beets I, Rowe ML, Blowes LM, Oliveri P, Elphick MR. 2015 is published in Open Biology. DOI: 10.1098/rsob.150030

Note: The above story is based on materials provided by Queen Mary, University of London.

Earth Day

Earth Day is an annual event, celebrated on April 22, on which events are held worldwide to demonstrate support for environmental protection. It was first celebrated in 1970, is now coordinated globally by the Earth Day Network, and is celebrated in more than 192 countries each year.

In 1969 at a UNESCO Conference in San Francisco, peace activist John McConnell proposed a day to honor the Earth and the concept of peace, to first be celebrated on March 21, 1970, the first day of spring in the northern hemisphere. This day of nature’s equipoise was later sanctioned in a Proclamation written by McConnell and signed by Secretary General U Thant at the United Nations. A month later a separate Earth Day was founded by United States Senator Gaylord Nelson as an environmental teach-in first held on April 22, 1970. Nelson was later awarded the Presidential Medal of Freedom in recognition of his work. While this April 22 Earth Day was focused on the United States, an organization launched by Denis Hayes, who was the original national coordinator in 1970, took it international in 1990 and organized events in 141 nations. Numerous communities celebrate Earth Week, an entire week of activities focused on environmental issues.

The History of a Movement

Each year, Earth Day — April 22 — marks the anniversary of what many consider the birth of the modern environmental movement in 1970.

The height of hippie and flower-child culture in the United States, 1970 brought the death of Jimi Hendrix, the last Beatles album, and Simon & Garfunkel’s “Bridge Over Troubled Water”. Protest was the order of the day, but saving the planet was not the cause. War raged in Vietnam, and students nationwide increasingly opposed it.

At the time, Americans were slurping leaded gas through massive V8 sedans. Industry belched out smoke and sludge with little fear of legal consequences or bad press. Air pollution was commonly accepted as the smell of prosperity. “Environment” was a word that appeared more often in spelling bees than on the evening news. Although mainstream America remained oblivious to environmental concerns, the stage had been set for change by the publication of Rachel Carson’s New York Times bestseller Silent Spring in 1962. The book represented a watershed moment for the modern environmental movement, selling more than 500,000 copies in 24 countries; up until that moment, Carson had done more than any other person to raise public awareness and concern for living organisms, the environment and public health.

Earth Day 1970 capitalized on the emerging consciousness, channeling the energy of the anti-war protest movement and putting environmental concerns front and center.

The Idea

The idea came to Earth Day founder Gaylord Nelson, then a U.S. Senator from Wisconsin, after witnessing the ravages of the 1969 massive oil spill in Santa Barbara, California. Inspired by the student anti-war movement, he realized that if he could infuse that energy with an emerging public consciousness about air and water pollution, it would force environmental protection onto the national political agenda. Senator Nelson announced the idea for a “national teach-in on the environment” to the national media; persuaded Pete McCloskey, a conservation-minded Republican Congressman, to serve as his co-chair; and recruited Denis Hayes as national coordinator. Hayes built a national staff of 85 to promote events across the land.

As a result, on the 22nd of April, 20 million Americans took to the streets, parks, and auditoriums to demonstrate for a healthy, sustainable environment in massive coast-to-coast rallies. Thousands of colleges and universities organized protests against the deterioration of the environment. Groups that had been fighting against oil spills, polluting factories and power plants, raw sewage, toxic dumps, pesticides, freeways, the loss of wilderness, and the extinction of wildlife suddenly realized they shared common values.

Earth Day 1970 achieved a rare political alignment, enlisting support from Republicans and Democrats, rich and poor, city slickers and farmers, tycoons and labor leaders. The first Earth Day led to the creation of the United States Environmental Protection Agency and the passage of the Clean Air, Clean Water, and Endangered Species Acts. “It was a gamble,” Gaylord recalled, “but it worked.”

As 1990 approached, a group of environmental leaders asked Denis Hayes to organize another big campaign. This time, Earth Day went global, mobilizing 200 million people in 141 countries and lifting environmental issues onto the world stage. Earth Day 1990 gave a huge boost to recycling efforts worldwide and helped pave the way for the 1992 United Nations Earth Summit in Rio de Janeiro. It also prompted President Bill Clinton to award Senator Nelson the Presidential Medal of Freedom (1995) — the highest honor given to civilians in the United States — for his role as Earth Day founder.

Earth Day Today

As the millennium approached, Hayes agreed to spearhead another campaign, this time focused on global warming and a push for clean energy. With 5,000 environmental groups in a record 184 countries reaching out to hundreds of millions of people, Earth Day 2000 combined the big-picture feistiness of the first Earth Day with the international grassroots activism of Earth Day 1990. It used the Internet to organize activists, but also featured a talking drum chain that traveled from village to village in Gabon, Africa, and hundreds of thousands of people gathered on the National Mall in Washington, DC. Earth Day 2000 sent world leaders the loud and clear message that citizens around the world wanted quick and decisive action on clean energy.

Much like 1970, Earth Day 2010 came at a time of great challenge for the environmental community. Climate change deniers, well-funded oil lobbyists, reticent politicians, a disinterested public, and a divided environmental community all contributed to a strong narrative that overshadowed the cause of progress and change. In spite of the challenge, for its 40th anniversary, Earth Day Network reestablished Earth Day as a powerful focal point around which people could demonstrate their commitment. Earth Day Network brought 225,000 people to the National Mall for a Climate Rally, amassed 40 million environmental service actions toward its 2012 goal of A Billion Acts of Green®, launched an international, 1-million tree planting initiative with Avatar director James Cameron and tripled its online base to over 900,000 community members.

The fight for a clean environment continues in a climate of increasing urgency, as the ravages of climate change become more manifest every day. We invite you to be a part of Earth Day and help write many more victories and successes into our history. Discover energy you didn’t even know you had. Feel it rumble through the grassroots under your feet and the technology at your fingertips. Channel it into building a clean, healthy, diverse world for generations to come.

The Earth Day name

According to Nelson, the moniker “Earth Day” was “an obvious and logical name” suggested by “a number of people” in the fall of 1969, including, he writes, both “a friend of mine who had been in the field of public relations” and “a New York advertising executive,” Julian Koenig. Koenig, who had been on Nelson’s organizing committee in 1969, has said that the idea came to him by the coincidence of his birthday with the day selected, April 22; “Earth Day” rhyming with “birthday,” the connection seemed natural. Other names circulated during preparations—Nelson himself continued to call it the National Environment Teach-In, but national coordinator Denis Hayes used the term Earth Day in his communications and press coverage of the event was “practically unanimous” in its use of “Earth Day,” so the name stuck. The introduction of the name “Earth Day” was also claimed by John McConnell (see “Equinox Earth Day,” below).

Reference:
Wikipedia: Earth Day
Earth Day Network: Earth Day: The History of a Movement

Uranium isotopes carry the fingerprint of ancient bacterial activity

Non-soluble and soluble uranium Credit: Alain Herzog, 2015

The oceans and other water bodies contain billions of tons of dissolved uranium. Over the planet’s history, some of this uranium was transformed into an insoluble form, causing it to precipitate and accumulate in sediments. There are two ways that uranium can go from a soluble to an insoluble form: either through the action of live organisms – bacteria – or by interacting chemically with certain minerals. Knowing which pathway was taken can provide valuable insight into the evolution and activity of microbial biology over Earth’s history.

Publishing in the journal PNAS, an international team of researchers led by the Ecole Polytechnique Fédérale de Lausanne in Switzerland describes a new method that uses the isotopic composition of uranium to distinguish between these alternative pathways.

The link between bacteria and the rock record is not new. Under certain conditions, bacteria interact biochemically with dissolved ions such as sulfur, or uranium, causing them to become insoluble and precipitate, contributing to their accumulation in oceanic sediments. But for the first time, scientists can determine whether bacteria were active at the time and place the sediments were formed by analyzing tiny amounts of uranium present in sediments.

Picky electron donors

The fact that bacteria and uranium interact at all may sound somewhat surprising. But as Rizlan Bernier-Latmani, the study’s principal investigator explains, to complete certain metabolic processes, the bacteria need to get rid of electrons, and dissolved uranium just happens to be capable of taking them up. Uranium is far from being the only metal to which bacteria donate extra electrons. But once it precipitates in its insoluble form, uranium is the only metal known to date that preserves a signal that scientists can analyze to detect whether bacteria were involved in its transformation.

What makes uranium unique is that bacteria are picky when it comes to the atomic weight of the uranium to which they donate electrons. Of the two most abundant uranium isotopes found on earth – uranium-238 and uranium-235 – bacteria seem to prefer the heavier uranium-238. The chemical transformation pathway, by contrast, treats both forms of uranium equally. As a result, a slightly higher ratio between heavy and light isotopes in solid uranium extracted from the ground points at a bacterial transformation process.
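The heavy-to-light ratio comparison described here is conventionally reported in delta notation, the permil deviation of a sample's 238U/235U ratio from a reference standard. The use of the CRM-112a reference value of about 137.88 below is a standard geochemical convention, not a detail given in the article:

```python
def delta_238u_permil(ratio_sample, ratio_standard=137.88):
    """delta-238U in permil: deviation of a sample's 238U/235U ratio from
    a reference standard (the CRM-112a value of ~137.88 is conventional).

    Biotic reduction enriches the heavy isotope in the solid, so positive
    delta values point toward a bacterial transformation pathway.
    """
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# A solid whose 238U/235U ratio sits 0.1% above the standard:
print(round(delta_238u_permil(137.88 * 1.001), 3))  # 1.0 permil
```

Shifts of this order, fractions of a permil to about one permil, are what distinguish a biological signature from the isotopically even-handed chemical pathway.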

The evolution of life

Being able to discriminate between the two pathways gives researchers a unique tool to probe environmental niches occupied by bacteria billions of years ago. Applying their methodology to existing data from Archean sediments in Western Australia, the authors argue that uranium found in oxygen-depleted sediments there was immobilized biologically. Bacteria, they argue, were already active 2.5 billion years ago when the sediments were formed.

To an environmental biogeochemist like Bernier-Latmani, knowing whether or not bacteria were active at that time and place is exciting, as it could provide new insight into the planet’s chemical evolution, for example into the abundance of free oxygen in the oceans and the atmosphere. “We have some understanding of how oxygen concentrations in the atmosphere and oceans evolved over time. There is increasing evidence that traces of oxygen were available already billions of years ago in an overall anoxic world – and bacteria existed that indirectly used it. These changes have a direct bearing on the evolution of life and on mass extinctions,” she says. In the complex puzzle of the planet’s early history, uranium could be holding some of the missing pieces.

The research was carried out in collaboration with researchers from the Institute of Mineralogy at Leibniz University in Hannover, Germany, and the School of Earth and Space Exploration at Arizona State University in Arizona, USA.

Reference:
Uranium isotopes fingerprint biotic reduction. Proceedings of the National Academy of Sciences, published online before print on April 20, 2015. DOI: 10.1073/pnas.1421841112

Edited by Donald E. Canfield, Institute of Biology and Nordic Center for Earth Evolution, University of Southern Denmark, Odense M., Denmark, and approved March 23, 2015.

Note : The above story is based on materials provided by Ecole Polytechnique Fédérale de Lausanne.

Oldest stone tools raise questions about their creators

Excavators at Lomekwi, Kenya, in 2011. Credit: MPK/WTAP

The oldest stone tools on record may spell the end for the theory that complex toolmaking began with the genus Homo, to which humans belong. The 3.3-million-year-old artefacts, revealed at a conference in California last week, predate the first members of Homo, and suggest that more-ancient hominin ancestors had the intelligence and dexterity to craft sophisticated tools.

“This is a landmark discovery pertaining to one of the key evolutionary milestones,” says Zeresenay Alemseged, a palaeoanthropologist at the California Academy of Sciences in San Francisco, who attended the talk at the annual meeting of the Paleoanthropology Society in San Francisco, on 14 April.

More than 80 years ago, anthropologist Louis Leakey found stone tools in Olduvai Gorge in Tanzania. Decades later, he and his wife Mary and their team found bones from a species that the Leakeys named Homo habilis — ‘the handy man’. This led to the prevailing view that human stone-tool use began with Homo, a group that includes modern humans and their big-brained and tall forebears. The oldest of these Oldowan tools date to 2.6 million years ago — around the time of the earliest Homo fossils. Climate upheavals that transformed dense forest into open savannah might have catalysed ancient humans into developing the new technology so that they could hunt or scavenge grass-eating animals, the theory goes.

Chimpanzees and other non-human primates use stones to crack nuts, for instance, but their tools lack the craftsmanship of the Oldowan toolmakers, who would strike one rock against another, breaking off flakes to leave a sharp-edged stone core.

In 2010, Alemseged and his team reported an intriguing find at a site called Dikika in Ethiopia (S. P. McPherron et al. Nature 466, 857–860; 2010). They saw cut marks on bones from 3.4 million years ago, when ape-like creatures such as Australopithecus afarensis — the same species as the famous fossil called Lucy — roamed eastern Africa. This hinted at even earlier manufacturing of stone tools. Other researchers questioned the find, attributing the marks to natural wear and tear such as trampling, or bites inflicted by crocodiles.

Aware of this controversy, a team led by Sonia Harmand of Stony Brook University in New York set out in 2011 to find tools older than 3 million years, at a site west of Kenya’s Lake Turkana. On a July day, the team took a wrong turn and happened upon a patch of land that seemed worth exploring. By tea time, they had found pieces of rock lying on the ground that looked like flakes left over from the manufacture of stone tools. Careful excavation of the patch revealed 19 buried artefacts, including stone core forms, and dozens more on the surface. One key surface find was a small rock flake, which fitted in a gap in a buried core as snugly as a jigsaw puzzle piece, confirming that the tools were made through a flaking process.

The tools come from sediments that Harmand’s team dated to around 3.3 million years ago and are much larger than the Oldowan artefacts: some weigh as much as 15 kilograms. The team concluded that the tools represent a distinct culture, which they have named the Lomekwian culture after the site where the implements were found. “Lomekwi marks a new beginning to the known archaeological record,” Harmand said at the meeting.

Hominin fossils and cut-marked animal bones have not been found at the site, so the team cannot yet say who made the tools or how they were used. But their discovery may deliver a fatal blow to the already fragile idea that complex toolmaking began with Homo. Harmand suggests that earlier species, such as Kenyanthropus platyops, bones of which have been found on the western shore of Lake Turkana, and A. afarensis, may have made tools by building on the cruder abilities seen in apes and monkeys. The Lomekwi tools were also made in a forest environment, further calling into question the idea that open landscapes catalysed tool use, said Harmand.

Alemseged sees the Lomekwi tools as vindication for his team’s controversial find of cut-marked bones. Before Harmand’s presentation, Alemseged’s colleague Jessica Thompson, an archaeologist at Emory University in Atlanta, Georgia, presented an analysis of other animal bones from Dikika. None contained similar patterns to those reported in 2010, suggesting that the marks were made by something other than wear and tear — probably by tools.

The Lomekwi talk left David Braun, an archaeologist at George Washington University in Washington DC, itching for further details. He says that the tools look authentic, as does the date that Harmand and her team assert. The identity of their makers has aroused his curiosity: “What the hell do these things look like if they can use 15-kilogram tools?”

But he is most interested in what the Lomekwi tools meant for their creators. Did they offer an advantage over the other hominins that were around at the time, or was toolmaking more common 3 million to 4 million years ago than existing evidence suggests? “They’re a game-changer,” he adds, “no matter what.”

Note : The above story is based on materials provided by Nature.

Ocean currents impact methane consumption

Diagram showing the divisions of the world’s oceans. Credit: Chris huh

Large amounts of methane – whether as free gas or as solid gas hydrates – can be found in the sea floor along the ocean shores. When the hydrates dissolve, or when the gas finds pathways in the sea floor to ascend, the methane can be released into the water and rise to the surface. Once emitted into the atmosphere, it acts as a potent greenhouse gas, about twenty times stronger than carbon dioxide. Fortunately, marine bacteria exist that consume part of the methane before it reaches the water surface. Geomicrobiologists and oceanographers from Switzerland, Germany, Great Britain and the U.S. were able to show in an interdisciplinary study that ocean currents can have a strong impact on this bacterial methane removal. Nature Geoscience published the study today.
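The “twenty times stronger” comparison can be made concrete with a one-line conversion. A minimal sketch using only the factor quoted above (climate assessments report different potency values depending on the time horizon considered):

```python
# Convert a methane release into its CO2-equivalent warming impact,
# using the factor of roughly 20 quoted in the article above.
METHANE_POTENCY = 20  # warming strength relative to carbon dioxide

def co2_equivalent(methane_tonnes):
    """Return the CO2 mass (tonnes) with equivalent warming impact."""
    return methane_tonnes * METHANE_POTENCY

print(co2_equivalent(5))  # 5 tonnes of methane warm like 100 tonnes of CO2
```

This is why even modest leakage past the bacterial “methane filter” described below matters: each tonne escaping to the atmosphere counts many times over in CO2 terms.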

The data was collected during an expedition in the summer of 2012 aboard the research vessel MARIA S. MERIAN. At that time, the international research team was studying the methane seeps off the west coast of the Norwegian Svalbard archipelago. “Already then, we were able to see that the level of activity of the methane consuming bacteria changed drastically over very short time spans, while at the same time many oceanographic parameters such as water temperature and salinity also changed”, explains Lea Steinle, first author of the study and PhD student at the University of Basel and the GEOMAR Helmholtz Centre for Ocean Research Kiel. For her PhD thesis, Steinle studies where and how much methane is consumed by bacteria in the ocean water column.

In order to test whether the fluctuations measured during the four weeks of the expedition were only random observations or based on typical and recurring processes, oceanographers at GEOMAR later took a closer look at the region with a high-resolution ocean model. “We were able to see that the observed fluctuations of the oceanographic data and the activity level of the bacteria can be traced back to recurring shifts in the West Spitsbergen Current”, says Prof. Dr. Arne Biastoch from GEOMAR. The West Spitsbergen Current is a relatively warm, salty current that carries water from the Norwegian Sea to the Arctic Ocean. “It mostly runs very close to the coast. Shifts in the current strength are responsible for the meandering of the current. Then, in a matter of a few days, the current moves miles away from the coast”, explains Professor Biastoch further.

Whether the current runs directly over the methane seeps near the coast or farther out over the open sea impacts the methane filtration. “We were able to show that strength and variability of ocean currents control the prevalence of methanotrophic bacteria”, says Lea Steinle, “therefore, large bacteria populations cannot develop in a strong current, which consequently leads to less methane consumption.”

In order to verify whether these results are only valid for Spitsbergen or are of global importance, the researchers studied, in a second, global ocean model, how ocean currents vary in other regions of the world with methane seeps. “We saw that strong and fluctuating currents are often found above methane seeps”, says Dr. Helge Niemann, biogeochemist at the University of Basel and one of the initiators of the study. His colleague Prof. Dr. Tina Treude, geomicrobiologist at the University of California, Los Angeles, adds: “This clearly shows that one-time or short-term measurements often only give us a snapshot of the whole situation.” In the future, fluctuations of bacterial methane consumption caused by oceanographic parameters will have to be considered, both in field measurements and in models.

Reference:
Water column methanotrophy controlled by a rapid oceanographic switch, Nature Geoscience, DOI: 10.1038/ngeo2420

Note : The above story is based on materials provided by Helmholtz Association of German Research Centres.

Scientific study shines new light on the source of diamonds

A team of specialists from four Australian universities, including The University of Western Australia, has established the exact source of a diamond-bearing rock for the first time.

These rocks, orangeites, are commonly found in South Africa.  However, the new study now reveals that they may be present in much higher abundance worldwide, including in Australia.

While rough on the outside, orangeites contain not only treasured diamonds but also tiny fragments of mantle and crustal rocks.  By using highly sophisticated geochemical and isotopic analytical techniques, the scientists were able to link those fragments to the source of the orangeites, deep in the interior of the planet.

The work was carried out by Associate Professor Marco Fiorentini from UWA’s Centre for Exploration Targeting and the ARC Centre of Excellence for Core to Crust Fluid Systems, and colleagues from a team of Australian universities.

“We found strong evidence that orangeites are sourced from MARID (Mica-Amphibole-Rutile-Ilmenite-Diopside) mantle, which up until recently had only been recognised in South Africa,” Professor Fiorentini said.  “However, ongoing studies suggest that MARID mantle may occur in other continents, including here in Australia.”

The team found that orangeites were formed from lava produced by massive volcanic eruptions several tens of millions of years ago.  “With such an ancient age perhaps diamonds truly are forever,” joked Professor Fiorentini.  “What is certain is that the new study provides key information about the composition of the deep Earth.

“Orangeites are the solidified product of lavas from explosive volcanic eruptions, forming shallow craters and rock-filled fractures (or diatremes) in the Earth’s crust.  Diatremes breach the Earth’s surface and produce a steep inverted cone shape, where diamonds are usually found.”

The team presented its findings on the composition of the source that generated orangeites in a paper published online today in Nature Communications.

Note : The above story is based on materials provided by University of Western Australia.

Oldest fossils controversy resolved

Elemental map of a cross section through a pseudofossil. It can be seen that the artefact consists of a complex stack of plate like aluminium-rich clay minerals (green, stacked from left to right).  Some of these are coated with later generation of carbon (yellow) and iron (red) giving the false impression of cellular compartments. Credit: University of Western Australia

New analysis of world-famous 3.46 billion-year-old rocks by researchers from The University of Western Australia is set to finally resolve a long-running evolutionary controversy.

The new research, published this week in Proceedings of the National Academy of Sciences USA, shows that structures once thought to be Earth’s oldest microfossils do not compare with younger fossil candidates but have, instead, the character of peculiarly shaped minerals.

In 1993, US scientist Bill Schopf described tiny (c. 0.5-20 micrometres wide), carbon-rich filaments within the 3.46 billion-year-old Apex chert from the Pilbara region of Western Australia, which he likened to certain forms of bacteria, including cyanobacteria.  (Chert is fine-grained, silica-rich sedimentary rock.)

These ‘Apex chert microfossils’ soon became enshrined in textbooks, museum displays, popular science books and online reference guides as the earliest evidence for life on Earth.  In 1996, these structures were even used to test and help refute the case for ‘microfossils’ in the Martian meteorite ALH 84001.

Even so, their curious colour and complexity gave rise to some early questions.  Gravest doubts emerged in 2002, when a team led by Martin Brasier of Oxford University (co-author of this current study) revealed that the host rock was not part of a simple sedimentary unit but rather came from a complex, high-temperature hydrothermal vein, with evidence for multiple episodes of subsurface fluid flow over a long time.  His team advanced an alternative hypothesis, stating that these curious structures were not true microfossils but pseudofossils formed by the redistribution of carbon around mineral grains during these hydrothermal events.

Although other research teams have since supported Brasier’s hydrothermal interpretation, the ‘Apex microfossil’ debate has remained hard to resolve because scientific instrumentation has only recently reached the level of resolution needed to map both the chemical composition and the morphology of these ‘microfossils’ at the sub-micrometre scale.

Now scientists based in UWA’s Centre for Microscopy, Characterisation and Analysis, in collaboration with the late Professor Brasier, have come up with new high-spatial-resolution data that clearly demonstrate that the ‘Apex chert microfossils’ comprise stacks of plate-like clay minerals arranged into branched and tapered worm-like chains.  Carbon was then adsorbed onto the edges of these minerals during the circulation of hydrothermal fluids, giving a false impression of carbon-rich cell-like walls.

UWA researchers Dr David Wacey and Professor Martin Saunders used transmission electron microscopy to examine ultrathin slices of ‘microfossil’ candidates, to build up nanoscale maps of their size, shape, mineral chemistry and distribution of carbon.

Dr Wacey said it soon became clear that the distribution of carbon was unlike anything seen in authentic microfossils. “A false appearance of cellular compartments is given by multiple plates of clay minerals having a chemistry entirely compatible with a high temperature hydrothermal setting,” he said.

“We studied a range of authentic microfossils using the same transmission electron microscopy technique and in all cases these reveal coherent, rounded envelopes of carbon having dimensions consistent with their origin from cell walls and sheaths.  At high spatial resolution, the Apex ‘microfossils’ lack all evidence for coherent, rounded walls.  Instead, they have a complex, incoherent spiky morphology, evidently formed by filaments of clay crystals coated with iron and carbon.”

Before his death Professor Brasier said: “This research should, at long last, provide a closing chapter for the ‘Apex microfossil’ debate.  Such discussions have encouraged us to refine both the questions and techniques needed to search for life remote in time and space, including signals from Mars or beyond.  It is hoped that textbooks and websites will now focus upon recent and more robust discoveries of microfossils of a similar age from Western Australia, also examined by us in the same article.”

Note : The above story is based on materials provided by University of Western Australia.
