The last glacial maximum was a time when Earth’s far northern and far southern latitudes were largely covered in ice sheets and sea levels were low. Over much of the planet, glaciers were at their greatest extent roughly 20,000 years ago. But according to a study headed by University of Pennsylvania geologist Jane Willenbring, that wasn’t true in at least one part of southern Europe. Due to local effects of temperature and precipitation, the local glacial maximum occurred considerably earlier, around 26,000 years ago.
The finding sheds new light on how regional climate has varied over time, providing information that could lead to more-accurate global climate models, which predict what changes Earth will experience in the future.
Willenbring, an assistant professor in Penn’s Department of Earth and Environmental Science in the School of Arts and Sciences, teamed with researchers from Spain, the United Kingdom, China and the United States to pursue this study of the ancient glaciers of southern Europe.
“We wanted to unravel why and when glaciers grow and shrink,” Willenbring said.
At the study site in central Spain, it is relatively straightforward to discern the size of ancient glaciers, because the ice carried boulders and dropped them at its margin. A ring of boulders thus marks the edge of the former glacier.
It is not as easy to determine what caused the glacier to grow, however. Glaciers need both moisture and cold temperatures to expand. Studying the boulders that rim the ancient glaciers alone cannot distinguish these contributions. Caves, however, provide a way to differentiate the two factors. Stalagmites and stalactites — the stony projections that grow from the cave floor and ceiling, respectively — carry a record of precipitation because they grow as a result of dripping water.
“If you add the cave data to the data from the glaciers, it gives you a neat way of figuring out whether it was cold temperatures or higher precipitation that drove the glacier growth at the time,” Willenbring said.
The researchers conducted the study in three of Spain’s mountain ranges: the Béjar, Gredos and Guadarrama. The nearby Eagle Cave allowed them to obtain indirect precipitation data.
To ascertain the age of the boulders strewn by the glaciers, and thus pin down when the glaciers were at their greatest extent, Willenbring and colleagues used a technique known as cosmogenic nuclide exposure dating, which measures nuclides that accumulate in rock surfaces under the bombardment of cosmic rays, themselves largely the residue of supernova explosions. They also used standard radiometric techniques to date stalagmites from Eagle Cave, which gave them information about fluctuations in precipitation during the time the glaciers covered the land.
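The story does not spell out the age calculation, but a minimal sketch of the standard exposure-age relation for a radioactive cosmogenic nuclide such as beryllium-10 (assuming constant production, no erosion and no inherited nuclides; the numbers below are illustrative, not the study’s data) looks like this:

```python
import math

def exposure_age(conc, prod_rate, half_life):
    """Exposure age (yr) from nuclide concentration (atoms/g),
    local production rate (atoms/g/yr) and half-life (yr).
    Assumes no erosion, no inheritance, constant production."""
    lam = math.log(2) / half_life          # decay constant (1/yr)
    return -math.log(1 - conc * lam / prod_rate) / lam

# Illustrative (not measured) values for 10Be in quartz:
print(exposure_age(conc=1.3e5, prod_rate=5.0, half_life=1.39e6))
# ~26,200 yr -- the order of magnitude of the ages reported here
```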
“Previously, people believed the last glacial maximum was somewhere in the range of 19,000-23,000 years ago,” Willenbring said. “Our chronology indicates that’s more in the range of 25,000-29,000 years ago in Spain.”
The geologists found that, although temperatures were cool in the range of 19,000-23,000 years ago, conditions were also relatively dry, so the glaciers did not regain the size they had attained several thousand years earlier, when rain and snowfall totals were higher. They reported their findings in the journal Scientific Reports.
Given the revised timeline in this region, Willenbring and colleagues determined that the increased precipitation resulted from changes in the intensity of the sun’s radiation reaching Earth, which are driven by variations in the planet’s orbit and axial tilt. Such changes can alter patterns of wind, temperature and storms.
“That probably means there was a southward shift of the North Atlantic Polar Front, which caused storm tracks to move south, too,” Willenbring said. “Also, at this time there was a nice warm source of precipitation, unlike before and after when the ocean was colder.”
Willenbring noted that the new date for the glacial maximum in the Mediterranean region, which is several thousand years earlier than the date the maximum was reached in central Europe, will help provide more context for creating accurate global climate models.
“It’s important for global climate models to be able to test under what conditions precipitation changes and when sources for that precipitation change,” she said. “That’s particularly true in some of these arid regions, like the American Southwest and the Mediterranean.”
When glaciers were peaking in the Mediterranean around 26,000 years ago, the American Southwest was experiencing similar conditions. Areas that are now desert were moist. Large lakes abounded, including Lake Bonneville, which covered much of modern-day Utah. The state’s Great Salt Lake is what remains.
“Lakes in this area were really high for 5,000-10,000 years, and the cause for that has always been a mystery,” Willenbring said. “By looking at what was happening in the Mediterranean, we might eventually be able to say something about the conditions that led to these lakes in the Southwest, too.”
This research was supported by the Ministerio de Ciencia e Innovación and the Junta de Comunidades de Castilla-La Mancha.
Note : The above story is based on materials provided by University of Pennsylvania.
A GeoTess model comprises 2D triangular tessellations of a unit sphere, with 1D radial arrays of nodes associated with each vertex of the 2D tessellations. Variable spatial resolution in both geographic and radial dimensions is supported. Users have considerable flexibility in how to define the data stored on the grid.
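As a rough illustration of that layout (not the actual GeoTess API, whose class and method names differ), a model of this kind can be sketched as a triangulated sphere whose vertices each carry a radial profile of values:

```python
from dataclasses import dataclass, field

@dataclass
class VertexProfile:
    """Radial array of nodes beneath one surface vertex."""
    radii_km: list[float]      # node radii, center -> surface
    values: list[float]        # one attribute (e.g., Vp) per node

@dataclass
class TessModel:
    """Toy analogue of a GeoTess-style model: a triangulated unit
    sphere whose vertices each carry a 1-D radial node array."""
    vertices: list[tuple[float, float, float]]   # unit vectors
    triangles: list[tuple[int, int, int]]        # vertex indices
    profiles: dict[int, VertexProfile] = field(default_factory=dict)

    def value_at(self, vertex: int, radius_km: float) -> float:
        """Linear interpolation along the radial profile."""
        p = self.profiles[vertex]
        r, v = p.radii_km, p.values
        for i in range(len(r) - 1):
            if r[i] <= radius_km <= r[i + 1]:
                f = (radius_km - r[i]) / (r[i + 1] - r[i])
                return v[i] + f * (v[i + 1] - v[i])
        raise ValueError("radius outside profile")

# One vertex with a hypothetical two-node mantle profile:
m = TessModel(vertices=[(0.0, 0.0, 1.0)], triangles=[],
              profiles={0: VertexProfile([3480.0, 6371.0], [13.7, 5.8])})
print(m.value_at(0, 5000.0))   # value interpolated at 5000 km radius
```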
The GeoTess library is provided in Java and C++, with a C interface to the C++ library. The software has been tested on Linux, Mac, Sun and Windows computers. Source code and pre-compiled binaries are provided. A FORTRAN interface to the C++ library is being developed.
The Linux, SunOS and MacOSX downloads contain the same files except the binaries are different for each package. Makefiles are included so that any downloaded package can be recompiled for another platform. For Windows, the code is delivered in a Visual Studio 2008 project.
ALBUQUERQUE, N.M. – During the Cold War, U.S. and international monitoring agencies could spot nuclear tests and focused on measuring their sizes. Today, they’re looking around the globe to pinpoint much smaller explosives tests.
Under the sponsorship of the National Nuclear Security Administration’s Office of Defense Nuclear Nonproliferation R&D, Sandia National Laboratories and Los Alamos National Laboratory have partnered to develop a 3-D model of the Earth’s mantle and crust called SALSA3D, or Sandia-Los Alamos 3D. The purpose of this model is to help the U.S. Air Force and the international Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) in Vienna, Austria, more accurately locate all types of explosions.
The model uses a scalable triangular tessellation and seismic tomography to map the Earth’s “compressional wave seismic velocity,” a property of the rocks and other materials inside the Earth that indicates how quickly compressional waves travel through them and is one way to accurately locate seismic events, Sandia geophysicist Sandy Ballard said. Compressional waves — measured first after seismic events — move the particles in rocks and other materials minute distances backward and forward between the location of the event and the station detecting it.
SALSA3D also reduces the uncertainty in the model’s predictions, an important feature for decision-makers who must take action when suspicious activity is detected, he added.
“When you have an earthquake or nuclear explosion, not only do you need to know where it happened, but also how well you know that. That’s a difficult problem for these big 3-D models. It’s mainly a computational problem,” Ballard said. “The math is not so tough, just getting it done is hard, and we’ve accomplished that.”
A Sandia team has been writing and refining code for the model since 2007 and is now demonstrating SALSA3D is more accurate than current models.
In recent tests, SALSA3D was able to predict the source of seismic events over a geographical area that was 26 percent smaller than the traditional one-dimensional model and 9 percent smaller than a recently developed Regional Seismic Travel Time (RSTT) model used with the one-dimensional model.
Sandia recently released SALSA3D’s framework — the triangular tessellated grid on which the model is built — to other Earth scientists, seismologists and the public. By standardizing the framework, the seismological research community can more easily share models of the Earth’s structure and global monitoring agencies can better test different models. Both activities are hampered by the plethora of models available today, Ballard said. (*)
“GeoTess makes models compatible and standardizes everything,” he said. “This would really facilitate sharing of different models, if everyone agreed on it.”
When an explosion goes off, the energy travels through the Earth as waves that are picked up by seismometers at U.S. and international ground monitoring stations associated with nuclear explosion monitoring organizations worldwide. Scientists use these signals to determine the location.
They first predict the time taken for the waves to travel from their source through the Earth to each station. To calculate that, they have to know the seismic velocity of the Earth’s materials from the crust to the inner core, Ballard said.
“If you have material that has very high seismic velocity, the waves travel very quickly, but the energy travels less quickly through other kinds of materials, so it takes the signals longer to travel from the source to the receiver,” he says.
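As a back-of-envelope illustration of that relationship (a vertical ray through a simple 1-D layered Earth; real locators trace curved ray paths through 3-D structure, and the layer values below are illustrative, not SALSA3D’s):

```python
# Travel time of a vertical ray through a 1-D layered Earth:
# t = sum(thickness_i / velocity_i). Slower material -> longer time.
layers = [            # (thickness km, P-wave velocity km/s), illustrative
    (35.0, 6.0),      # crust: slow and heterogeneous
    (630.0, 9.0),     # upper mantle
    (2226.0, 12.5),   # lower mantle: fast, relatively homogeneous
]
t = sum(thickness / v for thickness, v in layers)
print(f"one-way travel time: {t:.1f} s")   # ~253.9 s
```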
For the past 100 years, seismologists have predicted the travel time of seismic energy from source to receiver using one-dimensional models. These models, which are still widely used today, account only for radial variations in seismic velocity and ignore variations in geographic directions. They yield seismic event locations that are reasonably accurate, but not nearly as precise as locations calculated with high fidelity 3-D models.
Modern 3-D models of the Earth, like SALSA3D, account for distortions of the seismic wavefronts caused by minor lateral differences in the properties of rocks and other materials.
For example, waves are distorted when they move through a geological feature called a subduction zone, such as the one beneath the west coast of South America where one tectonic plate under the Pacific Ocean is diving underneath the Andes Mountains. This happens at about the rate at which fingernails grow, but, geologically speaking, that’s fast, Ballard said.
One-dimensional models, like the widely used ak135 developed in the 1990s, are good at predicting the travel time of waves when the distance from the source to the receiver is large because these waves spend most of their time traveling through the deepest, most homogenous parts of the Earth. They don’t do so well at predicting travel time to nearby events where the waves spend most of their time in the Earth’s crust or the shallowest parts of the mantle, both of which contain a larger variety of materials than the lower mantle and the Earth’s core.
RSTT, a previous model developed jointly by Sandia, Los Alamos and Lawrence Livermore national laboratories, tried to solve that problem and works best at ranges of about 60-1,200 miles (100-2,000 kilometers).
Still, “the biggest errors we get are close to the surface of the Earth. That’s where the most variability in materials is,” Ballard said.
Seismic tomography gives SALSA3D accuracy
Today, Earth scientists are mapping three dimensions: the radius, latitude and longitude.
Anyone who’s studied a globe or world atlas knows that the traditional grid of longitudinal and latitudinal lines works all right the closer you are to the equator, but at the poles, the lines are too close together. For nuclear explosion monitoring, Earth models must accurately characterize the polar regions even though they are remote, because seismic waves travel under them, Ballard said.
Triangular tessellation solves that with nodes, or intersections of the triangles, that can be accurately modeled even at the poles. The triangles can be smaller where more detail is needed and larger in areas that require less detail, like the oceans. Plus the model extends into the Earth like columns of stacked pieces of pie without the rounded crust edges.
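A common way to build such a grid (a sketch of the standard icosahedron-subdivision construction, not GeoTess’s actual code) starts from an icosahedron and repeatedly splits each triangle into four, projecting new vertices back onto the sphere, so that node spacing stays near-uniform from equator to pole:

```python
import math

def normalize(p):
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def icosahedron():
    """Vertices and faces of a unit icosahedron."""
    t = (1 + math.sqrt(5)) / 2
    verts = [normalize(v) for v in [
        (-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
        (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
        (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return verts, faces

def subdivide(verts, faces):
    """Split each triangle into four, pushing midpoints to the sphere."""
    verts = list(verts)
    cache = {}
    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in cache:
            m = normalize(tuple((verts[a][i] + verts[b][i]) / 2
                                for i in range(3)))
            cache[key] = len(verts)
            verts.append(m)
        return cache[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, new_faces

v, f = icosahedron()
for _ in range(3):        # each pass quadruples the triangle count
    v, f = subdivide(v, f)
print(len(v), "vertices,", len(f), "triangles")   # 642 vertices, 1280 triangles
```

In a production grid, subdivision would be applied selectively, leaving triangles coarse over the oceans and refining them where detail is needed, as the article describes.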
The way Sandia calculates the seismic velocities uses the same math that is used to detect a tumor in an MRI, except on a global, rather than a human, scale.
Sandia uses historical data from 118,000 earthquakes and 13,000 current and former monitoring stations worldwide, compiled in Los Alamos National Laboratory’s Ground Truth catalog.
“We apply a process called seismic tomography where we take millions of observed travel times and invert them for the seismic velocities that would create that data set. It’s mathematically similar to doing linear regression, but on steroids,” Ballard said. Linear regression is a simple mathematical way to model the relationship between a known variable and one or more unknown variables. Because the Sandia team models hundreds of thousands of unknown variables, they apply a mathematical method called least squares to minimize the discrepancies between the data from previous seismic events and the predictions.
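A miniature version of that inversion (a damped least-squares fit on synthetic rays; SALSA3D’s real system is vastly larger and uses more sophisticated regularization) can be sketched as:

```python
import numpy as np

# Toy linearized tomography: each row of G holds the path length of
# one ray in each model cell; d holds observed travel-time residuals.
# Solving the damped least-squares system recovers slowness updates.
rng = np.random.default_rng(0)
n_rays, n_cells = 200, 50
G = rng.random((n_rays, n_cells))                # ray path lengths (km)
true_ds = rng.normal(0, 1e-3, n_cells)           # true slowness perturbation (s/km)
d = G @ true_ds + rng.normal(0, 1e-3, n_rays)    # residuals + noise

lam = 0.1                                        # damping weight
A = np.vstack([G, lam * np.eye(n_cells)])        # regularized system
b = np.concatenate([d, np.zeros(n_cells)])
est, *_ = np.linalg.lstsq(A, b, rcond=None)      # least-squares solution
print("rms error:", np.sqrt(np.mean((est - true_ds) ** 2)))
```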
With 10 million data points, Sandia uses a distributed computer network with about 400 core processors to characterize the seismic velocity at every node.
Monitoring agencies could use SALSA3D to precompute the travel time from each station in their network to every point on Earth. When a new seismic event must be located in real time, each source-to-receiver travel time can then be looked up in about a millisecond, and the energy’s source pinpointed in about a second, he said.
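A toy sketch of how precomputed tables enable such fast location (the table here is random, purely a stand-in; demeaned residuals handle the unknown origin time):

```python
import numpy as np

# tt[s, g]: precomputed travel time from station s to grid point g.
rng = np.random.default_rng(1)
n_sta, n_grid = 8, 10_000
tt = rng.uniform(50, 600, (n_sta, n_grid))       # stand-in for a real table

truth = 4321                                     # hidden source grid point
obs = tt[:, truth] + rng.normal(0, 0.5, n_sta)   # noisy arrivals (origin t = 0)

# Origin time is unknown, so compare demeaned times: at the right grid
# point, obs - tt[:, g] is (nearly) the same constant at every station.
resid = obs[:, None] - tt                        # (n_sta, n_grid)
misfit = resid.var(axis=0)                       # variance across stations
print("best grid point:", int(misfit.argmin()))  # recovers 4321
```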
Uncertainty modeling a SALSA3D feature
But no model is perfect, so Sandia has developed a way to measure the uncertainty in each prediction SALSA3D makes, based on uncertainty in the velocity at each node and how that uncertainty affects the travel time prediction of each wave from a seismic event to each monitoring station.
For users at monitoring stations, SALSA3D estimates the most likely location of a seismic event and the amount of uncertainty in that answer, helping to inform their decisions.
International test ban treaties require that on-site inspections can only occur within a 1,000-square-kilometer (385-square-mile) area surrounding a suspected nuclear test site. Today, 3-D Earth models like SALSA3D are helping to meet and sometimes significantly exceed this threshold in most parts of the world.
“It’s extremely difficult to do because the problem is so large,” Ballard said. “But we’ve got to know it within 1,000 square kilometers or they might search in the wrong place.”
* Seismologists and researchers worldwide can now download GeoTess, which provides a common model parameterization for multidimensional Earth models and a software support system that addresses the construction, population, storage and interrogation of data stored in the model. GeoTess is not specific to any particular data, so users have considerable flexibility in how they store information in the model. The free package, including source code, is being released under the very liberal BSD Open Source License. The code is available in Java and C++, with interfaces to the C++ version written in C and Fortran90. GeoTess has been tested on multiple platforms, including Linux, SunOS, MacOSX and Windows.
Note : The above story is based on materials provided by Sandia National Laboratories
Since the 1960s, it has been known that groundwater resources in certain provinces of China are contaminated with arsenic. Estimates of the numbers of affected people have risen year by year. In the most recent survey – conducted by the Chinese Ministry of Health between 2001 and 2005 – more than 20,000 (5%) of the 445,000 wells tested showed arsenic concentrations higher than 50 µg/L. According to official estimates, almost 6 million people consume drinking water with an arsenic content of more than 50 µg/L and almost 15 million are exposed to concentrations exceeding 10 µg/L (the guideline value recommended by the WHO).
Given the sheer size of China and the time and expense involved in testing for arsenic contamination, several more decades would probably be required to screen all of the millions of groundwater wells. Accordingly, a group of researchers from Eawag and the China Medical University in Shenyang developed a statistical risk model making use of existing data on geology, soil characteristics and topographic features. This model was calibrated using available arsenic measurements. The predictions of unsafe or safe areas showed a high level of agreement, both for known high-risk areas and for areas where elevated arsenic levels had been ruled out by sampling campaigns.
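The story does not specify the model’s exact form, but a common choice for this kind of calibrated risk mapping is logistic regression; a minimal sketch on hypothetical covariates (synthetic data, not the study’s) might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins for the kinds of spatial covariates mentioned
# (geology, soil characteristics, topography), plus synthetic labels.
rng = np.random.default_rng(42)
n = 2_000
X = np.column_stack([
    rng.random(n),          # e.g., proportion of young sediments
    rng.normal(0, 1, n),    # e.g., soil salinity index
    rng.random(n),          # e.g., topographic wetness index
])
logit = -2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + 2.0 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # 1 = arsenic > 10 ug/L

model = LogisticRegression().fit(X, y)           # calibrate on tested wells
risk = model.predict_proba(X)[:, 1]              # P(As > 10 ug/L) per cell
print("cells flagged high-risk:", int((risk > 0.5).sum()))
```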
In addition, large areas have now been identified as potentially at risk, such as the basins of the Tarim (Xinjiang), Ejina (Inner Mongolia) and Heihe (Gansu), or the North China Plain (Henan and Shandong). Arsenic concentrations above 10 µg/L are predicted for a total area of 580,000 km2. When these results were combined with the latest available population data, it was found that almost 20 million people across China live in high-risk areas.
Geochemist Annette Johnson concedes: “This figure may be an overestimate, as we lack reliable information on the number of people with treated water supplies.” But in the long term, she adds, China will remain dependent on groundwater as a source of drinking water, particularly in the arid provinces. The risk model shows where conventional groundwater quality monitoring efforts are best focussed: “Our method permits more targeted sampling campaigns and saves time in identifying populations at risk. The Chinese authorities are adopting our maps in the national monitoring programme.”
Johnson is convinced that the model could also be used in other countries where groundwater is known or suspected to be contaminated with arsenic – for example, in Africa or in central Asia, where risk assessments for arsenic contamination have not yet been performed.
Box: Arsenic
Arsenic is one of the most common inorganic contaminants found in drinking water worldwide. This metalloid occurs as a natural component of sediments, with small quantities being dissolved in groundwater as a result of weathering. The inorganic salts of arsenic are tasteless and odourless, but highly toxic to humans. If ingested over long periods, even low concentrations can cause damage to health, including hyperpigmentation of the skin, hyperkeratosis on the palms and soles, disorders of liver, cardiovascular and kidney function, and various types of cancer.
Problems arise from the fact that firstly, arsenic concentrations can vary widely at the local level and, secondly, people in many areas are completely unaware of the risk because their groundwater wells have never been screened for arsenic. Concentrations below 10 µg/L are considered safe. This concentration is therefore recommended by the World Health Organization as a guideline value for arsenic in drinking water. In China, the standard guideline has just recently changed from 50 µg/L to 10 µg/L. In many other studies, e.g. at various sites in Inner Mongolia, arsenic concentrations of more than 100 µg/L and up to 1500 µg/L have been measured.
Note : The above story is based on materials provided by the Swiss Federal Institute of Aquatic Science and Technology (Eawag).
New Ohio University research suggests that the rise of an early phase of the Appalachian Mountains and cooling oceans allowed invasive species to upset the North American ecosystem 450 million years ago.
The study, published recently in the journal PLOS ONE, took a closer look at a dramatic ecological shift captured in the fossil record during the Ordovician period. Ohio University scientists argue that major geological developments triggered evolutionary changes in the ancient seas, which were dominated by organisms such as brachiopods, corals, trilobites and crinoids.
During this period, North America was part of an ancient continent called Laurentia that sat near the equator and had a tropical climate. Shifting of Earth’s tectonic plates gave rise to the Taconic Mountains, which were forerunners of the Appalachian Mountains. The geological shift left a depression behind the mountain range, flooding the area with cool water from the surrounding deep ocean.
Scientists knew that there was a massive influx of invasive species into this ocean basin during this time period, but didn’t know where the invaders came from or how they got a foothold in the ecosystem, said Alycia Stigall, an Ohio University associate professor of geological sciences who co-authored the paper with former Ohio University graduate student David Wright, now a doctoral student at Ohio State University.
“The rocks of this time record a major oceanographic shift, pulse of mountain building and a change in evolutionary dynamics coincident with each other,” Stigall said. “We are interested in examining the interactions between these factors.”
Using the fossils of 53 species of brachiopods that dominated the Laurentian ecosystem, Stigall and Wright created several phylogenies, or trees of reconstructed evolutionary relationships, to examine how individual speciation events occurred.
The invaders that proliferated during this time period were species within the groups of animals that inhabited Laurentia, Stigall explained. Within the brachiopods, corals and cephalopods, for example, some species are invasive and some are not.
As the geological changes slowly played out over the course of a million years, two patterns of survival emerged, the scientists report.
During the early stage of mountain building and ocean cooling, the native organisms became geographically divided, slowly evolving into different species suited for these niche habitats. This process, called vicariance, is the typical method by which new species originate on Earth, Stigall said.
As the geological changes progressed, however, species from other regions of the continent began to directly invade habitats, a process called dispersal. Although biodiversity may initially increase, this process decreases biodiversity in the long term, Stigall explained, because it allows a few aggressive species to populate many sites quickly, dominating those ecosystems.
This is the second time that Stigall and her team have found this pattern of speciation in the geological record. A study published in 2010 on the invasive species that prompted a mass extinction during the Devonian period about 375 million years ago also discovered a shift from vicariance to dispersal that contributed to a decline in biodiversity, Stigall noted.
It’s a pattern that’s happening during our modern biodiversity crisis as well, she said.
“Only one out of 10 invaders truly become invasive species. Understanding the process can help determine where to put conservation resources,” she said.
Note : The above story is based on materials provided by Ohio University.
Since records began in 1776, the people of Youngstown, Ohio, had never experienced an earthquake. However, from January 2011, 109 tremors were recorded, and new research in the Journal of Geophysical Research: Solid Earth reveals how this may be the result of shale fracking.
In December 2010, Northstar 1, a well built to pump wastewater produced by fracking in the neighboring state of Pennsylvania, came online. In the year that followed, seismometers in and around Youngstown recorded 109 earthquakes, the strongest being a magnitude 3.9 earthquake on December 31, 2011.
The study authors analyzed the Youngstown earthquakes, finding that their onset, cessation, and even temporary dips in activity were all tied to the activity at the Northstar 1 well. The first earthquake recorded in the city occurred 13 days after pumping began, and the tremors ceased shortly after the Ohio Department of Natural Resources shut down the well in December 2011.
Dips in earthquake activity correlated with Memorial Day, the Fourth of July, Labor Day, and Thanksgiving, as well as other periods when the injection at the well was temporarily stopped.
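A minimal illustration of the kind of injection-seismicity correlation described here (entirely synthetic weekly series with made-up magnitudes, not the study’s data):

```python
import numpy as np

# Quake counts track injection; holiday pauses show up as quiet weeks.
rng = np.random.default_rng(7)
weeks = 52
injection = np.full(weeks, 40_000.0)      # barrels/week (hypothetical scale)
injection[[21, 26, 35, 47]] = 0.0         # Memorial Day, July 4, Labor Day,
                                          # Thanksgiving shutdowns
counts = rng.poisson(injection / 20_000)  # quakes/week rises with injection

r = np.corrcoef(injection, counts)[0, 1]
print(f"injection vs. seismicity correlation: r = {r:.2f}")
```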
“In recent years, waste fluid generated during shale gas production – hydraulic fracturing – has been increasing steadily in the United States. Earthquakes were triggered by this waste fluid injection at a deep well in Youngstown, Ohio, during Jan. 2011 – Feb. 2012. We found that the onset and cessation of the earthquakes were tied to activity at the Northstar 1 deep injection well. The earthquakes were centered in subsurface faults near the injection well. These shocks were likely due to the increase in pressure from the deep wastewater injection, which caused the existing fault to slip,” said Dr. Won-Young Kim. “Throughout 2011, the earthquakes migrated from east to west down the length of the fault away from the well—indicative of the earthquakes being caused by an expanding pressure front.”
Note : The above story is based on materials provided by Wiley Inc.
Reservoirs of silica-rich magma — the kind that causes the most explosive volcanic eruptions — can persist in Earth’s upper crust for hundreds of thousands of years without triggering an eruption, according to new University of Washington modeling research.
That means an area known to have experienced a massive volcanic eruption in the past, such as Yellowstone National Park, could have a large pool of magma festering beneath it and still not be close to going off as it did 600,000 years ago.
“You might expect to see a stewing magma chamber for a long period of time and it doesn’t necessarily mean an eruption is imminent,” said Sarah Gelman, a UW doctoral student in Earth and space sciences.
Recent research models have suggested that reservoirs of silica-rich magma, or molten rock, form and survive on geologically short time scales — in the tens of thousands of years — in the Earth’s cold upper crust before they solidify. They also suggested that the magma had to be injected into the Earth’s crust at a high rate to reach a large enough volume and pressure to cause an eruption.
But Gelman and her collaborators took the models further, incorporating changes in the crystallization behavior of silica-rich magma in the upper crust and temperature-dependent heat conductivity. They found that the magma could accumulate more slowly and remain molten for a much longer period than the models previously suggested.
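The zeroth-order physics behind such longevity estimates is the conductive cooling timescale, tau ~ L^2 / kappa, for a body of half-thickness L and thermal diffusivity kappa; a quick sketch with typical rock values (the published model adds crystallization and temperature-dependent conductivity, which lengthen these times further):

```python
# Conductive cooling timescale tau ~ L^2 / kappa.
# kappa ~ 1e-6 m^2/s is a typical thermal diffusivity for rock;
# this is only a scaling argument, not the authors' full model.
kappa = 1e-6                      # m^2/s
year = 3.15e7                     # s
for L_km in (0.5, 1, 2, 5):
    tau = (L_km * 1e3) ** 2 / kappa / year
    print(f"L = {L_km:>3} km -> tau ~ {tau:,.0f} yr")
# Kilometer-scale bodies give tens of thousands to ~a million years.
```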
Gelman is the lead author of a paper explaining the research published in the July edition of Geology. Co-authors are Francisco Gutiérrez, a former UW doctoral student now with Universidad de Chile in Santiago, and Olivier Bachmann, a former UW faculty member now with the Swiss Federal Institute of Technology in Zurich.
There are two different kinds of magma and their relationship to one another is unclear. Plutonic magma freezes in the Earth’s crust and never erupts, but rather becomes a craggy granite formation like those commonly seen in Yosemite National Park. Volcanic magma is associated with eruptions, whether continuous “oozing” types of eruption such as Hawaii’s Kilauea Volcano or more explosive eruptions such as Mount Pinatubo in the Philippines or Mount St. Helens in Washington state.
Some scientists have suggested that plutonic formations are what remain in the crust after major eruptions eject volcanic material. Gelman believes it is possible that magma chambers in the Earth’s crust could consist of a core of partially molten material feeding volcanoes surrounded by more crystalline regions that ultimately turn into plutonic rock. It is also possible the two rock types develop independently, but those questions remain to be answered, she said.
The new work suggests that molten magma reservoirs in the crust can persist for far longer than some scientists believe. Silica content is a way of judging how the magma has been affected by being in the crust, Gelman said. As the magma is forced up a column from lower in the Earth to the crust, it begins to crystallize. Crystals start to drop out as the magma moves higher, leaving the remaining molten rock with higher silica content.
“These time scales are in the hundreds of thousands, even up to a million, years and these chambers can sit there for that long,” she said.
Even if the molten magma begins to solidify before it erupts, that is a long process, she added. As the magma cools, more crystals form, giving the rock a kind of mushy consistency. It is still molten and capable of erupting, but it will behave differently than magma that is much hotter and has fewer crystals.
The implications are significant for volcanic “arcs,” found near subduction zones where one of Earth’s tectonic plates is diving beneath another. Arcs are found in various parts of the world, including the Andes Mountains of South America and the Cascades Range of the Pacific Northwest.
Scientists have developed techniques to detect magma pools beneath these arcs, but they cannot determine how long the reservoirs have been there. Because volcanic magma becomes more silica-rich with time, its explosive potential increases.
“If you see melt in an area, it’s important to know how long that melt has been around to determine whether there is eruptive potential or not,” Gelman said. “If you image it today, does that mean it could not have been there 300,000 years ago? Previous models have said it couldn’t have been. Our model says it could. That doesn’t mean it was there, but it could have been there.”
The work was funded by the National Science Foundation and the National Scientific and Technological Research Commission of Chile.
Note : The above story is based on materials provided by University of Washington.
A study by a postgraduate researcher at the University of Southampton has found that Neanderthals were more culturally complex than previously acknowledged. Two cultural traditions existed among Neanderthals living in what is now northern Europe between 115,000 to 35,000 years ago.
Dr Karen Ruebens, of the Centre for the Archaeology of Human Origins (CAHO) and funded by the Arts and Humanities Research Council (AHRC), examined the design of 1,300 stone tools originating from 80 Neanderthal sites in five European countries: France, Germany, Belgium, Britain and the Netherlands.
Dr Ruebens’ investigations uncovered new evidence that two separate handaxe traditions or designs existed — one in a region now spanning south-western France and Britain — the other in Germany and further to the East. In addition, she found an area covering modern day Belgium and the Netherlands that demonstrates a transition between the two.
She comments: “In Germany and France there appear to be two separate handaxe traditions, with clear boundaries, indicating completely separate, independent developments. The transition zone in Belgium and Northern France indicates contact between the different groups of Neanderthals, which is generally difficult to identify but has been much talked about, especially in relation to later contacts with groups of modern humans. This area can be seen as a melting pot of ideas where mobile groups of Neanderthals, both from the eastern and western tradition, would pass by — influencing each other’s designs and leaving behind a more varied record of bifacial tools.”
The University of Southampton research has revealed Neanderthals in the western region made symmetrical, triangular and heart-shaped handaxes, while during the same time period, in the eastern region, they produced asymmetrically shaped bifacial knives.
Dr Ruebens says: “Distinct ways of making a handaxe were passed on from generation to generation and for long enough to become visible in the archaeological record. This indicates a strong mechanism of social learning within these two groups and says something about the stability and connectivity of the Neanderthal populations.
“Making stone tools was not merely an opportunistic task. A lot of time, effort and tradition were invested and these tools carry a certain amount of socio-cultural information, which does not contribute directly to their function.”
The study’s extensive analysis also shows that other factors which could have influenced handaxe design (such as the availability of raw materials to Neanderthals, the function of their sites, or the repeated reuse and sharpening of tools) did not have an impact in this instance.
Note : The above story is based on materials provided by University of Southampton.
Scientists have discovered a vast plume of iron and other micronutrients more than 1,000 km long billowing from hydrothermal vents in the South Atlantic Ocean. The finding, soon to be published in the journal Nature Geoscience, calls past estimates of iron abundances into question, and may challenge researchers’ assumptions about iron sources in the world’s seas.
“This study and other studies like it are going to force the scientific community to reevaluate how much iron is really being contributed by hydrothermal vents and to increase those estimates, and that has implications for not only iron geochemistry but a number of other disciplines as well,” says Mak Saito, a WHOI associate scientist and lead author of the study.
Saito and his team of collaborators—which includes WHOI researchers and a colleague affiliated with the University of Liverpool (U.K.)—didn’t set out to find iron plumes in the South Atlantic. They set sail aboard the R/V Knorr in 2007 as part of the Cobalt, Iron and Micro-organisms from the Upwelling zone to the Gyre (or CoFeMUG, pronounced “coffee mug”) expedition, which intended to map chemical composition and microbial life along the ship’s route between Brazil and Namibia. As the scientists traveled the route, they sampled the seawater at frequent intervals and multiple depths along the way, and then stored the samples for in-depth analysis back on land.
Their route passed over the Mid-Atlantic Ridge, a band of mountains and valleys running along the Atlantic Ocean floor from the Arctic to the Antarctic where several of the Earth’s major tectonic plates are slowly spreading apart. Hydrothermal vents, or fissures in the Earth’s crust, are found along the ridge, but they haven’t been extensively studied because slow-spreading ridges are thought to be less active than fast-spreading ones. Past studies using helium, which is released from the Earth’s mantle through hydrothermal vents and is routinely used as an indicator of vent activity, have found little coming from mid-Atlantic vents, and researchers have assumed that means the vents spew little iron as well.
So Saito and his colleagues were surprised by what their samples revealed when later studied in the lab. Once filtered and analyzed, some of the seawater showed unexpectedly high levels of iron and manganese. When Abigail Noble, then a WHOI graduate student, and Saito plotted the sites where the iron-rich samples were taken, they realized the samples formed a distinct plume—a cloud of nutrients ranging in depth from 1,500 to 3,500 meters that spanned more than 1,000 km of the South Atlantic Ocean.
“We had never seen anything like it,” Saito says. “We were sort of shocked—there’s this huge bull’s-eye right in the middle of the South Atlantic Ocean. We didn’t quite know what to do with it, because it went contrary to a lot of our expectations.”
The plume’s ratio of iron to helium was 80-fold higher than ratios reported for faster-spreading ridges in the southeastern Pacific Ocean.
The serendipitous discovery casts doubt on the assumption that slow-spreading ridges are iron-poor, and it raises questions about the use of helium as an indicator for iron flux in hydrothermal vents, Saito says.
“We’ve assumed that low helium means low iron, and our study finds that that’s not true,” Saito says. “There’s actually quite a lot of iron coming out of these slow-spreading regions in the Atlantic, where people thought there would be little to none.”
And that has profound implications, because iron is a critical element for ocean life. Iron is known to spur the growth of phytoplankton in many marine habitats, especially those important in the ocean’s carbon cycle, which, in turn, impacts atmospheric carbon dioxide levels and Earth’s climate. Because more than half the world’s seafloor ridges are slow-spreading, the team’s discovery suggests there may be far more iron from these locations than previously estimated.
“We need to understand where iron is in the ocean and where it’s coming from to understand the role of iron in the marine carbon cycle with any confidence,” Saito says.
Saito and his colleagues hope future studies will reveal the exact shape and extent of the plume, and just how much of its iron and other micronutrients persist and rise to the surface. Answering these lingering questions will help researchers truly understand how hydrothermal vents affect the ocean as a whole, Saito says.
The research was supported by the U.S. NSF-Chemical Oceanography program and the Gordon and Betty Moore Foundation (grant GBMF2724).
The Woods Hole Oceanographic Institution is a private, non-profit organization on Cape Cod, Mass., dedicated to marine research, engineering, and higher education. Established in 1930 on a recommendation from the National Academy of Sciences, its primary mission is to understand the ocean and its interaction with the Earth as a whole, and to communicate a basic understanding of the ocean’s role in the changing global environment.
Note : The above story is based on materials provided by Woods Hole Oceanographic Institution
MISSOULA – Findings from a large-scale ice drilling study on the Greenland ice sheet by a team of University of Montana and University of Wyoming researchers may revise the models used to predict how ice sheets move.
The work was published in Science on Aug. 15 in a paper titled “Basal Drainage System Response to Increasing Surface Melt on the Greenland Ice Sheet.”
The bed of the Greenland ice sheet, hidden beneath hundreds to thousands of meters of ice, is one of the most isolated locations on Earth – making it difficult for scientists to understand just how the second largest ice body on the planet functions.
Led by UM glaciologist Joel Harper and with backing from the National Science Foundation, the team of researchers set out to observe and record exactly how melt water flows along the bed of the ice sheet and how that water influences ice sheet acceleration.
“Our ability to predict and understand what’s going to happen in the future hinges on our ability to have an adequate understanding of the present processes and how those might influence future ice sheet behavior under climate changes,” said Toby Meierbachtol, the study’s lead author and a UM doctoral student.
The team employed a unique strategy for collecting widespread measurements of the conditions at the bottom of the ice sheet.
“Our approach was to be lightweight and nimble so that we could drill lots of boreholes all over the ice sheet,” Harper said.
Using a drill designed by Neil Humphrey, UW professor and co-author on the paper, the team melted 23 boreholes through the ice sheet to measure the pressure and flow conditions of the water under the ice. The drill was easily transported by helicopter and could even be carried short distances by six people.
The other component to their strategy was to carefully select a small and hard-working field team. The group comprised two professors, two doctoral students, two graduate students and two undergraduate students.
According to Harper, the team selection was of critical importance. He needed mentally tough team members who both excelled academically and also could withstand difficult conditions. “Our success relied on finding people who were equally comfortable fiddling with circuit boards or fixing a broken generator, and doing those things in the cold and with a relentless wind blowing,” he said.
Once the data was analyzed, the research team discovered that it didn’t match up with the working hypotheses for water flow beneath the ice sheet. This led the scientists to surmise that there are other critical processes at work that had been missing – one possibility being that as the ice sheet accelerates, the acceleration itself opens up space between the ice and bedrock and expands the drainage network.
“This process is largely neglected in current interpretations,” Meierbachtol said. “We need to pull ourselves away from the narrow vision and start to explore some of the other options for transient growth.”
Future warming likely will be enhanced over the Arctic. This body of research will provide a more accurate assessment of the impacts of future warming on Greenland.
“Much climate research focuses on documenting recent and ongoing changes, or making projections about future change,” Harper said. “We can only go so far with that work before we hit stumbling blocks resulting from our incomplete understanding of how various components of the climate system work. Our research is focused on unraveling the fundamental physical processes controlling how ice sheets move when sitting on flowing melt water. This basic research is essential to increasing confidence in our understanding of climate and sea level change.”
Note : The above story is based on materials provided by The University of Montana
The 160 million-year-old fossil of an extinct rodent-like creature from China is helping to explain how multituberculates — the most evolutionarily successful and long-lived mammalian lineage in the fossil record — achieved their dominance.
This fossil find — the oldest ancestor in the multituberculate family tree — represents a newly discovered species known as Rugosodon eurasiaticus. The nearly complete skeleton provides critical insights into the traits that helped such multituberculates thrive in their day. For example, the fossil reveals teeth that were adapted to gnawing plants and animals alike, as well as ankle joints that were highly adept at rotation.
In light of these findings, researchers suggest that R. eurasiaticus paved the way for later plant-eating and tree-dwelling mammals.
Chong-Xi Yuan from the Chinese Academy of Geological Sciences in Beijing, China, along with Chinese and American colleagues, report their analysis of the fossil in the 16 August issue of Science.
The multituberculates flourished during the Cretaceous period, which ended over 60 million years ago. Much like today’s rodents, they filled an extremely wide variety of niches — below the ground, on the ground and in the trees — and this new fossil, which resembles a small rat or a chipmunk, possessed many of the adaptations that subsequent species came to rely upon, the researchers say.
“The later multituberculates of the Cretaceous [era] and the Paleocene [epoch] are extremely functionally diverse: Some could jump, some could burrow, others could climb trees and many more lived on the ground,” explained Zhe-Xi Luo, a co-author of the Science report. “The tree-climbing multituberculates and the jumping multituberculates had the most interesting ankle bones, capable of ‘hyper-back-rotation’ of the hind feet.”
“What is surprising about this discovery is that these ankle features were already present in Rugosodon — a land-dwelling mammal,” he said. (Such highly mobile ankle joints are normally associated with the foot functions of animals that are exclusively tree-dwellers — those that navigate uneven surfaces.)
Additionally, R. eurasiaticus could eat many different types of food, according to the researchers. The fossil — particularly its dentition, which reveals teeth designed for shearing plant matter — confirms a 2012 analysis of tooth types that suggested multituberculates consumed an animal-dominated diet for much of their existence, later diversifying to a plant-dominated one.
Multituberculates arose in the Jurassic period and went extinct in the Oligocene epoch, occupying a diverse range of habitats for more than 100 million years before they were out-competed by more modern rodents. By the end of their run on the planet, multituberculates had evolved complex teeth that allowed them to enjoy vegetarian diets and unique locomotive skills that enabled them to traverse treetops. Both adaptations helped them to become dominant among their contemporaries.
The fossilized R. eurasiaticus that Yuan and his team unearthed was preserved in lake sediments, suggesting that the creature may have lived on the shores. However, the researchers say that the ankle joints of this early multituberculate were already highly mobile and its teeth were already oriented for an omnivorous diet. Based on their findings, the researchers suggest that such adaptations must have arisen very early in the evolution of the order, setting the stage for the major diversification of rodent-like mammals that ensued.
The discovery of R. eurasiaticus also extends the distribution of certain multituberculates from Europe to Asia during the Late Jurassic period, the researchers say.
“This new fossil from eastern China is very similar to the Late Jurassic fossil teeth of multituberculates from Portugal in western Europe,” explained Dr. Luo. “This suggests that Rugosodon and its closely related multituberculates had a broad paleogeographic distribution and dispersals back and forth across the entire Eurasian continent.”
The report by Yuan et al. was supported by the Ministry of Land Resources and the Ministry of Science and Technology of China; the Scientific Commission of Beijing; the Beijing Museum of Natural History; the National Science Foundation; the Carnegie Museum; and the University of Chicago.
Note : The above story is based on materials provided by American Association for the Advancement of Science, via EurekAlert!, a service of AAAS.
Monitoring slow earthquakes may provide a basis for reliable prediction in areas where slow quakes trigger normal earthquakes, according to Penn State geoscientists.
“We currently don’t have any way to remotely monitor when land faults are about to move,” said Chris Marone, professor of geophysics. “This has the potential to change the game for earthquake monitoring and prediction, because if it is right and you can make the right predictions, it could be big.”
Marone and Bryan Kaproth-Gerecht, recent Ph.D. graduate, looked at the mechanisms behind slow earthquakes and found that 60 seconds before slow stick slip began in their laboratory samples, a precursor signal appeared.
Normal stick slip earthquakes typically move at a rate of three to 33 feet per second, but slow earthquakes, while they still stick and slip for movement, move at rates of about 0.004 inches per second, taking months or more to rupture. However, slow earthquakes often occur near traditional earthquake zones and may precipitate potentially devastating earthquakes.
“Understanding the physics of slow earthquakes and identifying possible precursory changes in fault zone properties are increasingly important goals,” the researchers report online in today’s (Aug. 15) issue of Science Express.
Using serpentine, a common mineral often found in slow earthquake areas, Marone and Kaproth-Gerecht performed laboratory experiments applying shear stress to rock samples so that the samples exhibited slow stick slip movement. The researchers repeated experiments 50 or more times and found that, at least in the laboratory, slow fault zones undergo a transition from a state that supports slow velocity below about 0.0004 inches per second to one that essentially stops movement above that speed.
“We recognize that this is complicated and that velocity depends on the friction,” said Marone. “We don’t know for sure what is happening, but, from our lab experiments, we know that this phenomenon is occurring.”
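The standard framework for this kind of velocity-dependent friction is rate-and-state friction, whose steady-state form is mu_ss = mu0 + (a - b) * ln(V/V0); when a < b, friction weakens as slip speeds up. A quick sketch with illustrative parameters (not the paper’s serpentine measurements):

```python
import math

# Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) ln(V/V0).
# a < b gives velocity weakening, the behavior tied to slow stick-slip.
mu0, V0 = 0.6, 1e-6          # reference friction, reference slip rate (m/s)
a, b = 0.010, 0.014          # illustrative rate-state parameters (a - b < 0)

for V in (1e-7, 1e-6, 1e-5, 1e-4):
    mu = mu0 + (a - b) * math.log(V / V0)
    print(f"V = {V:.0e} m/s -> mu_ss = {mu:.4f}")   # friction falls with V
```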
The researchers think that what makes this unusual pattern of movement is that friction contact strength goes down as velocity goes up, but only for a small velocity range. Once the speed increases enough, the friction contact area becomes saturated. It can’t get any smaller and other physical properties take over, such as thermal effects. This mechanism limits the speed of slow earthquakes. Marone and Kaproth-Gerecht also looked at the primary elastic waves and the secondary shear waves produced by their experiments.
“Here we see elastic waves moving and we know what’s going on with P and S waves and the acoustic speed,” said Marone. “This is important because this is what you can see in the field, what seismographs record.”
Marone notes that there are not currently sufficient measuring devices adjacent to known fault lines to make any type of prediction from the precursor signature of the movement of the elastic waves. It is, however, conceivable that with the proper instrumentation, a better picture of what happens before a fault moves in slip stick motion is possible and perhaps could lead to some type of prediction.
Note : The above story is based on materials provided by The Pennsylvania State University
For more than a century scientists have known that Earth’s ice ages are caused by the wobbling of the planet’s orbit, which changes its orientation to the sun and affects the amount of sunlight reaching higher latitudes, particularly the polar regions.
The Northern Hemisphere’s last ice age ended about 20,000 years ago, and most evidence has indicated that the ice age in the Southern Hemisphere ended about 2,000 years later, suggesting that the south was responding to warming in the north.
But new research published online Aug. 14 in Nature shows that Antarctic warming began at least two, and perhaps four, millennia earlier than previously thought.
Most previous evidence for Antarctic climate change has come from ice cores drilled in East Antarctica, the highest and coldest part of the continent. However, a U.S.-led research team studying a new ice core from West Antarctica found that warming there was well under way 20,000 years ago.
“Sometimes we think of Antarctica as this passive continent waiting for other things to act on it. But here it is showing changes before it ‘knows’ what the north is doing,” said T.J. Fudge, a University of Washington doctoral student in Earth and space sciences and lead corresponding author of the Nature paper.
Co-authors are 41 other members of the West Antarctic Ice Sheet Divide project, which is primarily funded by the National Science Foundation.
The findings come from a detailed examination of an ice core taken from the West Antarctic Ice Sheet Divide, an area where there is little horizontal flow of the ice so the data are known to be from a location that remained consistent over long periods.
The ice core is more than 2 miles deep and covers 68,000 years, though so far data have been analyzed only from layers going back 30,000 years. Near the surface, 1 meter of ice covers one year, but at greater depths the annual layers are compressed to centimeters.
Fudge identified the annual layers by running two electrodes along the ice core to measure higher electrical conductivity associated with each summer season. Evidence of greater warming turned up in layers associated with 18,000 to 22,000 years ago, the beginning of the last deglaciation.
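Layer counting of this kind amounts to peak detection along the conductivity log; a toy sketch on a synthetic profile (not the WAIS Divide data), with layers thinning at depth as in the real core:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic conductivity log: one peak per summer, layers compressing
# with depth, plus noise. Counting peaks approximates counting years.
rng = np.random.default_rng(3)
depth = np.linspace(0, 100, 20_000)          # meters
age = depth + 0.004 * depth**2               # layers thin with depth
signal = np.sin(2 * np.pi * age) + 0.2 * rng.normal(size=depth.size)

peaks, _ = find_peaks(signal, height=0.5, distance=80)
print("annual layers counted:", len(peaks))  # roughly one per synthetic year
```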
“This deglaciation is the last big climate change that we’re able to go back and investigate,” he said. “It teaches us about how our climate system works.”
West Antarctica is separated from East Antarctica by a major mountain range. East Antarctica has a substantially higher elevation and tends to be much colder, though there is recent evidence that it too is warming.
Rapid warming in West Antarctica in recent decades has been documented in previous research by Eric Steig, a UW professor of Earth and space sciences who serves on Fudge’s doctoral committee and whose laboratory produced the oxygen isotope data used in the Nature paper. The new data confirm that West Antarctica’s climate is more strongly influenced by regional conditions in the Southern Ocean than East Antarctica is.
“It’s not surprising that West Antarctica is showing something different from East Antarctica on long time scales, but we didn’t have evidence for that before,” Fudge said.
He noted that the warming in West Antarctica 20,000 years ago is not explained by a change in the sun’s intensity. Instead, how the sun’s energy was distributed over the region was a much bigger factor. It not only warmed the ice sheet but also warmed the Southern Ocean that surrounds Antarctica, particularly during summer months when more sea ice melting could take place.
Changes in Earth’s orbit today are not an important factor in the rapid warming that has been observed recently, he added.
“Earth’s orbit changes on the scale of thousands of years, but carbon dioxide today is changing on the scale of decades so climate change is happening much faster today,” Fudge said.
Note : The above story is based on materials provided by University of Washington. The original article was written by Vince Stricherz.
Study in the journal Physics of Fluids will help scientists understand and predict paths of debris and destruction from natural and man-made disasters
WASHINGTON D.C. August 12, 2013 — Sometime around 37,000 BCE a massive volcano erupted in the Campanian region of Italy, blanketing much of Europe with ash, stunting plant growth and possibly dooming the Neanderthals. While our prehistoric relatives had no way to know the ash cloud was coming, a recent study provides a new tool that could predict what path such volcanic debris will take.
“This paper provides a model for the pattern of the ash cloud if the wind is blowing past an eruption of a given size,” said Peter Baines, a scientist at the University of Melbourne in Australia who did the study. He published his work in the journal Physics of Fluids.
Volcanic eruptions are an example of what Baines calls an “intrusion.” Other examples include exhaust rising from a chimney, sewage flowing into the ocean, and the oil spilling underwater in the 2010 Deepwater Horizon disaster. In all these events, a fluid rises into a density-stratified environment like the atmosphere or the ocean. As the fluid rises, it is pushed by winds or currents, and this crossflow can cause the intruding fluid to disperse far from its origin.
Scientists have previously modeled intrusions into a completely calm environment, but before Baines nobody had ever attempted to introduce the effect of crosswinds, a necessary step toward making such models more realistic and useful.
Baines thinks his work could be used to estimate how much ash is pouring out of a volcano, or how fast oil is gushing from a hole in the sea floor.
Baines is now working with volcanologists in Britain to apply his model to historic eruptions like the Campanian event and the catastrophic Toba supereruption that occurred around 73,000 years ago in Indonesia. The scientists are hoping to use ash deposits from these volcanoes to develop a sharper picture of the amount and speed of the ejected material.
“Most of what we know about prehistoric eruptions is from sedimentary records,” said Baines. “You then have to try to infer what the nature of the eruption was, when this is the only information you’ve got.”
Baines said his model can also help forecast the deposition patterns of future eruptions. And that should give us a big leg up on the poor Neanderthals.
How the Model Works
To understand how intrusions work in the presence of crossflows, Baines developed what he calls a semi-analytical model. He began with fluid dynamics equations, and then used numerical calculations to arrive at approximate solutions for specific combinations of source flow rate, spreading rate and crosswind speed. He found that, under normal wind speeds, the intruding fluid reached a maximum thickness at a certain distance upstream from the source, and thinned in the downstream direction. The distance to the upstream stagnation point depended much more on the rate of source flow than on the crossflow speed.
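The paper's equations are not reproduced here, but a crude mass-balance scaling conveys the idea of an upstream stagnation point. If the intrusion spreads radially with volume flux Q through a layer of thickness h, its outward speed at radius r is Q/(2πrh), and the upstream edge stalls where that speed matches the crossflow U. All numbers below are invented for illustration:

```python
import math

# Crude upstream-stagnation estimate for a spreading intrusion in a
# crossflow. This is NOT Baines's semi-analytical model; it is a
# mass-balance scaling one might use for first intuition.

def stagnation_radius(Q, h, U):
    """Radius (m) upstream of the source where radial spreading
    balances the crossflow: Q / (2*pi*r*h) = U  =>  r = Q / (2*pi*h*U)."""
    return Q / (2 * math.pi * h * U)

# Hypothetical numbers for a large eruption feeding an umbrella cloud:
Q = 1e9    # intruded volume flux (m^3/s) -- assumed, for illustration
h = 500.0  # intrusion thickness (m)      -- assumed
U = 20.0   # crosswind speed (m/s)        -- assumed
print(f"upstream stagnation distance ~ {stagnation_radius(Q, h, U)/1e3:.0f} km")
```

Note that this back-of-envelope estimate scales linearly in both Q and U, whereas Baines's full solution found the upstream distance to be much more sensitive to the source flow rate than to the crossflow speed, so the sketch should be taken only as starting intuition.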
Note : The above story is based on materials provided by American Institute of Physics (AIP).
The Greenland ice sheet is melting from below, driven by high heat flow from the mantle into the lithosphere. This influence varies strongly from place to place and has its origin in an exceptionally thin lithosphere, which permits an increased heat flow from the mantle and creates a complex interplay between geothermal heating and the Greenland ice sheet.
The international research initiative IceGeoHeat led by the GFZ German Research Centre for Geosciences establishes in the current online issue of Nature Geoscience that this effect cannot be neglected when modeling the ice sheet as part of a climate study.
The continental ice sheets play a central role in climate. Interactions and feedbacks between ice and rising temperatures are complex and remain an active research topic. The Greenland ice sheet loses about 227 gigatonnes of ice per year and contributes about 0.7 millimeters per year to the currently observed mean sea-level rise of about 3 millimeters per year.
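Those two figures are straightforward to cross-check: spreading 227 gigatonnes of meltwater per year over the global ocean surface (about 3.61 × 10^14 m², a standard value) yields roughly the quoted sea-level contribution.

```python
# Quick consistency check of the numbers above: converting the ice-mass
# loss into a sea-level equivalent. Ocean area is the only extra input.

GT_PER_YEAR = 227            # Greenland ice loss (Gt/yr), from the text
OCEAN_AREA_M2 = 3.61e14      # global ocean surface area (m^2), standard value
KG_PER_GT = 1e12
RHO_WATER = 1000.0           # kg/m^3 (meltwater added to the ocean)

volume_m3 = GT_PER_YEAR * KG_PER_GT / RHO_WATER   # ~2.27e11 m^3 per year
rise_mm = volume_m3 / OCEAN_AREA_M2 * 1000.0
print(f"sea-level equivalent: {rise_mm:.2f} mm/yr")  # ~0.63 mm/yr
```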
Existing model calculations, however, considered only the ice cap, treating the effect of the lithosphere, i.e. Earth’s crust and upper mantle, too simplistically and as primarily mechanical: the ice presses the crust down due to its weight. GFZ scientists Alexey Petrunin and Irina Rogozhina have now coupled an ice/climate model with a thermo-mechanical model of the Greenland lithosphere.
“We have run the model over a simulated period of three million years, and taken into account measurements from ice cores and independent magnetic and seismic data,” says Petrunin. “Our model calculations are in good agreement with the measurements. Both the thickness of the ice sheet and the temperature at its base are depicted very accurately.”
The model can even explain the difference in temperature measured at two adjacent drill holes: the thickness of the Greenland lithosphere, and thus the geothermal heat flow, varies greatly over short distances.
What does this mean for climate modeling? “The temperature at the base of the ice, and therefore the current dynamics of the Greenland ice sheet, is the result of the interaction between the heat flow from Earth’s interior and the temperature changes associated with glacial cycles,” explains corresponding author Irina Rogozhina (GFZ), who initiated IceGeoHeat. “We found areas where the ice melts at the base next to other areas where the base is extremely cold.”
The current climate is influenced by processes that go far back into the history of Earth: the Greenland lithosphere is 2.8 to 1.7 billion years old and is only about 70 to 80 kilometers thick under Central Greenland. It remains to be explored why it is so exceptionally thin. It turns out, however, that the coupling of models of ice dynamics with thermo-mechanical models of the solid earth allows a more accurate view of the processes that are melting the Greenland ice.
Note : The above story is based on materials provided by Helmholtz Centre Potsdam – GFZ German Research Centre for Geosciences.
The Mississippi River is the chief river of the largest drainage system in North America. Flowing entirely in the United States (though its drainage basin reaches into Canada), it rises in northern Minnesota and meanders slowly southwards for 2,530 miles (4,070 km) to the Mississippi River Delta at the Gulf of Mexico. With its many tributaries, the Mississippi’s watershed drains all or parts of 31 US states and 2 Canadian provinces between the Rocky and Appalachian Mountains. The Mississippi ranks as the fourth longest and tenth largest river in the world. The river either borders or cuts through the states of Minnesota, Wisconsin, Iowa, Illinois, Missouri, Kentucky, Tennessee, Arkansas, Mississippi, and Louisiana.
Native Americans long lived along the Mississippi and its tributaries. Most were hunter-gatherers or herders, but some, such as the Mound builders, formed prolific agricultural societies. The arrival of Europeans in the 1500s changed the native way of life as first explorers, then settlers, ventured into the basin in increasing numbers. The river served first as barrier – forming borders for New Spain, New France, and the early United States – then as vital transportation artery and communications link. In the 19th century, during the height of Manifest Destiny, the Mississippi and several western tributaries, most notably the Missouri, formed pathways for the western expansion of the United States.
Formed from thick layers of this river’s silt deposits, the Mississippi River Valley is one of the most fertile agricultural regions of the country, which resulted in the river’s storied steamboat era. During the American Civil War, the Mississippi’s capture by Union forces marked a turning point towards victory because of the river’s importance as a route of trade and travel, not least to the Confederacy. Because of substantial growth of cities and the larger ships and barges that supplanted riverboats, the decades following the 1900s saw the construction of massive engineering works such as levees, locks and dams, often built in combination.
Since modern development of the basin began, the Mississippi has also seen its share of pollution and environmental problems – most notably large volumes of agricultural runoff, which has led to the Gulf of Mexico dead zone off the Delta. In recent years, the river has shown a steady shift towards the Atchafalaya River channel in the Delta; a course change would prove disastrous to seaports such as New Orleans. A system of dikes and gates has, so far, held the Mississippi at bay but, due to fluvial processes, the shift becomes more likely each year.
The word Mississippi itself comes from Messipi, the French rendering of the Anishinaabe (Ojibwe or Algonquin) name for the river, Misi-ziibi (Great River).
In addition to historical traditions shown by names, there are at least two other measures of a river’s identity, one being the largest branch (by water volume), and the other being the longest branch. Using the largest-branch criterion, the Ohio (not the Middle and Upper Mississippi) would be the main branch of the Lower Mississippi. Using the longest-branch criterion, the Middle Mississippi-Missouri-Jefferson-Beaverhead-Red Rock-Hellroaring Creek River would be the main branch. According to either school of thought, the Upper Mississippi from Lake Itasca, Minnesota to St. Louis, despite its name, would only be a secondary tributary of the final river flowing from Cairo to the Gulf of Mexico.
While the Missouri River, flowing from the confluence of the Jefferson, Madison and Gallatin Rivers to the Mississippi, is the longest continuously named river in the United States,[4] the serially named river known sequentially as Hellroaring Creek, Red Rock, Beaverhead, Jefferson, Missouri, Middle Mississippi, and Lower Mississippi, as one continuous waterway, is the longest river in North America and the third or fourth longest river in the world. Its length of at least 3,745 mi (6,027 km) is exceeded only by the Nile, the Amazon,[6] and perhaps the Yangtze River[7] among the longest rivers in the world. The source of this waterway is at Brower’s Spring, 8,800 feet (2,700 m) above sea level in southwestern Montana, along the Continental Divide outside Yellowstone National Park.
The Mississippi River is widely considered a convenient if approximate dividing line between the Eastern, Southern, and Midwestern United States and the Western U.S., as exemplified by the Gateway Arch in St. Louis and the phrase “Trans-Mississippi”, used for example in the name of the 1898 Trans-Mississippi Exposition held in Omaha, Nebraska.
Physical geography
The geographical setting of the Mississippi River includes considerations of the course of the river itself, its watershed, its outflow, its prehistoric and historic course changes, and possibilities of future course changes. The New Madrid Seismic Zone along the river is also noteworthy. These various basic geographical aspects of the river in turn underlie its human history and present uses of the waterway and its adjacent lands.
Divisions
Upper Mississippi
The Upper Mississippi runs from its headwaters to its confluence with the Missouri River at St. Louis, Missouri. The Upper Mississippi is divided into two sections:
The headwaters, 493 miles (793 km), from the source to Saint Anthony Falls in Minneapolis, Minnesota; and
A navigable channel, formed by a series of man-made lakes between Minneapolis and St. Louis, Missouri, some 664 miles (1,069 km).
The source of the Upper Mississippi branch is traditionally accepted as Lake Itasca, 1,475 feet (450 m) above sea level in Itasca State Park in Clearwater County, Minnesota. The name “Itasca” is a combination of the last four letters of the Latin word for truth (veritas) and the first two letters of the Latin word for head (caput). However, the lake is in turn fed by a number of smaller streams.
From its origin at Lake Itasca to St. Louis, Missouri, the waterway’s flow is moderated by 43 dams. Fourteen of these dams are located above Minneapolis in the headwaters region and serve multiple purposes, including power generation and recreation. The remaining 29 dams, beginning in downtown Minneapolis, all contain locks and were constructed to improve commercial navigation of the upper river. Taken as a whole these 43 dams significantly shape the geography and influence the ecology of the upper river. Beginning just below Saint Paul, Minnesota, and continuing throughout the upper and lower river, the Mississippi is further controlled by thousands of wing dikes that moderate the river’s flow in order to maintain an open navigation channel and prevent the river from eroding its banks.
The head of navigation on the Mississippi is the Coon Rapids Dam in Coon Rapids, Minnesota. Before its construction in 1913, steamboats could occasionally go upstream as far as Saint Cloud, Minnesota, depending on river conditions.
The uppermost lock and dam on the Upper Mississippi River is the Upper St. Anthony Falls Lock and Dam in Minneapolis. Above the dam, the river’s elevation is 799 feet (244 m). Below the dam, the river’s elevation is 750 feet (230 m). This 49-foot (15 m) drop is the largest of all the Mississippi River locks and dams. The origin of the dramatic drop is a waterfall preserved adjacent to the lock under an apron of concrete. Saint Anthony Falls is the only true waterfall on the entire Mississippi River. The water elevation continues to drop steeply as it passes through the gorge carved by the waterfall.
The Upper Mississippi features various natural and artificial lakes, its widest point being Lake Winnibigoshish, near Grand Rapids, Minnesota, which is over 7 miles (11 km) across. Also of note is Lake Onalaska (created by Lock and Dam No. 7), near La Crosse, Wisconsin, over 4 miles (6.4 km) wide. Lake Pepin, by contrast, is natural, formed where the delta of Wisconsin’s Chippewa River partially dams the Upper Mississippi; it is more than 2 miles (3.2 km) wide.
By the time the Upper Mississippi reaches Saint Paul, Minnesota, below Lock and Dam No. 1, it has dropped more than half its original elevation and is 687 feet (209 m) above sea level. From St. Paul to St. Louis, Missouri, the river elevation falls much more slowly, and is controlled and managed as a series of pools created by 26 locks and dams.
The Upper Mississippi River is joined by the Minnesota River at Fort Snelling in the Twin Cities; the St. Croix River near Prescott, Wisconsin; the Cannon River near Red Wing, Minnesota; the Zumbro River at Wabasha, Minnesota; the Black, La Crosse, and Root rivers in La Crosse, Wisconsin; the Wisconsin River at Prairie du Chien, Wisconsin; the Rock River at the Quad Cities; the Iowa River near Wapello, Iowa; the Skunk River south of Burlington, Iowa; and the Des Moines River at Keokuk, Iowa. Other major tributaries of the Upper Mississippi include the Crow River in Minnesota, the Chippewa River in Wisconsin, the Maquoketa River and the Wapsipinicon River in Iowa, and the Big Muddy River and Illinois River in Illinois.
The Upper Mississippi is largely a multi-thread stream with many bars and islands. From its confluence with the St. Croix River downstream to Dubuque, Iowa, the river is entrenched, with high bedrock bluffs lying on either side. The height of these bluffs decreases to the south of Dubuque, though they are still significant through Savanna, Illinois. This topography contrasts strongly with the Lower Mississippi, which is a meandering river in a broad, flat area, only rarely flowing alongside a bluff (as at Vicksburg, Mississippi).
Middle Mississippi
The Mississippi River is known as the Middle Mississippi from the Upper Mississippi River’s confluence with the Missouri River at St. Louis, Missouri, for 190 miles (310 km) to its confluence with the Ohio River at Cairo, Illinois.
The Middle Mississippi is a relatively free-flowing river. From St. Louis to the Ohio River confluence, the Middle Mississippi falls a total of 220 feet (67 m) over a distance of 180 miles (290 km), for an average gradient of 1.2 feet per mile (23 cm/km). At its confluence with the Ohio River, the Middle Mississippi is 315 feet (96 m) above sea level. Apart from the Missouri and Meramec rivers of Missouri and the Kaskaskia River of Illinois, no major tributaries enter the Middle Mississippi River.
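The quoted gradient is easy to verify:

```python
# Quick check of the gradient quoted above for the Middle Mississippi.
drop_ft, run_mi = 220.0, 180.0
ft_per_mi = drop_ft / run_mi                 # ~1.22 ft/mile
cm_per_km = ft_per_mi * 30.48 / 1.609344     # ~23 cm/km
print(f"{ft_per_mi:.1f} ft/mile = {cm_per_km:.0f} cm/km")
```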
Lower Mississippi
The Mississippi River is called the Lower Mississippi River from its confluence with the Ohio River to its mouth at the Gulf of Mexico. Measured by water volume, the Lower Mississippi’s primary branch is the Ohio River. At the confluence of the Ohio and the Middle Mississippi, the Ohio is the bigger river, with its long-term mean discharge at Cairo, Illinois being 281,500 cu ft/s (7,970 m3/s), while the long-term mean discharge of the Mississippi at Thebes, Illinois (just upriver from Cairo) is 208,200 cu ft/s (5,900 m3/s). Thus, by volume, the main branch of the Mississippi River system at Cairo can be considered to be the Ohio River (and the Allegheny River further upstream), rather than the Middle Mississippi.
In addition to the Ohio River, the major tributaries of the Lower Mississippi River are the White River, flowing in at the White River National Wildlife Refuge in east central Arkansas; the Arkansas River, joining the Mississippi at Arkansas Post; the Big Black River in Mississippi; the Yazoo River, meeting the Mississippi at Vicksburg, Mississippi; and the Red River in Louisiana. The widest point of the Mississippi River is in the Lower Mississippi portion where it exceeds 1 mile (1.6 km) in width in several places.
Deliberate water diversion at the Old River Control Structure in Louisiana allows the Atchafalaya River in Louisiana to be a major distributary of the Mississippi River, with 30% of the Mississippi flowing to the Gulf of Mexico by this route, rather than continuing down the Mississippi’s current channel past Baton Rouge and New Orleans on a longer route to the Gulf.
Watershed
The Mississippi River has the world’s fourth largest drainage basin (“watershed” or “catchment”). The basin covers more than 1,245,000 sq mi (3,220,000 km2), including all or parts of 32 U.S. states and two Canadian provinces. The drainage basin empties into the Gulf of Mexico, part of the Atlantic Ocean. The total catchment of the Mississippi River covers nearly 40% of the landmass of the continental United States.
In the United States, the Mississippi River drains the majority of the area between the crest of the Rocky Mountains and the crest of the Appalachian Mountains, except for various regions drained to Hudson Bay by the Red River of the North; to the Atlantic Ocean by the Great Lakes and the Saint Lawrence River; and to the Gulf of Mexico by the Rio Grande, the Alabama and Tombigbee rivers, the Chattahoochee and Apalachicola rivers, and various smaller coastal waterways along the Gulf.
The Mississippi River empties into the Gulf of Mexico about 100 miles (160 km) downstream from New Orleans. Measurements of the length of the Mississippi from Lake Itasca to the Gulf of Mexico vary somewhat, but the United States Geological Survey’s number is 2,340 miles (3,770 km). The retention time from Lake Itasca to the Gulf is typically about 90 days.
Outflow
The Mississippi River discharges at an annual average rate of between 200 and 700 thousand cubic feet per second (7,000–20,000 m3/s). Although it is the 5th largest river in the world by volume, this flow is a mere fraction of the output of the Amazon, which moves nearly 7 million cubic feet per second (200,000 m3/s) during wet seasons. On average, the Mississippi has only 8% the flow of the Amazon River.
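The 8% figure can be sanity-checked with representative long-term averages (the values below are approximate, chosen to match the ranges quoted above):

```python
# Rough check of the "8%" comparison above, using approximate long-term
# average discharges implied by the text.
mississippi_cfs = 600_000    # a mid-range value within the 200,000-700,000 cfs span
amazon_cfs = 7_400_000       # Amazon long-term average (approximate)
print(f"Mississippi/Amazon ~ {mississippi_cfs / amazon_cfs:.0%}")  # ~8%
```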
Fresh river water flowing from the Mississippi into the Gulf of Mexico does not mix into the salt water immediately. Images from NASA’s MODIS instrument show a large plume of fresh water, which appears as a dark ribbon against the lighter-blue surrounding waters. The imagery demonstrates that the plume did not mix with the surrounding sea water immediately. Instead, it stayed intact as it flowed through the Gulf of Mexico, into the Straits of Florida, and entered the Gulf Stream. The Mississippi River water rounded the tip of Florida and traveled up the southeast coast to the latitude of Georgia before finally mixing in so thoroughly with the ocean that it could no longer be detected by MODIS.
Prior to 1900, the Mississippi River transported an estimated 400 million metric tons of sediment per year from the interior of the United States to coastal Louisiana and the Gulf of Mexico. During the last two decades, this number was only 145 million metric tons per year. The reduction in sediment transported down the Mississippi River is the result of engineering modification of the Mississippi, Missouri, and Ohio rivers and their tributaries by dams, meander cutoffs, river-training structures, and bank revetments and soil erosion control programs in the areas drained by them.
Course changes
Over geologic time, the Mississippi River has experienced numerous large and small changes to its main course, as well as additions, deletions, and other changes among its numerous tributaries, and the lower Mississippi River has used different pathways as its main channel to the Gulf of Mexico across the delta region.
Through a natural process known as avulsion or delta switching, the lower Mississippi River has shifted its final course to the Gulf of Mexico every thousand years or so. This occurs because deposits of silt and sediment begin to clog its channel, raising the river’s level and causing it to eventually find a steeper, more direct route to the Gulf of Mexico. The abandoned distributaries diminish in volume and form what are known as bayous. This process has, over the past 5,000 years, caused the coastline of south Louisiana to advance toward the Gulf from 15 to 50 miles (25–80 km). The currently active delta lobe is called the Birdfoot Delta, after its shape, or the Balize Delta, after La Balize, Louisiana, the first French settlement at the mouth of the Mississippi.
Prehistoric courses
The current form of the Mississippi River basin was largely shaped by the Laurentide Ice Sheet of the most recent Ice Age. The southernmost extent of this enormous glaciation extended well into the present-day United States and Mississippi basin. When the ice sheet began to recede, hundreds of feet of rich sediment were deposited, creating the flat and fertile landscape of the Mississippi Valley. During the melt, giant glacial rivers found drainage paths into the Mississippi watershed, creating such features as the Minnesota River, James River, and Milk River valleys. When the ice sheet completely retreated, many of these “temporary” rivers found paths to Hudson Bay or the Arctic Ocean, leaving the Mississippi Basin with many features “oversized” for the existing rivers to have carved in the same time period.
Ice sheets during the Illinoian Stage, about 300,000 to 132,000 years before present, blocked the Mississippi near Rock Island, Illinois, diverting it to its present channel farther to the west, the current western border of Illinois. The Hennepin Canal roughly follows the ancient channel of the Mississippi downstream from Rock Island to Hennepin, Illinois. South of Hennepin, to Alton, Illinois, the current Illinois River follows the ancient channel used by the Mississippi River before the Illinoian Stage.
New Madrid Seismic Zone
The New Madrid Seismic Zone, along the Mississippi River near New Madrid, Missouri, between Memphis and St. Louis, is related to an aulacogen (failed rift) that formed at the same time as the Gulf of Mexico. This area is still quite active seismically. Four great earthquakes in 1811 and 1812, estimated at approximately 8 on the Richter magnitude scale, had tremendous local effects in the then sparsely settled area, and were felt in many other places in the midwestern and eastern U.S. These earthquakes created Reelfoot Lake in Tennessee from the altered landscape near the river, and temporarily reversed the direction of flow of the Mississippi itself.
Note : The above story is based on materials provided by Wikipedia.
Scientists at Curtin University have chronicled the genesis of a particular type of iron deposit in the state’s north, finding that the valuable mineral formed relatively late in Western Australia’s evolution.
The researchers, along with the CSIRO, examined channel iron deposits (CID) taken from the Hamersley Province in the Pilbara to help settle the long-running debate over how the deposits were created.
They used (U–Th)/He dating in combination with other analytical methods to date the CID, an iron-rich fluvial deposit that formed in ancient river channels and is found almost exclusively in Western Australia.
John de Laeter Centre for Isotope Research director Brent McInnes says they wanted to determine when and how channel iron deposits (CID) formed during the geological evolution of Western Australia.
Professor McInnes says mining these iron formations, which supply 40 per cent of the country’s iron ore, is a cornerstone of the Australian economy and makes up approximately 20 per cent of national exports.
The team applied the dating technique to two CID samples which were taken from diamond drill cores and provided to the CSIRO by Robe River Mining from its Mesa J project.
“CID deposits are geologically ‘young’ deposits that formed continuously from about 26 million years ago to as recently as two million years ago,” Prof McInnes says.
“Considering that the bulk of Australia’s mines are located in rocks nearly 1,000 times older, the fact that Australia’s economy depends on a relatively recent geological episode is interesting.”
Tests revealed that environmental conditions conducive to CID formation existed during the Miocene epoch (23–5.3 million years ago) and that a typical Robe River CID sequence took roughly 4–8 million years to accumulate.
“Another interesting finding is that the Pilbara was much wetter 20 million years [ago], and supported substantive tree-lined, slow-moving water courses and drainage networks,” Prof McInnes says.
“The iron-rich sediments accumulated at the bottom of these ancient river channels over tens of millions of years, indicating that the drainage networks were long-lived.”
Curtin researcher and study co-author Associate Professor Noreen Evans developed the (U–Th)/He dating technique used in the study, which measures minute levels of uranium and thorium, elements that undergo radioactive decay at constant rates and produce helium as a by-product.
The scientists then measured concentrations of each element in iron-rich minerals in the iron formations using sensitive mass spectrometry techniques and from that analysis they calculated the time it took for the helium to accumulate via radioactive decay.
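The age calculation they describe rests on the standard helium-ingrowth equation, in which each decay chain contributes a fixed number of alpha particles (helium nuclei). The sketch below uses that textbook equation with invented sample abundances; it is not code from the Curtin study.

```python
import math

# Standard (U-Th)/He age equation (textbook form): helium grows in from
# alpha decay of the uranium and thorium chains,
#   He = 8*U238*(e^(l238*t)-1) + 7*U235*(e^(l235*t)-1) + 6*Th232*(e^(l232*t)-1)
# with all abundances in consistent units (e.g. atoms). We invert it for
# the age t by bisection. Sample values below are invented.

L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11  # decay constants, 1/yr

def helium_at(t, u238, th232):
    u235 = u238 / 137.88            # natural 238U/235U isotopic ratio
    return (8 * u238 * math.expm1(L238 * t)
            + 7 * u235 * math.expm1(L235 * t)
            + 6 * th232 * math.expm1(L232 * t))

def he_age(he, u238, th232, t_lo=0.0, t_hi=5e9):
    """Invert the ingrowth equation for age (years) by bisection."""
    for _ in range(200):
        mid = 0.5 * (t_lo + t_hi)
        if helium_at(mid, u238, th232) < he:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)

# Hypothetical measured abundances (atoms, arbitrary scale):
u238, th232 = 1.0e12, 2.0e12
he = helium_at(10e6, u238, th232)   # synthesize data for a 10 Myr age
print(f"recovered age: {he_age(he, u238, th232)/1e6:.1f} Myr")
```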
They will now apply the technique to try to understand the weathering history of Australia by examining iron-oxide minerals in oxidised lateritic assemblages.
Note : The above story is based on materials provided by Science Network WA.
A new analysis of dinosaur fossils by University of Pennsylvania researchers has revealed that a number of specimens of the genus Psittacosaurus — once believed to represent three different species — are all members of a single species. The differences among the fossil remains that led other scientists to label them as separate species in fact arose from how the animals were buried and compressed, the study found.
“Because of the vagaries of fossilization, no two fossils are the same,” said senior author Peter Dodson, professor of anatomy in Penn’s School of Veterinary Medicine and professor of paleontology in the School of Arts and Sciences’ Department of Earth and Environmental Science. “Animals are alive and they die, but what’s crucial in paleontology is what happens to the animals after they die.”
The research involved a cutting-edge technique, known as three-dimensional geometric morphometrics, which uses lasers to generate data about the shape of different specimens. This is the first time the approach has been used to study dinosaur fossils and could lead to a re-examination of the taxonomic classifications of additional dinosaur species as well as other long-extinct fossil organisms.
Brandon Hedrick, a doctoral student in the Department of Earth and Environmental Science, led the study in collaboration with Dodson. Their research will be reported in the journal PLOS ONE.
The investigation focused on dinosaurs in the genus Psittacosaurus, a word that comes from the Greek for “parrot lizard.” The group was named for the animal’s beaked face, not unlike that of a turtle. Since the genus was first described in 1923, 15 species have been classified as Psittacosaurus, though a recent analysis confirmed only nine of these as definite members of the genus. These animals were small plant-eaters that lived 120 to 125 million years ago. Paleontologists have discovered Psittacosaurus fossils in Mongolia, China and Russia, and possibly in Thailand.
“Meat-eaters are sexy; plant-eaters are not,” Dodson said. “This isn’t a flashy dinosaur. But it has an interesting feature in that it’s one of the most abundant dinosaurs known to science.”
Indeed, many hundreds of Psittacosaurus specimens have been found. This abundance made the genus ideal for Hedrick and Dodson’s comparative study, as it is easier to determine relationships within and between species when there are more individuals to compare.
“For example, if you have a single dachshund and a single beagle, they may appear to be different species until you find 40 dachshund-beagle mixes of various grades to examine,” Hedrick said.
The scientists examined Psittacosaurus skulls discovered in the fossilized ashes of the Lujiatun beds of northeastern China’s Yixian Formation. Paleontologists had previously identified the skulls as belonging to three different species, Psittacosaurus lujiatunensis, P. major or Hongshanosaurus houi.
To compare and contrast the specimens, the researchers used two techniques. First they conducted a traditional study in which they examined every skull that had been classified as one of those three species — a total of 74 specimens — for a variety of characteristics that had been used in prior studies to distinguish the species. The Penn team also compared these fossils to skulls that had been classified as belonging to eight other Psittacosaurus species.
Next they completed a more high-tech analysis of 30 skulls from the three named species. Using a hand-held stylus that captures a point in space relative to a transmitter, they pinpointed 56 “landmarks,” or particular anatomical locations, on each fossil and compared the relative position of those marks between specimens. They also used a hand-held, laser-emitting scanner to make a three-dimensional image of each specimen, similar to a CT scan, from which they also collected landmark data.
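Comparing landmark configurations across specimens typically starts with Procrustes superimposition, which removes differences of position, scale and orientation so that only shape remains. The sketch below uses synthetic landmarks and SciPy's built-in routine; it is not the study's actual pipeline.

```python
import numpy as np
from scipy.spatial import procrustes

# Sketch of the core comparison step in geometric morphometrics:
# Procrustes superimposition removes position, scale and rotation so
# that only shape differences remain. Landmarks here are synthetic (the
# study digitized 56 anatomical landmarks per skull); two "specimens"
# are simulated as the same shape seen under different burial
# distortion (a vertical squash) plus digitizing noise.

rng = np.random.default_rng(1)
true_shape = rng.normal(size=(56, 3))            # 56 landmarks in 3-D

specimen_a = true_shape + 0.01 * rng.normal(size=true_shape.shape)

squash = np.diag([1.0, 1.0, 0.6])                # crushed from the top
specimen_b = true_shape @ squash + 0.01 * rng.normal(size=true_shape.shape)

# procrustes() centers, scales and rotates the second configuration
# onto the first and returns a shape-difference score ("disparity").
_, _, disparity = procrustes(specimen_a, specimen_b)
print(f"shape disparity after superimposition: {disparity:.4f}")
```

Because a burial squash is not a similarity transform, it survives the superimposition and registers as an apparent shape difference, which is precisely how taphonomic distortion can masquerade as a species-level distinction.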
Based on the “old-fashioned” method of examining the physical skulls, the researchers concluded that the three purported species were in fact one. They propose that all three can be considered members of the species P. lujiatunensis.
Results from the geometric morphometric analysis, though not sufficient on their own to classify species, supported this conclusion and suggested that how an animal’s body was crushed as it fossilized (from the top, from the side, or twisted) could lead to inaccurate species determinations.
“Our study found all of these false ‘species’ that are not biological species but are apparent species caused by the process of fossilization,” Dodson said.
The Penn team said their investigation shows the value of traditional taxonomic analysis, while also revealing the potential of a new approach to analyzing fossils.
“Hopefully this will open up the paleontological community to using three-dimensional geometric morphometrics in a variety of ways,” Hedrick said. “This technique has limitless applications to understanding dinosaurs.”
Note : The above story is based on materials provided by University of Pennsylvania, via EurekAlert!, a service of AAAS.