SAN FRANCISCO–An analysis of buildings tagged red and yellow by structural engineers after the August 2014 earthquake in Napa links pre-1950 buildings and the underlying sedimentary basin to the greatest shaking damage, according to one of six reports on the Napa quake published in the March/April issue of Seismological Research Letters (SRL).
“This data should spur people to retrofit older homes,” said John Boatwright, a geophysicist with the U.S. Geological Survey (USGS) in Menlo Park and the lead author of a study that analyzed buildings tagged by the City of Napa.
The South Napa earthquake was the largest earthquake to strike the greater San Francisco Bay Area since the magnitude 6.9 Loma Prieta earthquake in 1989, damaging residential and commercial buildings from Brown’s Valley through historic downtown Napa.
“The larger faults, like the San Andreas and Hayward faults, get the public’s attention, but lesser known faults, like the West Napa fault, can cause extensive damage. Unreinforced brick masonry and stone buildings have been shown to be especially vulnerable to earthquakes,” said Erol Kalkan, a research structural engineer at USGS and guest editor of the SRL special issue, which features six technical reports that cover different aspects of the magnitude 6.0 South Napa earthquake on August 24, 2014.
The South Napa earthquake occurred on the West Napa Fault system, a recognized but poorly studied fault lying between the longer Rodgers Creek and Green Valley faults, and caused strong ground motions, as detailed in the paper by Tom Brocher et al. The mapped surface rupture was unusually large for a moderate quake, extending nearly eight miles from Cuttings Wharf in the south to just west of Alston Park in the north.
An extensive sedimentary basin underlies much of Napa Valley, including the City of Napa. The basin, which may be as much as 2 km deep beneath the city, appears to have amplified the ground motion. A close look at the damaged buildings within the city revealed a clear pattern.
“Usually I look to certain factors that influence ground motion at a specific site – proximity to the fault rupture, directivity of the rupture process and the geology underneath the site,” said Boatwright. “The source distance and the direction of rupture did not strongly condition the shaking damage in Napa.”
Boatwright et al. analyzed data provided by structural engineers who inspected and tagged damaged buildings after the earthquake. The 165 red tags (prohibited access) and 1,707 yellow tags (restricted access) stretched across the city but were primarily concentrated within the residential section that lies between State Route 29 and the Napa River, including the historic downtown area.
Comparing the distribution of red- and yellow-tagged buildings to the underlying sedimentary basin, to the pre-1950 development of Napa, and to the recent alluvial geology of Napa Valley shows that the most severe damage correlates with building age (pre-1950 construction) and location within the basin. Less damaged areas to the east and west of central Napa lie outside the sedimentary basin, and the moderately damaged neighborhoods to the north lie inside the basin but are of more recent construction.
Although the city’s buildings suffered extensive damage, there were few reports of ground failure, such as liquefaction and landslides. Brocher et al. suggest the timing of the earthquake near the end of the dry season, the three-year long drought and resulting low water table inhibited the liquefaction of the top layers of sandy deposits, sparing the area greater damage.
In their open-access paper for Geology, Kimberly Genareau and colleagues propose, for the first time, a mechanism for the generation of glass spherules in geologic deposits through the occurrence of volcanic lightning. The existence of fulgurites — glassy products formed in rocks and sediments struck by cloud-to-ground lightning — provides direct evidence that geologic materials can be melted by natural lightning.
Lightning-induced volcanic spherules (LIVS) form in the atmosphere from the physical transformation of volcanic ash particles into spheres of glass due to the high heat generated by lightning discharge. Examples of these textures were discovered in deposits from two volcanic eruptions where lightning was extensively documented: The 23 March 2009 eruption of Mount Redoubt, Alaska, USA, and the April-May 2010 eruption of Eyjafjallajökull, Iceland.
In some cases, the individual spherules are smooth, while in other instances the surfaces are interrupted by holes or cracks that appear to result from outward expansion of the spherule interior. Analogue laboratory experiments, examining the flashover mechanism across high voltage insulators contaminated by volcanic ash, confirm that glass spherules can be formed from the high heat generated by electrical discharge.
Reference:
Lightning-induced volcanic spherules
Kimberly Genareau et al., University of Alabama, Tuscaloosa, Alabama, USA. Published online ahead of print on 27 Feb. 2015; http://dx.doi.org/10.1130/G36255.1. This article is OPEN ACCESS online.
Villarrica volcano in southern Chile began erupting early Tuesday, forcing the evacuation of some 3,000 people in nearby villages, the government said.
Recreating the violent conditions of Earth’s formation, scientists are learning more about how iron vaporizes and how this iron rain affected the formation of Earth and Moon. The study is published March 2 in Nature Geoscience.
“We care about when iron vaporizes because it is critical to learning how Earth’s core grew,” said co-author Sarah Stewart, UC Davis professor of Earth and Planetary Sciences.
Scientists from Lawrence Livermore National Laboratory, Sandia National Laboratories, Harvard University and UC Davis used one of the world’s most powerful radiation sources, the Sandia National Laboratories Z-machine, to recreate conditions that led to Earth’s formation. They subjected iron samples to high shock pressures in the machine, slamming aluminum plates into iron samples at extremely high speeds. They developed a new shock-wave technique to determine the critical impact conditions needed to vaporize the iron.
The researchers found that the shock pressure required to vaporize iron is much lower than expected, which means more iron was vaporized during Earth’s formation than previously thought.
Iron rain
Lead author Richard Kraus, formerly a graduate student under Stewart at Harvard, is now a research scientist at Lawrence Livermore National Laboratory. He said the results may shift how planetary scientists think about the processes and timing of Earth’s core formation.
“Rather than the iron in the colliding objects sinking down directly to the Earth’s growing core, the iron is vaporized and spread over the surface within a vapor plume,” said Kraus. “This means that the iron can mix much more easily with Earth’s mantle.”
After cooling, the vapor would have condensed into an iron rain that mixed into Earth’s still-molten mantle.
To the moon
This process may also explain why the Moon, which is thought to have formed by this time, lacks iron-rich material despite being exposed to similarly violent collisions. The authors suggest the Moon’s reduced gravity could have prevented it from retaining most of the vaporized iron.
Reference:
Richard G. Kraus, Seth Root, Raymond W. Lemke, Sarah T. Stewart, Stein B. Jacobsen, Thomas R. Mattsson. Impact vaporization of planetesimal cores in the late stages of planet formation. Nature Geoscience, 2015; DOI: 10.1038/ngeo2369
Step away from the villages and idyllic beaches of Hawaii, and you may think you’ve been transported to the moon. Walking along the lava flows of the Kilauea volcano, the landscape changes from a lush tropical paradise to one that’s bleak and desolate, the ground gray and rippled with hardened lava.
That’s how Christelle Wauthier, assistant professor in the Department of Geosciences and the Institute for CyberScience at Penn State, describes it, anyway.
Wauthier has been studying Kilauea volcano for several years and is getting ready to start a new project at Penn State—one using a radar imaging technique that researchers call interferometric synthetic aperture radar (InSAR) to try to peer below its surface and learn more about why the volcano is so volatile.
Kilauea is the most active of the five volcanoes that make up the island of Hawaii. It’s been erupting continuously since 1983, so far spewing 3.5 cubic kilometers of lava onto the surrounding landscape. The lava usually flows southward, but last year an eruption started creeping east toward the nearby village of Pahoa.
The flow was inconsistent—advancing anywhere from 10 yards to one-quarter mile a day—but it was enough to cause evacuations and lots of anxiety for the residents of the small village.
Wauthier says the volcano’s recent brush with the island’s inhabitants reinforced the importance of studying not just what’s happening on the surface of the volcano, but also what’s going on below.
“The volcano has been erupting for 31 years, so obviously there’s a lot of magma coming from below,” said Wauthier. “There’s lots of magma moving up and out, so one of the questions we’re asking is where are all these magma sources and how do they relate to each other?”
One of the keys to answering this question is found in the deformations happening on the surface of Kilauea. While a deformation is simply a change on the volcano’s exterior, what it implies goes much deeper—there has to be something below the surface causing the change. And without X-ray glasses to diagnose what’s happening, Wauthier uses InSAR to try to piece together what might be going on.
“InSAR is a remote-sensing technique that combines radar data taken from satellites to create images that show subtle movements in the ground’s surface,” said Wauthier. “In this case, the movements we’re studying are deformations on Kilauea.”
To begin the process, Wauthier gathers satellite data from archived databases. She looks for information about changes in elevation from before and after a “natural hazard event”—an eruption or earthquake, for example. Wauthier then uses this data to create two images: one from before the natural hazard event and one from after. This shows how the event changed the ground’s surface.
The two pictures can then be combined to create a single, much more comprehensive InSAR image called an interferogram, which uses color to represent movement.
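As a rough illustration of that combination step (a minimal sketch, not the project’s actual processing software), the wrapped phase difference between two hypothetical co-registered complex SAR acquisitions can be formed with a few lines of NumPy:

```python
# A minimal sketch (not the project's actual processing chain) of how two
# co-registered complex SAR acquisitions are combined into an interferogram.
# `slc_before` and `slc_after` are hypothetical 2-D single-look complex arrays.
import numpy as np

WAVELENGTH_M = 0.056  # C-band radar wavelength (assumed for the example)

def interferogram(slc_before, slc_after, wavelength_m=WAVELENGTH_M):
    """Wrapped phase difference and the line-of-sight motion it implies."""
    # The colored fringes of an interferogram are this wrapped phase.
    wrapped_phase = np.angle(slc_after * np.conj(slc_before))   # radians in (-pi, pi]
    # Each full 2*pi cycle corresponds to half a wavelength of motion along
    # the line of sight (~2.8 cm for C-band); the sign depends on convention.
    los_motion = -wavelength_m / (4 * np.pi) * wrapped_phase    # meters, still wrapped
    return wrapped_phase, los_motion

# Synthetic example: a smooth 10-cm bulge of uplift between the two passes.
y, x = np.mgrid[0:256, 0:256]
uplift = 0.10 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 2000.0)
slc_before = np.ones((256, 256), dtype=complex)
slc_after = np.exp(-1j * 4 * np.pi / WAVELENGTH_M * uplift)
phase, los = interferogram(slc_before, slc_after)
# Real workflows additionally unwrap the phase and remove orbital and
# topographic contributions before interpreting the motion.
```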
Wauthier says that while an interferogram can certainly be created from just two images, she also uses a time-series approach called Multi-Temporal (MT)-InSAR when enough radar images are available. This technique combines many images rather than a single pair.
“This approach is much more accurate, but it also requires much more data and computing power,” Wauthier said. “The powerful computer clusters and IT facilities available through the Institute for CyberScience here at Penn State are tremendously helpful by providing the necessary computing power and efficiency.”
After Wauthier creates the InSAR images, she can begin to use them to predict what might be happening underneath Kilauea. She uses an approach called inverse modeling to estimate what caused the deformation.
“Basically, we use what’s happening on the surface of the volcano to find a ‘best fit model’ for what’s happening underground,” said Wauthier. “For example, if we know the ground rose here but sank over there, we’ll come up with a best guess for the type of magma process—like a magma reservoir or intrusion—that’s below.”
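One way to picture this “best fit model” step: the sketch below fits a textbook point-pressure (Mogi-type) source to synthetic uplift measurements by least squares, recovering the source depth and volume change. It is only an illustration under standard elastic half-space assumptions; the source geometry, the numbers and the fitting code are not from the Penn State project.

```python
# Toy illustration of inverse modeling: fit a simple point-pressure
# (Mogi-type) source to synthetic vertical displacements and recover its
# depth and volume change.  A sketch under textbook assumptions only.
import numpy as np
from scipy.optimize import least_squares

NU = 0.25  # Poisson's ratio of the elastic half-space

def mogi_uz(r_m, depth_m, dvol_m3):
    """Vertical surface displacement of a Mogi point source (one common form)."""
    return (1 - NU) * dvol_m3 * depth_m / (np.pi * (r_m**2 + depth_m**2) ** 1.5)

# Hypothetical observations: uplift measured at several radial distances.
r_obs = np.linspace(0, 8000, 40)                        # meters from the source axis
uz_true = mogi_uz(r_obs, depth_m=3000.0, dvol_m3=2e6)   # "truth" used to fake the data
uz_obs = uz_true + np.random.default_rng(0).normal(0, 0.002, r_obs.size)  # + 2 mm noise

def residuals(params):
    depth_m, dvol_m3 = params
    return mogi_uz(r_obs, depth_m, dvol_m3) - uz_obs

fit = least_squares(residuals, x0=[1000.0, 1e6], bounds=([100.0, 0.0], [2e4, 1e8]))
best_depth, best_dvol = fit.x   # the "best fit model" for the source at depth
```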
But magma processes aren’t the only things that could be affecting Kilauea’s volatility. The southern flank of the volcano is moving away from the island, and Wauthier says this could also be influencing the volcano’s magma plumbing system and activity.
Wauthier says that although the flank is slipping seaward at an average speed of 6 to 10 centimeters a year, earthquakes in the past have caused more drastic movement and have even generated tsunamis.
Remote-sensing technologies like InSAR are important because they allow researchers like Wauthier to do important research without physically being on location. (Although when you’re studying the Hawaiian landscape, you might want to be.)
Wauthier says she would like to return to Hawaii one day, but in the meantime, she hopes the project will help uncover information that could help the people of Hawaii as well as other scientists at the U.S. Geological Survey’s Hawaiian Volcano Observatory. Having a better understanding of Kilauea would help researchers better grasp the behavior of other ocean island volcanoes.
“Ideally, we’d like to get a much better picture of the underground magma systems and how they interact with the flank slip,” she said. “The flank instabilities can cause earthquakes and tsunamis, so we’d like to be able to understand and forecast those better. Hopefully, the more we know about these natural hazards, the more we can help people anticipate and mitigate their risks.”
A type of vertebrate trace fossil gaining recognition in the field of paleontology is that made by various tetrapods (four-footed land-living vertebrates) as they traveled through water under buoyant or semibuoyant conditions.
Called fossil “swim tracks,” they occur in high numbers in deposits from the Early Triassic, the Triassic being a geologic period (250 to 200 million years ago) that lies between the Permian and Jurassic. Major extinction events mark the start and end of the Triassic.
While it is known that tetrapods made the tracks, what is less clear is just why the tracks are so abundant and well preserved.
Paleontologists at the University of California, Riverside have now determined that a unique combination of factors in Early Triassic delta systems resulted in the production and unusually widespread preservation of the swim tracks: delayed ecologic recovery, depositional environments, and tetrapod swimming behavior.
“Given their great abundance in Lower Triassic strata, swim tracks have the potential to provide a wealth of information regarding environmental exploitation by reptiles during this critical time in their evolution following the end-Permian mass extinction,” said Mary L. Droser, a professor of paleontology in the Department of Earth Sciences, who led the research. “They also provide important data for our interpretation of Early Triassic sedimentological and stratigraphic processes. The Early Triassic period follows the largest mass extinction event in Earth’s history. The fossil record shows that a prolonged period of delayed ecologic recovery persisted throughout the Early Triassic.”
She explained that the fossil swim tracks are important and unique records of the aquatic behaviors and locomotion mechanics of tetrapods, and reveal a hidden biodiversity. They also constitute an excellent natural laboratory for investigating the paleoenvironmental and paleoecological conditions associated with their production and preservation.
Droser and Tracy J. Thomson, her former graduate student, surveyed the temporal distribution of the swim tracks seen in fossils in Utah, and report online this month, ahead of print, in the journal Geology that it is not the tetrapod swimming behavior alone, but the prevalence of unbioturbated substrates resulting from the unique combination of ecological and environmental conditions during the Early Triassic that led to the abundant production and preservation of swim tracks.
They identify three interacting factors that composed a “Goldilocks” effect in promoting the production and preservation of Lower Triassic swim tracks. These factors were (1) ecological, i.e., delayed ecologic recovery resulting in the lack of well-mixed sediment, (2) paleoenvironmental, i.e., depositional environments that promoted the production of firmground substrates, and (3) behavioral, i.e., the presence of tetrapods capable of aquatic locomotion such as swimming or bottom walking.
“During the Early Triassic, sediment mixing by animals living within the substrate was minimal,” said Thomson, the first author of the research paper who is now pursuing a doctoral degree at UC Davis. “This strongly contributed to the widespread production of firm-ground substrates that are ideal for recording and preserving trace fossils like swim tracks.”
Thomson explained that the end-Permian mass extinction event resulted in ecologic restructuring of both the marine and terrestrial realms. Bioturbation was suppressed, resulting in no extensively mixed sediment layer, thereby allowing fine-grained, low-water-content firmgrounds to develop near the sediment-water interface.
“Early Triassic deltas and their paleoenvironments were favorable habitats for functionally amphibious reptiles,” Droser said. “There were few animals living in the sediment mixing it up after the extinction, and so the muds became firm and cohesive providing ideal conditions for preservation. Periodic flooding supplied coarser grained material, enhancing swim track preservation.”
Reference:
T. J. Thomson, M. L. Droser. Swimming reptiles make their mark in the Early Triassic: Delayed ecologic recovery increased the preservation potential of vertebrate swim tracks. Geology, 2015; 43 (3): 215 DOI: 10.1130/G36332.1
New landslide maps have been developed that will help the Oregon Department of Transportation determine which coastal roads and bridges in Oregon are most likely to be usable following the major subduction zone earthquake expected to strike the Pacific Northwest in the future.
The maps were created by Oregon State University and the Oregon Department of Geology and Mineral Industries, or DOGAMI, as part of a research project for ODOT. They outline the landslide risks following a large earthquake on the Cascadia Subduction Zone.
The mapping is part of ongoing ODOT efforts to preserve the critical transportation routes that will facilitate response and recovery.
“Landslides are a natural part of both the Oregon Coast Range and Cascade Range, but it’s expected there will be a significant number of them that are seismically induced from a major earthquake,” said Michael Olsen, an assistant professor in the OSU School of Civil and Construction Engineering. “A massive earthquake can put extraordinary additional strain on unstable slopes that already are prone to landslides.”
Landslides are already a serious geologic hazard for western Oregon. But during an earthquake, lateral ground forces can be as high as half the force of gravity.
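To see why a lateral load of half of gravity matters, consider a back-of-envelope pseudo-static check on a dry, cohesionless slope using the textbook infinite-slope formula. The slope and friction angles below are illustrative, and this is not the method behind the new maps.

```python
# Back-of-envelope illustration (not the study's method): pseudo-static
# factor of safety for a dry, cohesionless infinite slope, with the
# horizontal earthquake load expressed as a fraction k of gravity.
import math

def factor_of_safety(slope_deg: float, friction_deg: float, k: float) -> float:
    b = math.radians(slope_deg)       # slope angle
    phi = math.radians(friction_deg)  # soil friction angle
    # The driving force grows by k*cos(b); the normal force (and hence
    # frictional resistance) shrinks by k*sin(b).
    return math.tan(phi) * (math.cos(b) - k * math.sin(b)) / (math.sin(b) + k * math.cos(b))

print(factor_of_safety(25, 35, 0.0))  # ~1.50: stable under static conditions
print(factor_of_safety(25, 35, 0.5))  # ~0.56: fails once lateral load reaches half of gravity
```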
The Coast Range is of special concern, officials say, because it will be the closest part of the state to the actual subduction zone earthquake, and will experience the greatest shaking and ground movement. The research identified some of the most vulnerable landslide areas in Oregon as parts of the Coast Range between Tillamook and Astoria, and from Cape Blanco south to the California border – in each case, from the coast to about 30 miles inland.
“Major landslides have been identified by DOGAMI throughout western Oregon using high-resolution lidar mapping,” Olsen said. “Some experts believe that a number of these landslides date back to the last subduction zone earthquake in Oregon, in 1700. Coast Range slopes that are filled with weak layers of sedimentary rock are particularly vulnerable, and many areas are already on the verge of failure.”
According to the new map, the highway corridors to the coast that will face comparatively less risk from landslides will be Oregon Highway 36 from near Eugene to Florence; Oregon Highway 38 from near Cottage Grove to Reedsport; Oregon Highway 18 from Salem to Lincoln City; and large portions of U.S. Highway 30 from Portland to Astoria. However, landslides or other damage could occur on any road to the coast or in the Cascade Range due to the anticipated high levels of ground shaking.
The new research, along with other considerations, will help ODOT and other officials determine which areas merit the most investment in coming years as part of long-term planning for the expected earthquake. Given the high potential for damage and minimal resources available for mitigation, experts may choose to focus their efforts on highway corridors that are expected to receive less damage from the earthquake, Olsen said.
The research reflected in the new map considered such factors as slope, direction of ground movement, soil type, vegetation, distance to rivers, roads and fault locations, peak ground acceleration, peak ground velocity, annual precipitation averages, and other factors.
ODOT, Oregon State and DOGAMI have been state leaders in research on risks posed by the Cascadia Subduction Zone, earthquake and tsunami impacts, and initiatives to help the state prepare for a future disaster that scientists say is a certainty.
Officials said it’s important to consider not just the damage to structures that can occur as a result of an earthquake, but also landslide and transportation issues.
“ODOT recognizes the potential not only for casualties due to landslides during and after an earthquake, but also for the likelihood of isolating whole segments of the state’s population,” one ODOT official said. “Thousands of people in the coastal communities would be stranded and cut off from rescue, relief and recovery that would arrive by surface transport.”
ODOT recently completed a seismic vulnerability assessment and selected lifeline corridor routes to prioritize following an earthquake. ODOT also maintains an unstable slopes program, evaluating the frequency of rockfalls and landslides affecting highway corridors.
DOGAMI recently released another open file report as part of the Oregon Resilience Plan, which evaluated multiple potential hazards resulting from a Cascadia subduction zone earthquake, including landslides, liquefaction, and tsunamis.
Some recent efforts at OSU have also focused on understanding the different concerns raised by a subduction zone earthquake compared to the type of strike-slip faults more common in California, on which many seismic plans are based. Subduction earthquakes tend to be larger, affect a wider area and last longer.
Reference:
DOGAMI Open-File Report O-15-01, “Landslide Susceptibility Analysis of Lifeline Routes in the Oregon Coast Range,” by Rubini Mahalingam; Michael J. Olsen; Mahyar Sharifi-Mood; and Daniel T. Gillins, Oregon State University School of Civil and Construction Engineering.
ODOT Research Report SPR-740, “Impacts of Potential Seismic Landslides on Lifeline Corridors,” by Michael J. Olsen; Scott A. Ashford; Rubini Mahalingam; Mahyar Sharifi-Mood; Matt O’Banion and Daniel T. Gillins, Oregon State University School of Civil and Construction Engineering. Download the report: http://1.usa.gov/18352DF
Colima Volcano, one of Mexico’s most active, is at it again. The Operational Land Imager (OLI) on Landsat 8 captured this natural-color view of an ash plume from Colima on February 8, 2015.
Prior to this image, on February 5, ash was reported to have reached an altitude of 7.9 kilometers (26,000 feet). The occurrence of lava-block avalanches decreased by late February, but residents were still warned to remain at least 5 kilometers away from the volcano.
Video that includes footage of the eruption from February 4-9, 2015, can be viewed here.
References and Related Reading
Global Volcanism Program, Smithsonian Institution (2015) Colima. Accessed February 23, 2015.
The Telegraph (2015, February 12) Watch: Moment Colima volcano erupts in Mexico. Accessed February 23, 2015.
Thirteen million years ago, as many as seven different species of crocodiles hunted in the swampy waters of what is now northeastern Peru, new research shows. This hyperdiverse assemblage, revealed through more than a decade of work in Amazon bone beds, contains the largest number of crocodile species co-existing in one place at any time in Earth’s history, likely due to an abundant food source that forms only a small part of modern crocodile diets: mollusks like clams and snails. The work, published today in the journal Proceedings of the Royal Society B, helps fill in gaps in understanding the history of the Amazon’s remarkably rich biodiversity.
“The modern Amazon River basin contains the world’s richest biota, but the origins of this extraordinary diversity are really poorly understood,” said John Flynn, Frick Curator of Fossil Mammals at the American Museum of Natural History and an author on the paper. “Because it’s a vast rain forest today, our exposure to rocks–and therefore, also to the fossils those rocks may preserve–is extremely limited. So anytime you get a special window like these fossilized “mega-wetland” deposits, with so many new and peculiar species, it can provide novel insights into ancient ecosystems. And what we’ve found isn’t necessarily what you would expect.”
Before the Amazon basin had its river, which formed about 10.5 million years ago, it contained a massive wetland system, filled with lakes, embayments, swamps, and rivers that drained northward toward the Caribbean, instead of today’s pattern of eastward river flow to the Atlantic Ocean. Knowing the kind of life that existed at that time is crucial to understanding the history and origins of modern Amazonian biodiversity. But although invertebrates like mollusks and crustaceans are abundant in Amazonian fossil deposits, evidence of vertebrates other than fish have been very rare.
Since 2002, Flynn has been co-leading prospecting and excavating expeditions with colleagues at fossil outcrops of the Pebas Formation in northeastern Peru. These outcrops have preserved life from the Miocene, including the seven species of crocodiles discussed in Proceedings B. Three of the species are entirely new to science, the strangest of which is Gnatusuchus pebasensis, a short-faced caiman with globular teeth that is thought to have used its snout to “shovel” mud bottoms, digging for clams and other mollusks. The new work suggests that the rise of Gnatusuchus and other “durophagous,” or shell-crunching, crocodiles is correlated with a peak in mollusk diversity and numbers, which disappeared when the mega-wetlands transformed into the modern Amazon River drainage system.
“When we analyzed Gnatusuchus bones and realized that it was probably a head-burrowing and shoveling caiman preying on mollusks living in muddy river and swamp bottoms, we knew it was a milestone for understanding proto-Amazonian wetland feeding dynamics,” said Rodolfo Salas-Gismondi, lead author of the paper and a graduate student at the University of Montpellier, in France, as well as researcher and chief of the paleontology department at the National University of San Marcos’ Museum of Natural History in Lima, Peru.
Besides the blunt-snouted crocodiles like Gnatusuchus, the researchers also recovered the first unambiguous fossil representative of the living smooth-fronted caiman Paleosuchus, which has a longer and higher snout shape suitable for catching a variety of prey, like fish and other active swimming vertebrates.
“We uncovered this special moment in time when the ancient mega-wetland ecosystem reached its peak in size and complexity, just before its demise and the start of the modern Amazon River system,” Salas-Gismondi said. “At this moment, most known caiman groups co-existed: ancient lineages bearing unusual blunt snouts and globular teeth along with those more generalized feeders representing the beginning of what was to come.”
The new research suggests that with the inception of the Amazon River System, mollusk populations declined and durophagous crocodile species went extinct as caimans with a broader palate diversified into the generalist feeders that dominate modern Amazonian ecosystems. Today, six species of caimans live in the whole Amazon basin, although only three ever co-exist in the same area and they rarely share the same habitats. This is in large contrast to their ancient relatives, the seven diverse species that lived together in the same place and time.
Reference:
Rodolfo Salas-Gismondi, John J. Flynn, Patrice Baby, Julia V. Tejada-Lara, Frank P. Wesselingh, Pierre-Olivier Antoine. A Miocene hyperdiverse crocodylian community reveals peculiar trophic dynamics in proto-Amazonian mega-wetlands. Proc. R. Soc. B, 2015 DOI: 10.1098/rspb.2014.2490
Have you ever wondered exactly when a certain group of plants or animals first evolved? This week a groundbreaking new resource for scientists will go live, and it is designed to help answer just those kinds of questions. The Fossil Calibration Database, a free, open-access resource that stores carefully vetted fossil data, is the result of years of work by a worldwide team led by Dr. Daniel Ksepka, Curator of Science at the Bruce Museum in Greenwich, and Dr. James Parham, Curator at the John D. Cooper Archaeological and Paleontological Center in Orange County, California, funded through the National Evolutionary Synthesis Center (NESCent).
“Fossils provide the critical age data we need to unlock the timing of major evolutionary events,” says Dr. Ksepka. “This new resource will provide the crucial fossil data needed to calibrate ‘molecular clocks’ which can reveal the ages of plant and animal groups that lack good fossil records. When did groups like songbirds, flowering plants, or sea turtles evolve? What natural events were occurring that may have had an impact? Precisely tuning the molecular clock with fossils is the best way we have to tell evolutionary time.”
More than twenty paleontologists, molecular biologists, and computer programmers from five different countries contributed to the design and implementation of this new database. The Fossil Calibration Database webpage launches on Tuesday, February 24, and a series of five peer-reviewed papers and an editorial on the topic will appear in the scientific journal Palaeontologia Electronica, describing the endeavor. Dr. Ksepka is the author of one of the papers and co-author of the editorial.
“This exciting field of study, known as ‘divergence dating,’ is important for understanding the origin and evolution of biodiversity, but has been hindered by the improper use of data from the fossil record,” says Dr. Parham. “The Fossil Calibration Database addresses this issue by providing molecular biologists with paleontologist-approved data for organisms across the Tree of Life.”
The Tree of Life? “Think of it as a family tree of all species,” explains Dr. Ksepka.
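As a rough illustration of the calibration idea (all numbers invented for the example, not taken from the database): a fossil pins a minimum age on one branching point of that tree, which sets a substitution rate, which in turn can date branches that lack a fossil record.

```python
# Toy illustration of fossil-calibrated "molecular clock" dating.
# All numbers are made up for the example; real calibrations in the
# database carry vetted minimum/maximum ages and uncertainties.

# Calibration: a fossil shows lineages A and B had already split 66 Myr ago,
# and their sequences differ by 0.132 substitutions per site.
fossil_age_myr = 66.0
genetic_distance_ab = 0.132          # substitutions/site between A and B

# Divergence accumulates along both branches, so the per-lineage rate is:
rate = genetic_distance_ab / (2 * fossil_age_myr)   # subs/site per Myr

# Question: when did lineages C and D (no useful fossils) diverge,
# given that their sequences differ by 0.090 substitutions per site?
genetic_distance_cd = 0.090
estimated_age_cd = genetic_distance_cd / (2 * rate)
print(f"Estimated C-D divergence: {estimated_age_cd:.0f} Myr ago")  # 45 Myr
```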
Note: The above story is based on materials provided by Bruce Museum.
Tropical turtle fossils discovered in Wyoming by University of Florida scientists reveal that when Earth got warmer, prehistoric turtles headed north. But if today’s turtles try the same technique to cope with warming habitats, they might run into trouble.
While the fossil turtle and its kin could move northward with higher temperatures, human pressures and habitat loss could prevent a modern-day migration, leading to the extinction of some modern species.
The newly discovered genus and species, Gomphochelys nanus (pronounced gom-fo-keel-eez), provides a clue to how animals might respond to future climate change, said Jason Bourque, a paleontologist at the Florida Museum of Natural History at UF and the lead author of the study, which appears online this week in the Journal of Vertebrate Paleontology.
The wayfaring turtle was among the species that researchers believe migrated 500-600 miles north 56 million years ago, during a temperature peak known as the Paleocene-Eocene Thermal Maximum. Lasting about 200,000 years, the temperature peak resulted in significant movement and diversification of plants and animals.
“We knew that some plants and lizards migrated north when the climate warmed, but this is the first evidence that turtles did the same,” Bourque said. “If global warming continues on its current track, some turtles could once again migrate northward, while others would need to adapt to warmer temperatures or go extinct.”
The new turtle is an ancestor of the endangered Central American river turtle and other warm-adapted turtles in Belize, Guatemala and southern Mexico. These modern turtles, however, could face significant roadblocks on a journey north, since much of the natural habitat of these species is in jeopardy, said co-author Jonathan Bloch, a Florida Museum curator of vertebrate paleontology.
“If you look at the waterways that turtles would have to use to get from one place to another, it might not be as easy as it once was,” Bloch said. “Even if the natural response of turtles is to disperse northward, they have fewer places to go and fewer routes available.”
To put the new turtle in evolutionary context, the researchers examined hundreds of specimens from museum collections around the country, including turtles collected during the 1800s housed at the Smithsonian Institution. Co-author Patricia Holroyd, a vertebrate paleontologist at the University of California, Berkeley, said the fossil history of the modern relatives of the new species shows they could be much more wide-ranging, if it were not for their restricted habitats.
The Central American river turtle is one of the most endangered turtles in the world, threatened by habitat loss and its exploitation as a human food source, Holroyd said. “This is an example of a turtle that could expand its range and probably would with additional warming, but — and that’s a big but — that’s only going to happen if there are still habitats for it,” she said.
References:
Jason R. Bourque, Blaine W. Schubert. Fossil musk turtles (Kinosternidae, Sternotherus) from the late Miocene–early Pliocene (Hemphillian) of Tennessee and Florida. Journal of Vertebrate Paleontology, 2015; 35 (1): e885441 DOI: 10.1080/02724634.2014.885441
Jason R. Bourque, J. Howard Hutchison, Patricia A. Holroyd, Jonathan I. Bloch. A new dermatemydid (Testudines, Kinosternoidea) from the Paleocene-Eocene Thermal Maximum, Willwood Formation, southeastern Bighorn Basin, Wyoming. Journal of Vertebrate Paleontology, 2015; e905481 DOI: 10.1080/02724634.2014.905481
Note: The above story is based on materials provided by University of Florida. The original article was written by Stephenie Livingston.
A French-Kenyan research team has just described a new fossil ancestor of today’s hippo family. This discovery bridges a gap in the fossil record separating these animals from their closest modern-day cousins, the cetaceans. It also shows that some 35 million years ago, the ancestors of hippos were among the first large mammals to colonize the African continent, long before those of any of the large carnivores, giraffes or bovines. This work, co-authored by researchers of the Institut des sciences de l’évolution de Montpellier (CNRS/Université de Montpellier/IRD/EPHE) and the Institut de paléoprimatologie et paléontologie humaine : évolution et paléo-environnements (CNRS/Université de Poitiers), is published in the journal Nature Communications.
The ancestry of hippopotamuses is somewhat of an enigma. For a long time, paleontologists thought these semi-aquatic animals, with their unusual morphology (canines and incisors with continual growth, primitive skull and trifoliate tooth-wear pattern), to be related to the suoids, the group that includes pigs and peccaries. But in the 1990s and 2000s, DNA comparisons showed that the hippo’s closest living relatives were the cetaceans (whales, dolphins, etc.), which disagreed with most paleontological interpretations. Moreover, the lack of fossils significantly hindered attempts to uncover the truth about hippo evolution.
New paleontological work by a group of French and Kenyan researchers has now revealed that hippos are not related to suoids but instead descend from another, now extinct, group. The new fossils studied have made it possible to build the first evolutionary scenario that is compatible with both genetic and paleontological data. By analyzing a half-jaw and several teeth discovered at Lokone (in the Lake Turkana basin, Kenya), the French-Kenyan team described a new fossil species, belonging to a new genus, dating back about 28 million years. They named it Epirigenys lokonensis, from the word “Epiri,” which means hippo in the Turkana language, and the site of discovery, Lokone.
By comparing the characteristics of fossil teeth with those of ruminants, suoids, hippos and fossil anthracotheres (an extinct family of ungulates), the scientists reconstructed the relationships between these groups. The results show that Epirigenys forms a kind of evolutionary transition between the oldest known hippo in the fossil record (about 20 million years ago) and an anthracothere lineage. This position in the tree of life is compatible with the genetic data, confirming that the cetaceans are the hippos’ closest living cousins.
This kind of discovery may one day enable scientists to draw a picture of the common ancestor of cetaceans and hippos. Indeed, analysis of Epirigenys (28 million years old) has linked today’s hippos to a lineage of anthracotheres, the oldest of which date back about 40 million years. However, until now, the earliest known ancestor of the hippos was about 20 million years old, while the first fossils of cetaceans are 53 million years old. The time gap between today’s hippos and the oldest cetaceans is thereby filled by nearly 75% according to the present scenario.
Furthermore, this discovery shows the whole history of the African fauna in a new light. Africa was an isolated continent from about 110 to 18 million years ago. Most of the iconic African fauna (lions, leopards, rhinos, buffaloes, giraffes, zebras, etc.) are relatively recent arrivals on the continent (they have been there less than 20 million years). Until now, the same was believed to be true of hippos, but the discovery of Epirigenys demonstrates that their anthracothere ancestors migrated from Asia to Africa some 35 million years ago.
Reference:
Fabrice Lihoreau, Jean-Renaud Boisserie, Fredrick Kyalo Manthi, Stéphane Ducrocq. Hippos stem from the longest sequence of terrestrial cetartiodactyl evolution in Africa. Nature Communications, 2015; 6: 6264 DOI: 10.1038/ncomms7264
Geysers like Old Faithful in Yellowstone National Park erupt periodically because of loops or side-chambers in their underground plumbing, according to recent studies by volcanologists at the University of California, Berkeley.
The key to geysers, said Michael Manga, a UC Berkeley professor of earth and planetary science, is an underground bend or loop that traps steam and then bubbles it out slowly to heat the water column above until it is just short of boiling. Eventually, the steam bubbles trigger sudden boiling from the top of the column, releasing pressure on the water below and allowing it to boil as well. The column essentially boils from the top downward, spewing water and steam hundreds of feet into the air.
“Most geysers appear to have a bubble trap accumulating the steam injected from below, and the release of the steam from the trap gets the geyser ready to erupt,” Manga said. “You can see the water column warming up and warming up until enough water reaches the boiling point that, once the top layer begins to boil, the boiling becomes self-perpetuating.”
The new understanding of geyser mechanics comes from Manga’s studies over the past few years of geysers in Chile and Yellowstone, as well as from an experimental geyser he and his students built in their lab. Made of glass with a bend or loop, it erupts periodically, though, surprisingly, not as regularly as a real geyser they studied in the Atacama desert of Chile, dubbed El Jefe. Over six days of observation, El Jefe erupted every 132 seconds, plus or minus two seconds.
“At many geysers it looks like there is some cavity that is stuck off on the side where steam is accumulating,” Manga explained. “So we said, ‘Let’s put in a cavity and watch how the bubble trap generates eruptions.’ It allows us to get both small eruptions and big eruptions in the lab.”
Manga and his colleagues, including first author Carolina Munoz-Saez, a UC Berkeley graduate student from Chile, report their findings on the Chilean geysers in the February 2015 issue of the Journal of Volcanology and Geothermal Research. A description of the laboratory geyser appeared in the September 2014 issue of the same journal.
Fewer than 1,000 geysers exist around the world — half of them in Yellowstone — and all are located in active or formerly active volcanic areas. Water from the surface trickles downward and gets heated by hot magma, eventually, perhaps decades later, rising back to the surface in the form of hot springs, mud pots and geysers.
Why geysers erupt periodically, some with a regularity you can set a clock by, has piqued the interest of many scientists, but German chemist Robert Bunsen was the first to make pressure and temperature measurements inside a geyser — the Great Geysir in Iceland, after which geysers are named — in 1846. Based on these measurements, he proposed that eruptions start when water starts to boil at the surface, reducing pressure within the superheated water column and allowing boiling to propagate downward from the surface. Pressurized water boils at a higher temperature, so reducing the pressure on overheated water allows it to boil.
Since then, Manga said, a few researchers have stuck video cameras into geysers and seen features that suggest there are underwater chambers or loops that trap steam bubbles. Manga’s measurements in Yellowstone and Chile link the temperature and pressure changes down the water column with the underground plumbing to explain the periodic eruptions.
Geysers key to understanding volcanoes
Manga studies geysers to gain insight into volcanic eruptions, which bear many similarities to geysers but are much harder to study. Manga and his students feed temperature and pressure sensors as deep as 30 feet into geysers — something impossible to do with a volcano — and correlate these with above-ground measurements from seismic sensors and tiltmeters to deduce the sequence of underground events leading to an eruption. They have also been able to submerge video cameras as deep as six feet into geysers to view the submerged conduits and chambers below. He hopes to be able to extrapolate his findings to volcanoes, deducing the internal mechanics from exterior seismic and gravity measurements.
But geysers are fascinating in themselves, he said.
“One of our goals is to figure out why geysers exist — why don’t you just get a hot spring — and what is it that controls how a geyser erupts, including weather and earthquakes,” he said.
In this month’s publication, Manga and his students report on El Jefe (“the chief”), a geyser located at an elevation of about 14,000 feet in the El Tatio geyser field in Chile, where water boils at 86 degrees Celsius (187 degrees Fahrenheit) instead of 100 (212 degrees F). In 2012, they recorded internal and external data during 3,600 eruptions over six days. They compared these to above-ground measurements at Lone Star and other geysers in Yellowstone. Invasive measurements are forbidden in the park.
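That lower boiling point follows directly from the thinner air at altitude. A quick back-of-envelope check, using the standard barometric formula and the Clausius-Clapeyron relation rather than any calculation from the paper, reproduces the quoted value:

```python
# Back-of-envelope check (not from the paper) of why water boils near 86 C
# at El Tatio's ~4,300 m (14,000 ft) elevation: lower air pressure lowers
# the boiling point, via the Clausius-Clapeyron relation.
import math

P0 = 101_325.0      # sea-level pressure, Pa
H = 8_400.0         # rough atmospheric scale height, m
L = 40_660.0        # latent heat of vaporization of water, J/mol (near 100 C)
R = 8.314           # gas constant, J/(mol K)
T0 = 373.15         # boiling point at P0, K

elevation_m = 4_300.0
pressure = P0 * math.exp(-elevation_m / H)                 # ~0.60 atm

# Clausius-Clapeyron: 1/T_boil = 1/T0 - (R/L) * ln(P/P0)
t_boil = 1.0 / (1.0 / T0 - (R / L) * math.log(pressure / P0))
print(f"{t_boil - 273.15:.0f} C")   # ~86 C, matching the value quoted above
```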
They concluded that Bunsen was essentially correct — boiling starts at the top of the superheated water column and propagates downward — but also that it’s the escaped bubbles from trapped steam in the rock conduits below the geyser that heat the water column to the boiling point. As the entire water column boils out of the ground, more than half the volume of stuff emerging is steam, though most of the mass is liquid water, they found. The plume seen from afar is mostly steam condensing into water droplets in the air, Manga said.
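The contrast between steam-dominated volume and liquid-dominated mass comes down to density. Rough ideal-gas arithmetic (not the paper’s numbers; the 60 percent volume figure is only an assumed illustration) shows how both statements can hold at once:

```python
# Rough ideal-gas arithmetic (not from the paper) showing how an erupting
# column can be mostly steam by volume yet mostly liquid water by mass.
R_SPECIFIC = 461.5          # J/(kg K), water vapor
pressure = 60_700.0         # Pa, roughly the ambient pressure at El Tatio
t_boil = 359.0              # K, ~86 C

rho_steam = pressure / (R_SPECIFIC * t_boil)   # ~0.37 kg/m^3
rho_liquid = 965.0                             # kg/m^3 for hot liquid water

steam_volume_fraction = 0.60                   # "more than half the volume" (assumed)
mass_steam = steam_volume_fraction * rho_steam
mass_liquid = (1 - steam_volume_fraction) * rho_liquid
steam_mass_fraction = mass_steam / (mass_steam + mass_liquid)
print(f"{steam_mass_fraction:.4%}")            # well under 1% of the mass
```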
Preplay
In places like Yellowstone, the bubbles that slowly escape from the underground loop cause mini-eruptions called preplay leading up to the major eruption. Eruptions stop when the water column in the geyser cools below the boiling point, and the process repeats. All these underground processes seem to be affected only by the heat source deep below the geyser, because they could find no evidence that the surface temperature affected eruptions.
Manga plans to continue his Yellowstone and Chilean studies — his next trip to Yellowstone is in the fall — to gather more data to help explain the periods of geysers and better understand below-ground processes.
Co-authors with Manga and Munoz-Saez on the February paper are Shaul Hurwitz of the U.S. Geological Survey in Menlo Park, California; Maxwell Rudolph of Portland State University in Oregon; Atsuko Namiki of Hiroshima University in Japan; and professor emeritus Chi-Yuen Wang of UC Berkeley.
The September 2014 paper was co-authored by UC Berkeley undergraduates Esther Adelstein, Aaron Tran, Carolina Muñoz-Saez and researcher Alexander Shteinberg.
The work is supported by the National Science Foundation and the CONICYT program to support Berkeley-Chile collaborations, which is administered by UC Berkeley’s Center for Latin American Studies.
Video:
Volcanologist Michael Manga and student Esther Adelstein use a laboratory geyser they built to explain how geysers like Old Faithful work. (Video by Roxanne Makasdjian and Phil Ebiner, with geyser footage by Eric King and Kristen Fauria)
Reference:
Carolina Munoz-Saez, Michael Manga, Shaul Hurwitz, Maxwell L. Rudolph, Atsuko Namiki, Chi-Yuen Wang. Dynamics within geyser conduits, and sensitivity to environmental perturbations: Insights from a periodic geyser in the El Tatio geyser field, Atacama Desert, Chile. Journal of Volcanology and Geothermal Research, 2015; 292: 41 DOI: 10.1016/j.jvolgeores.2015.01.002
Geographic Resources Analysis Support System, commonly referred to as GRASS GIS, is a Geographic Information System (GIS) used for data management, image processing, graphics production, spatial modelling, and visualization of many types of data. It is Free (Libre) Software/Open Source released under GNU General Public License (GPL) >= V2. GRASS GIS is an official project of the Open Source Geospatial Foundation.
Originally developed by the U.S. Army Construction Engineering Research Laboratories (USA-CERL, 1982-1995, see history of GRASS 1.0-4.2 and 5beta), a branch of the U.S. Army Corps of Engineers, as a tool for land management and environmental planning by the military, GRASS GIS has evolved into a powerful utility with a wide range of applications in many different areas of application and scientific research. GRASS is currently used in academic and commercial settings around the world, as well as by many governmental agencies including NASA, NOAA, USDA, DLR, CSIRO, the National Park Service, the U.S. Census Bureau, USGS, and many environmental consulting companies.
The GRASS Development Team has grown into a multi-national team consisting of developers at numerous locations.
In September 2006, the GRASS Project Steering Committee (PSC) was formed, which is responsible for the overall management of the project. The PSC is especially responsible for granting SVN write access.
GRASS GIS contains over 350 modules to render maps and images on monitor and paper; manipulate raster and vector data, including vector networks; process multispectral image data; and create, manage, and store spatial data. GRASS GIS offers both an intuitive graphical user interface and command line syntax for ease of operation. GRASS GIS can interface with printers, plotters, digitizers, and databases to develop new data as well as manage existing data.
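As a minimal sketch of driving those modules programmatically (assuming a running GRASS 7 session, the grass.script Python library on the path, and an existing raster named 'elevation' in the current MAPSET):

```python
# A minimal sketch of driving GRASS modules from Python, assuming a GRASS 7
# session is already running and its scripting library is importable.
# Module and parameter names follow GRASS 7 conventions.
import grass.script as gs

# Set the computational region to match an existing elevation raster.
gs.run_command("g.region", raster="elevation")

# Run a raster module: derive slope and aspect maps from the DEM.
gs.run_command("r.slope.aspect", elevation="elevation",
               slope="slope", aspect="aspect", overwrite=True)

# Modules that print text can be captured or parsed directly.
stats = gs.parse_command("r.univar", map="slope", flags="g")
print(stats["mean"], stats["max"])
```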
GRASS GIS and support for teams
GRASS GIS supports workgroups through its LOCATION/MAPSET concept, which can be set up to share data and the GRASS installation itself over NFS (Network File System) or CIFS. By keeping LOCATIONs with their underlying MAPSETs on a central server, a team can work simultaneously in the same project database.
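For example, a script running inside such a shared installation can confirm which database, LOCATION and MAPSET it is using and which other team MAPSETs are visible. This is a minimal sketch assuming a running GRASS 7 session with grass.script available:

```python
# Quick check of the shared project layout from Python: which database,
# LOCATION and MAPSET the session is using, and which other MAPSETs in
# the same LOCATION are available to the team.
import grass.script as gs

env = gs.gisenv()
print(env["GISDBASE"], env["LOCATION_NAME"], env["MAPSET"])

# List every MAPSET in the current LOCATION (the -l flag of g.mapsets).
print(gs.read_command("g.mapsets", flags="l"))
```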
GRASS GIS capabilities
Raster analysis: Automatic raster line and area to vector conversion, Buffering of line structures, Cell and profile data query, Color table modifications, Conversion to vector and point data format, Correlation / covariance analysis, Expert system analysis, Map algebra (map calculator; see the Python sketch after this list), Interpolation for missing values, Neighbourhood matrix analysis, Raster overlay with or without weight, Reclassification of cell labels, Resampling (resolution), Rescaling of cell values, Statistical cell analysis, Surface generation from vector lines
3D-Raster (voxel) analysis: 3D data import and export, 3D masks, 3D map algebra, 3D interpolation (IDW, Regularised Splines with Tension), 3D Visualization (isosurfaces), Interface to Paraview and POVray visualization tools
Vector analysis: Contour generation from raster surfaces (IDW, Splines algorithm), Conversion to raster and point data format, Digitizing (scanned raster image) with mouse, Reclassification of vector labels, Superpositioning of vector layers
Point data analysis: Delaunay triangulation, Surface interpolation from spot heights, Thiessen polygons, Topographic analysis (curvature, slope, aspect), LiDAR
Image processing: Support for aerial and UAV images, satellite data (optical, radar, thermal), Canonical component analysis (CCA), Color composite generation, Edge detection, Frequency filtering (Fourier, convolution matrices), Fourier and inverse Fourier transformation, Histogram stretching, IHS transformation to RGB, Image rectification (affine and polynomial transformations on raster and vector targets), Ortho photo rectification, Principal component analysis (PCA), Radiometric corrections (Fourier), Resampling, Resolution enhancement (with RGB/IHS), RGB to IHS transformation, Texture oriented classification (sequential maximum a posteriori classification), Shape detection, Supervised classification (training areas, maximum likelihood classification), Unsupervised classification (minimum distance clustering, maximum likelihood classification)
DTM-Analysis: Contour generation, Cost / path analysis, Slope / aspect analysis, Surface generation from spot heights or contours
Geocoding: Geocoding of raster and vector maps including (LiDAR) point clouds
Visualization: 3D surfaces with 3D query (NVIZ), Color assignments, Histogram presentation, Map overlay, Point data maps, Raster maps, Vector maps, Zoom / unzoom function
Map creation: Image maps, Postscript maps, HTML maps
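As a small illustration of the map algebra entry in the raster list above, the sketch below drives r.mapcalc from the same hypothetical Python session; the raster names and class code are invented for the example.

```python
# Map algebra (r.mapcalc) from Python, continuing the sketch above:
# combine hypothetical "elevation" and "landuse" rasters into a new map
# flagging forested cells above 1,500 m.
import grass.script as gs

gs.run_command(
    "r.mapcalc",
    expression="high_forest = if(elevation > 1500 && landuse == 5, 1, null())",
    overwrite=True,
)

# Reclassification and resampling, also listed above, are separate modules
# (r.reclass, r.resamp.stats, ...) driven the same way.
```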
The GRASS GIS 6 release introduced a new topological 2D/3D vector engine and support for vector network analysis. Attributes are managed in a SQL-based DBMS (PostgreSQL, MySQL, SQLite, ODBC, …), by default in DBF format. A new display manager has been implemented. The NVIZ visualization tool was enhanced to display 3D vector data and voxel volumes. Messages are partially translated (i18N) with support for FreeType fonts, including multibyte Asian characters. New LOCATIONs can be auto-generated, e.g. by EPSG code number, using a location wizard. GRASS GIS is integrated with the GDAL/OGR libraries to support an extensive range of raster and vector formats, including OGC-conformal Simple Features.
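Attribute queries go through that SQL layer. The sketch below pulls rows from a hypothetical 'roads' vector map with v.db.select; parameter names follow GRASS 7 conventions and the map and column names are invented.

```python
# Attribute queries go through the SQL layer; here v.db.select pulls rows
# from a hypothetical "roads" vector map using a WHERE clause.
import grass.script as gs

csv_text = gs.read_command(
    "v.db.select",
    map="roads",
    columns="cat,label,length_km",
    where="length_km > 10",
    separator=",",
)
for line in csv_text.splitlines()[1:]:   # skip the header row
    cat, label, length_km = line.split(",")
    print(cat, label, length_km)
```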
About GRASS GIS 7
GRASS GIS 7 is under development, with a first release in preparation (snapshots are already available). It offers large data support, an improved topological 2D/3D vector engine and much improved vector network analysis. Attributes are managed by default in SQLite format. The display manager has been improved for usability. The NVIZ visualization tool was completely rewritten. Image processing has also been improved. See more details at New Features.
Inoceramus is a genus of extinct pelecypods (clams) found as fossils in Jurassic to Cretaceous rocks (laid down between 199.6 million and 65.5 million years ago). Especially important and widespread in Cretaceous rocks, Inoceramus had a distinctive shell: large, thick, and wrinkled in a concentric fashion, making identification relatively simple. The many pits at the dorsal region were the anchoring points for the ligaments that closed the shell.
Inoceramids: Some species of clams (bivalves) grew to giant size in the late Cretaceous, attaining diameters of four feet or more. In cross section, these shells are composed of prismatic (calcitic) crystals. The inner, nacreous (Mother of Pearl) layer of the shell (composed of aragonite) was usually dissolved during fossilization and the outer portion is usually covered with colonies of oysters and other invertebrates. Pearls are occasionally found pressed into the Inoceramid shell. According to Sowerby 1823, Inoceramus means “fibrous shell,” describing the prisms that are visible on the edge of shell fragments.
Inoceramus cuvieri was the first formally described species of Inoceramus (Sowerby, 1814). Several species are found in the Late Cretaceous rocks of Kansas. At times in the Western Interior Sea, they provided shelter for various small fishes and at least one species of eel. They also produced pearls.
Inoceramid shells were discovered in Great Britain and France in the late 1700s and early 1800s, but they were seldom found complete. James Parkinson (1811b) wrote one of the first descriptions of inoceramid shell fragments, in this case Inoceramus cuvieri from the English chalk:
“Fragments of thick shell of a fibrous structure: The doubts expressed respecting the nature of this shell, and the observations made with regard to it, offer another strong point of agreement between the shells of the two strata. The shell here alluded to is most probably that represented Org. Rem. vol. III. pl. V. fig. 3; the structure of which agrees exactly with that mentioned as found in the French stratum of chalk. That shell is however described as being of a tubular form; it is therefore right to observe, that fossil pinnae do sometimes possess this peculiar structure.”
The clam had a thick shell paved with “prisms” of calcite deposited perpendicular to the surface, which gave it a pearly luster in life. Most species have prominent growth lines which appear as raised semicircles concentric to the growing edge of the shell. Paleontologists suggest that the giant size of some species was an adaptation for life in the murky bottom waters, with a correspondingly large gill area that would have allowed the animal to survive in oxygen-deficient waters.
Distribution
Species of Inoceramus had a worldwide distribution during the Cretaceous period. Many examples are found in the Pierre Shale of the Western Interior Seaway in North America. Inoceramus can also be found abundantly in the Cretaceous Gault Clay that underlies London. Other locations for this fossil include Vancouver Island, British Columbia, Canada; Texas, Tennessee, Kansas, California and Alaska, USA; Spain, France, and Germany.
A new study has found that La Niña-like conditions in the Pacific Ocean off the coast of Panamá were closely associated with an abrupt shutdown in coral reef growth that lasted 2,500 years. The study suggests that similar changes in climate could cause coral reefs to collapse again in the future.
The study found cooler sea temperatures, greater precipitation and stronger upwelling — all indicators of La Niña-like conditions at the study site in Panama — during a period when coral reef accretion stopped in this region around 4,100 years ago. For the study, researchers traveled to Panama to collect a reef core, and then used the corals within the core to reconstruct what the environment was like as far back as 6,750 years ago.
“Investigating the long-term history of reefs and their geochemistry is something that is difficult to do in many places, so this was a unique opportunity to look at the relationship between reef growth and environment,” said Kim Cobb, an associate professor in the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. “This study shows that there appears to have been environmental triggers for this well-documented reef collapse in Panama.”
The study was sponsored by the Geological Society of America, the American Museum of Natural History and the Smithsonian Institution’s Marine Science Network. The study is scheduled for publication on February 23 in the journal Nature Climate Change. The study was a collaboration with the Florida Institute of Technology, with Cobb’s lab providing an expertise in fossil coral analysis.
Climate change is the leading cause of coral-reef degradation. The global coral reef landscape is now characterized by declining coral cover, reduced growth and calcification, and slowdowns in reef accretion. The new study provides data to assist scientists in understanding how changes in the environment trigger long-term changes in coral reef growth and ecosystem function, which is a critical challenge to coral-reef conservation.
“Temperature was a key cause of reef collapse and modern temperatures are now within several degrees of the maximum these reefs experienced over their 6,750 year history,” said Lauren Toth, the study’s lead author, who was a graduate student at Florida Tech during the study. “It’s possible that anthropogenic climate change may once again be pushing these reefs towards another regional collapse.”
For the study, the research team analyzed a 6,750-year-old coral core from Pacific Panamá. The team then reconstructed the coral’s past functions, such as growth and accretion (accumulation of layers of coral), and compared that to surrounding environmental conditions before, during and after the 2,500-year hiatus in vertical accretion.
“We saw evidence for a different climate regime during that time period,” Cobb said. “The geochemical signals were consistent with a period that is very cool and very wet, with very strong upwelling, which is more like a modern day La Niña event in this part of the Pacific.”
In Pacific Panamá, La Niña-like periods are characterized by a cold, wet climate with strong seasonal upwelling. Because data from the site are limited, the researchers cannot quantify the intensity of La Niña events during this interval, but the record documents that La Niña-like conditions were present at the site at the time.
“These conditions would have lasted for quite an extended time, which suggests that the reef was quite sensitive to prolonged change in environmental conditions,” Cobb said. “So sensitive, in fact, that it stopped accreting over that period.”
Future climate change, similar to the changes during the hiatus in coral growth, could cause coral reefs to behave similarly, the study authors suggest, leading to another shutdown in reef development in the tropical eastern Pacific.
“We are in the midst of a major environmental change that will continue to stress corals over the coming decades, so the lesson from this study is that there are these systems such as coral reefs that are sensitive to environmental change and can go through this kind of wholesale collapse in response to these environmental changes,” Cobb said.
Future work will involve expanding the study to include additional locations throughout the tropical Pacific.
“A broad-scale perspective on long-term reef growth and environmental variability would allow us to better characterize the environmental thresholds leading to reef collapse and the conditions that facilitate survival,” Toth said. “A better understanding of the controls on reef development in the past will allow us to make better predictions about which reefs may be most vulnerable to climate change in the future.”
Reference:
Lauren T. Toth, Richard B. Aronson, Kim M. Cobb, Hai Cheng, R. Lawrence Edwards, Pamela R. Grothe, Hussein R. Sayani. Climatic and biotic thresholds of coral-reef shutdown. Nature Climate Change, 2015; DOI: 10.1038/nclimate2541
A team of scientists led by Danish geologist Nicolaj Krog Larsen has quantified how the Greenland Ice Sheet reacted to a warm period 8,000-5,000 years ago, when temperatures were 2-4 °C warmer than at present. Their results have just been published in the scientific journal Geology, and they are important because we are rapidly closing in on similar temperatures.
While the world is preparing for rising global sea level, a group of scientists led by Dr. Nicolaj Krog Larsen of Aarhus University in Denmark and Professor Kurt Kjær of the Natural History Museum of Denmark ventured to Greenland to investigate how fast the Greenland Ice Sheet reacted to past warming.
With hard work and high spirits, the scientists spent six summers coring lakes in the ice-free land surrounding the ice sheet. The lakes act as a valuable archive because they store glacial meltwater sediments during periods when the ice advanced. That makes it possible to study and precisely date periods when the ice was smaller than it is at present.
“It has been hard work getting all these lake cores home, but it has definitely been worth the effort. Finally we are able to describe the ice sheet’s response to earlier warm periods,” says Dr. Nicolaj Krog Larsen of Aarhus University, Denmark.
The size of the Greenland Ice Sheet has varied since the Ice Age ended 11,500 years ago, and scientists have long sought to investigate its response to the warmest period, 8,000-5,000 years ago, when temperatures were 2-4 °C warmer than present.
“The glaciers always leave evidence about their presence in the landscape. So far the problem has just been that the evidence is removed by new glacial advances. That is why it is unique that we are now able to quantify the mass loss during past warming by combining the lake sediment records with state-of-the-art modelling,” says Professor Kurt Kjær, Natural History Museum of Denmark.
16 cm of global sea-level rise from Greenland
Their results show that the ice had its smallest extent precisely during the warming 8,000-5,000 years ago. With that knowledge in hand, they were able to review all available ice sheet models and choose the ones that best reproduced the reality of that past warming.
The best models show that during this period the ice sheet lost mass at a rate of about 100 gigatonnes (Gt) per year for several thousand years, and that it delivered the equivalent of 16 cm of global sea-level rise when temperatures were 2-4 °C warmer. For comparison, the mass loss over the last 25 years has varied between 0 and 400 Gt per year, and the Arctic is expected to warm by 2-7 °C by the year 2100.
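To put those figures in context, the sketch below converts gigatonnes of meltwater into millimetres of global mean sea-level equivalent. It is a minimal back-of-envelope illustration, not part of the study itself, and it assumes the standard rough conversion of about 1 km³ of water per gigatonne and a global ocean area of about 360 million km² (so roughly 360 Gt of ice per millimetre of sea-level rise).

```python
# Back-of-envelope conversion between ice-sheet mass loss and global mean
# sea-level equivalent. Assumptions (mine, not from the paper):
#   - 1 Gt of meltwater occupies about 1 km^3 (1e9 m^3),
#   - the global ocean surface area is about 3.61e8 km^2 (3.61e14 m^2).

OCEAN_AREA_M2 = 3.61e14  # approximate global ocean surface area, m^2


def gt_to_mm_sle(gigatonnes: float) -> float:
    """Convert a meltwater mass in Gt to mm of global mean sea-level rise."""
    volume_m3 = gigatonnes * 1e9            # 1 Gt of water ~ 1e9 m^3
    return volume_m3 / OCEAN_AREA_M2 * 1e3  # metres -> millimetres


# Peak Holocene loss rate quoted above: ~100 Gt per year
print(f"100 Gt/yr ~ {gt_to_mm_sle(100):.2f} mm/yr of sea-level rise")       # ~0.28 mm/yr

# Upper end of the recent (last 25 years) range quoted above: 400 Gt per year
print(f"400 Gt/yr ~ {gt_to_mm_sle(400):.2f} mm/yr of sea-level rise")       # ~1.11 mm/yr

# Ice mass implied by the reported 16 cm (160 mm) total contribution
print(f"16 cm of sea-level rise ~ {160 / gt_to_mm_sle(1):,.0f} Gt of ice")  # ~58,000 Gt
```

Under these assumptions, the quoted peak Holocene rate of roughly 100 Gt per year corresponds to about 0.3 mm of global sea-level rise per year, while the 16 cm total implies on the order of 58,000 Gt of ice lost over the warm period.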
Reference:
Nicolaj Krog Larsen et al. The response of the southern Greenland ice sheet to the Holocene thermal maximum. Geology, first published online February 18, 2015; DOI: 10.1130/G36476.1
The shell of this ammonite is quite conventional in appearance: planispiral (coiled in a flat spiral), symmetrical, and serpenticone (shaped like a coiled snake, with little overlap between successive whorls).
At every growth stage the whorl section is sub-square with rounded corners; the umbilicus is very broad but shallow. The ornamentation consists of numerous sharp, slightly prorsiradiate ribs (inclined forward, i.e. toward the aperture). Near the venter the ribs bifurcate and cross the ventral region without interruption (there are no keels or ventral furrows). In certain species the ornamentation becomes simpler in the mature stage (and in macroconchs generally), and the ribs may become sparser, simple and rounded. Juvenile forms show irregular constrictions along the spiral of the shell. The suture line (i.e. the line along which the septa meet the shell wall) is of the ammonitic type, rather complex, with many small but strongly frilled elements. The shell diameter of adult forms averages about 7 centimeters (it varies with the species), although specimens of more than 20 centimeters are found.
One of the best-known species is Perisphinctes plicatilis.
Habitat
Perisphinctes is thought to have lived in shallow water, on the inner part of the continental shelf and in interior shelf seas, like all forms of the same family (Perisphinctidae). It was a cosmopolitan form, widespread in warm-temperate waters worldwide.
Considerable debate surrounds the migration of human populations out of Africa. Two predominant hypotheses about its timing differ in the emphasis they place on the Arabian interior and its changing climate. In one scenario, human populations expanded rapidly from Africa to southern Asia via the coastlines of Arabia approximately 50,000 to 60,000 years ago. Another model suggests that dispersal into the Arabian interior began much earlier (approximately 75,000 to 130,000 years ago) and occurred in multiple phases, when increased rainfall provided sufficient freshwater to support expanding populations.
Ash Parton and colleagues fall into the second camp, writing, “The dispersal of early human populations out of Africa is dynamically linked with the changing climate and environmental conditions of Arabia. Although now arid, at times the vast Arabian deserts were transformed into landscapes littered with freshwater lakes and active river systems. Such episodes of dramatically increased rainfall were the result of the intensification and northward displacement of the Indian Ocean Monsoon, which caused rainfall to reach across much of the Arabian Peninsula.”
Parton and colleagues present an alluvial fan aggradation record from southeast Arabia spanning approximately the past 160,000 years. Situated along the proposed southern dispersal route, the Al Sibetah alluvial fan sequence provides a uniquely sensitive record of landscape change in southeast Arabia. It is to date the most comprehensive terrestrial archive from the Arabian Peninsula, and it provides evidence for multiple humid episodes during both glacial and interglacial periods.
Evidence from the Al Sibetah alluvial fan sequence indicates that during insolation maxima, increased monsoon rainfall led to the widespread activation of drainage systems and grassland development throughout regions that were important for the dispersal of early human populations.
Previously, the timing of episodes of increased humidity was largely linked to global interglacials, with the climate of Arabia during the intervening glacial periods believed to be too arid to support human populations. Parton and colleagues suggest, however, that periods of increased rainfall were not driven by mid-high latitude deglaciations every ~100,000 years, but by periods of maximum incoming solar radiation every ~23,000 years.
They write, “The occurrence of humid periods previously identified in lacustrine or speleothem records highlights the complexity and heterogeneity of the Arabian paleoclimate, and suggests that interior migration pathways through the Arabian Peninsula may have been viable approximately every 23,000 years since at least marine isotope stage (MIS) 6,” about 191 thousand years ago.
Reference:
A. Parton, A. R. Farrant, M. J. Leng, M. W. Telfer, H. S. Groucutt, M. D. Petraglia, A. G. Parker. Alluvial fan records from southeast Arabia reveal multiple windows for human dispersal. Geology, 2015; DOI: 10.1130/G36401.1
A paper published today in Science provides a case for increasing transparency and data collection to enable strategies for mitigating the effects of human-induced earthquakes caused by wastewater injection associated with oil and gas production in the United States.
The paper, the result of a series of workshops led by scientists at the U.S. Geological Survey in collaboration with the University of Colorado, the Oklahoma Geological Survey and Lawrence Berkeley National Laboratory, suggests that it is possible to reduce the hazard of induced seismicity through management of injection activities.
Large areas of the United States that used to experience few or no earthquakes have, in recent years, experienced a remarkable increase in earthquake activity that has caused considerable public concern as well as damage to structures. This rise in seismic activity, especially in the central United States, is not the result of natural processes.
Instead, the increased seismicity is due to fluid injection associated with new technologies that enable the extraction of oil and gas from previously unproductive reservoirs. These modern extraction techniques result in large quantities of wastewater produced along with the oil and gas. The disposal of this wastewater by deep injection occasionally results in earthquakes that are large enough to be felt, and sometimes damaging. Deep injection of wastewater is the primary cause of the dramatic rise in detected earthquakes and the corresponding increase in seismic hazard in the central U.S.
“The science of induced earthquakes is ready for application, and a main goal of our study was to motivate more cooperation among the stakeholders—including the energy resources industry, government agencies, the earth science community, and the public at large—for the common purpose of reducing the consequences of earthquakes induced by fluid injection,” said coauthor Dr. William Ellsworth, a USGS geophysicist.
The USGS is currently collaborating with interested stakeholders to develop a hazard model for induced earthquakes in the U.S. that can be updated frequently in response to changing trends in energy production.
“In addition to determining the hazard from induced earthquakes, there are other questions that need to be answered in the course of coping with fluid-induced seismicity,” said lead author of the study, USGS geophysicist Dr. Art McGarr. “In contrast to natural earthquake hazard, over which humans have no control, the hazard from induced seismicity can be reduced. Improved seismic networks and public access to fluid injection data will allow us to detect induced earthquake problems at an early stage, when seismic events are typically very small, so as to avoid larger and potentially more damaging earthquakes later on.”
“It is important that all information of this sort be publicly accessible, because only in this way can it be used to provide the timely guidance needed to reduce the hazard and consequences of induced earthquakes,” said USGS hydrologist and co-author of the paper, Dr. Barbara Bekins.
Reference:
A. McGarr et al. Coping with earthquakes induced by fluid injection. Science, 20 February 2015; DOI: 10.1126/science.aaa0494