A rash of earthquakes in southern Colorado and northern New Mexico recorded between 2008 and 2010 was likely due to fluids pumped deep underground during oil and gas wastewater disposal, says a new University of Colorado Boulder study.
The study, which took place in the 2,200-square-mile Raton Basin along the central Colorado-northern New Mexico border, found more than 1,800 earthquakes up to magnitude 4.3 during that period, linking most to wastewater injection well activity. Such wells are used to pump water back into the ground after it has been extracted during the collection of methane gas from subterranean coal beds.
One key piece of the new study was the use of hydrogeological modeling of pore pressure in what is called the “basement rock” of the Raton Basin – rock several miles deep that underlies the oldest stratified layers. Pore pressure is the fluid pressure within rock fractures and rock pores.
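The press release does not reproduce the hydrogeological model itself, but the underlying physics can be sketched in a few lines. Below is a minimal illustration, assuming a single well injecting at a constant rate into a uniform confined layer and using textbook parameter values rather than the study's; it applies the classical Theis solution, in which injected fluid raises pore pressure at distances that grow with time:

```python
# Sketch: pore-pressure rise at distance r from a constant-rate
# injection well, via the classical Theis solution for a confined
# aquifer. All parameter values are illustrative, not the study's.
import numpy as np
from scipy.special import exp1  # exponential integral = Theis well function

RHO_G = 9.81 * 1000.0   # converts metres of head to pascals for water

def pressure_rise_pa(r_m, t_s, Q=0.01, T=1e-5, S=1e-4):
    """Pore-pressure increase (Pa) at radius r_m after injecting for
    t_s seconds. Q: rate (m^3/s), T: transmissivity (m^2/s),
    S: storativity (dimensionless)."""
    u = r_m ** 2 * S / (4.0 * T * t_s)
    head_rise_m = Q / (4.0 * np.pi * T) * exp1(u)
    return RHO_G * head_rise_m

ten_years = 10 * 365.25 * 86400.0
print(f"{pressure_rise_pa(2000.0, ten_years) / 1e6:.2f} MPa at 2 km")
```

Even a modest rate sustained for a decade can, in this idealized setting, raise pore pressure at kilometre distances by far more than the roughly 0.01 to 0.1 MPa often cited as enough to trigger slip on a critically stressed fault, which is the qualitative point the study makes quantitatively.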
While two previous studies have linked earthquakes in the Raton Basin to wastewater injection wells, this is the first to show that elevated pore pressures deep underground are well above earthquake-triggering thresholds, said CU Boulder doctoral student Jenny Nakai, lead study author. The northern edges of the Raton Basin border Trinidad, Colorado, and Raton, New Mexico.
“We have shown for the first time a plausible causative mechanism for these earthquakes,” said Nakai of the Department of Geological Sciences. “The spatial patterns of seismicity we observed are reflected in the distribution of wastewater injection and our modeled pore pressure change.”
A paper on the study was published in the Journal of Geophysical Research: Solid Earth. Co-authors on the study include CU Boulder Professors Anne Sheehan and Shemin Ge of geological sciences, former CU Boulder doctoral student Matthew Weingarten, now a postdoctoral fellow at Stanford University, and Professor Susan Bilek of the New Mexico Institute of Mining and Technology in Socorro.
The Raton Basin earthquakes between 2008 and 2010 were measured by seismometers from the EarthScope USArray Transportable Array, a program funded by the National Science Foundation (NSF) to measure earthquakes and map Earth’s interior across the country. The team also used seismic data from the Colorado Rockies Experiment and Seismic Transects (CREST), also funded by NSF.
As part of the research, the team simulated in 3-D a 12-mile-long fault inferred from seismicity data in the Vermejo Park region of the Raton Basin. The seismicity patterns also suggest a second, smaller fault in the Raton Basin that was active from 2008-2010.
Nakai said the research team did not look at the relationship between the Raton Basin earthquakes and hydraulic fracturing, or fracking.
The new study also showed the number of earthquakes in the Raton Basin correlates with the cumulative volume of wastewater injected in wells up to about 9 miles away from the individual earthquakes. There are 28 “Class II” wastewater disposal wells – wells that are used to dispose of waste fluids associated with oil and natural gas production – in the Raton Basin, and at least 200 million barrels of wastewater have been injected underground there by the oil and gas industry since 1994.
“Basement rock is typically more brittle and fractured than the rock layers above it,” said Sheehan, also a fellow at CU’s Cooperative Institute for Research in Environmental Sciences. “When pore pressure increases in basement rock, it can cause earthquakes.”
There is still a lot to learn about the Raton Basin earthquakes, said the CU Boulder researchers. While the oil and gas industry has monitored seismic activity with seismometers in the Raton Basin for years and mapped some sub-surface faults, such data are not made available to researchers or the public.
The earthquake patterns in the Raton Basin are similar to other U.S. regions that have shown “induced seismicity” likely caused by wastewater injection wells, said Nakai. Previous studies involving CU Boulder showed that injection wells likely caused earthquakes near Greeley, Colorado, in Oklahoma and in the mid-continent region of the United States in recent years.
Southern California has the highest earthquake risk of any region in the U.S., but exactly how risky and where the greatest risks lie remains an open question.
Earthquakes occur infrequently and depend on complex geological factors deep underground, making them hard to predict reliably. For that reason, forecasting earthquakes relies on massive computer models and multifaceted simulations, which recreate the rock physics and regional geology and require big supercomputers to execute.
In June 2017, a team of researchers from the U.S. Geological Survey and the Southern California Earthquake Center (SCEC) released a major paper in Seismological Research Letters that summarized the scientific and hazard results of one of the world’s biggest and most well-known earthquake simulation projects: The Uniform California Earthquake Rupture Forecast (UCERF3).
The results relied on computations performed on the original Stampede supercomputer at the Texas Advanced Computing Center, on resources at the University of Southern California Center for High-Performance Computing, and on the newly deployed Stampede2 supercomputer, to which the research team had early access. (Stampede and Stampede2 are supported by grants from the National Science Foundation.)
“High-performance computing on TACC’s Stampede system, and during the early user period of Stampede2, allowed us to create what is, by all measures, the most advanced earthquake forecast in the world,” said Thomas H. Jordan, director of the Southern California Earthquake Center and one of the lead authors on the paper.
The new forecast is the first fault-based model to provide self-consistent rupture probabilities from the very short-term — over a period of less than an hour — to the very long term — up to more than a century. It is also the first model capable of evaluating the short-term hazards that result from multi-event sequences of complex faulting.
To derive the model, the researchers ran 250,000 rupture scenarios for the state of California, vastly more than the roughly 8,000 ruptures simulated in the previous model.
Among its novel findings, the researchers’ simulations showed that in the week following a magnitude 7.0 earthquake, the likelihood of another magnitude 7.0 quake would be up to 300 times greater than the week beforehand. This scenario of ‘cascading’ ruptures was demonstrated in the 2002 magnitude 7.9 Denali, Alaska, and the 2016 magnitude 7.8 Kaikoura, New Zealand earthquakes, according to David Jacobson and Ross Stein of Temblor.
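The 300-fold figure comes from UCERF3's full computation, but the order of magnitude can be illustrated with a standard aftershock-rate model. The sketch below uses a generic Reasenberg-Jones parameterization with illustrative, California-style values, not the UCERF3 parameters, to compare the chance of a second magnitude-7 event in the following week against a typical background week:

```python
# Sketch: probability gain for a second M>=7.0 event in the week after
# a M7.0 mainshock, using a generic Reasenberg-Jones aftershock rate.
# Parameter values are illustrative, not those used in UCERF3.
from scipy.integrate import quad

a, b, p, c = -1.67, 1.0, 1.08, 0.05   # generic aftershock parameters

def aftershock_rate(t_days, mainshock_mag=7.0, m_min=7.0):
    """Expected M>=m_min aftershocks per day, t_days after the shock."""
    return 10.0 ** (a + b * (mainshock_mag - m_min)) / (t_days + c) ** p

# Expected number of M>=7 aftershocks in the first 7 days:
n_week, _ = quad(aftershock_rate, 0.0, 7.0)

# Background: assume roughly one regional M>=7 per 20 years.
background_week = 1.0 / (20.0 * 52.0)

print(f"probability gain ~ {n_week / background_week:.0f}x")  # O(100x)
```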
The dramatic increase in the likelihood of powerful aftershocks is due to the inclusion of a new class of models that assess short-term changes in seismic hazard based on what is known about earthquake clustering and aftershock excitations. These factors have never been used in a comprehensive, statewide model like this one.
The current model also takes into account the likelihood of ruptures jumping from one fault to a nearby one, which has been observed in California’s highly interconnected fault system.
Based on these and other new factors, the new model increases the likelihood of powerful aftershocks but downgrades the predicted frequency of earthquakes between magnitude 6.5 and 7.0, a range in which earlier models had not matched historical records.
Importantly, UCERF3 can be updated with observed seismicity — real-time data based on earthquakes in action — to capture the static or dynamic triggering effects that play out during a particular sequence of events. The framework is adaptable to many other continental fault systems, and the short-term component might be applicable to the forecasting of minor earthquakes and tremors that are caused by human activity.
The impact of such an improved model goes beyond the fundamental scientific improvement it represents. It has the potential to impact building codes, insurance rates, and the state’s response to a powerful earthquake.
Said Jordan, “The U.S. Geological Survey has included UCERF3 as the California component of the National Seismic Hazard Model, and the model is being evaluated for use in operational earthquake forecasting on timescales from hours to decades.”
ESTIMATING THE COST TO REBUILD
In addition to forecasting the likelihood of an earthquake, models like UCERF3 help predict the associated costs of earthquakes in the region. In recent months, the researchers used UCERF3 and Stampede2 to create a prototype operational loss model, which they described in a paper posted online to Earthquake Spectra in August.
The model estimates the statewide financial losses to the region (the costs to repair buildings and other damages) caused by an earthquake and its aftershocks. The risk metric is based on a vulnerability function and the total replacement cost of asset types in a given census tract.
The model found that the expected loss per year, averaged over many years, would be $4.0 billion statewide. More importantly, the model was able to quantify how expected losses change with time due to recent seismic activity. For example, the expected losses in the year following a magnitude 7.1 main shock spike to $24 billion due to potentially damaging aftershocks, a factor of six greater than during “normal” times.
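The paper's loss engine is far more detailed, but the core risk metric, expected annual loss, is simply a rate-weighted sum over rupture scenarios. Here is a toy version with invented scenario rates and losses rather than UCERF3 outputs:

```python
# Sketch of the expected-annual-loss idea: sum over rupture scenarios
# of (annual rate) x (mean loss given that scenario), where each
# scenario loss comes from vulnerability functions applied to the
# replacement cost of assets in each census tract. The numbers below
# are invented for illustration.
scenarios = [
    # (annual occurrence rate, mean statewide loss in $ billions)
    (0.050, 10.0),   # moderate, relatively frequent event
    (0.010, 150.0),  # major event
    (0.002, 400.0),  # rare great earthquake
]

eal = sum(rate * loss for rate, loss in scenarios)
print(f"expected annual loss ~ ${eal:.1f}B")  # $2.8B with these numbers
```

Time dependence enters because the scenario rates themselves are updated after a big event: with aftershock-boosted rates plugged in, the same sum produces the kind of sixfold spike described above.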
Being able to quantify such fluctuations will enable financial institutions, such as earthquake insurance providers, to adjust their business decisions accordingly.
“It’s all about providing tools that will help make society more resilient to damaging earthquake sequences,” says Ned Field of the USGS, another lead author of the two studies.
Though there’s a great deal of uncertainty in both the seismicity and the loss estimates, the model is an important step toward quantifying earthquake risk and potential devastation in the region, thereby helping decision-makers determine whether and how to respond.
Reference:
A Synoptic View of the Third Uniform California Earthquake Rupture Forecast (UCERF3). Seismological Research Letters, 2017. DOI: 10.1785/0220170045
The Earth is 4.565 billion years old, give or take a few million years. How do scientists know that? Since there’s no “established in” plaque stuck in a cliff somewhere, geologists deduced the age of the Earth thanks to a handful of radioactive elements.
With radiometric dating, scientists can put an age on really old rocks — and even good old Mother Earth. For the 30th anniversary of National Chemistry Week, this edition of Reactions describes how scientists date rocks.
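The video explains the chemistry; the arithmetic behind it is compact enough to show here. This is a minimal sketch assuming a closed system with no initial daughter product; the half-life is the standard uranium-238 value, while the measured ratio is invented for illustration:

```python
# Sketch: the decay arithmetic behind radiometric dating. If a mineral
# starts with parent atoms only, its age follows from the measured
# daughter-to-parent ratio D/P:  t = ln(1 + D/P) / lambda.
import math

HALF_LIFE_U238_YR = 4.468e9                 # years (standard value)
LAMBDA = math.log(2) / HALF_LIFE_U238_YR    # decay constant, 1/year

def age_years(daughter_over_parent):
    """Age of a closed system with no initial daughter product."""
    return math.log(1.0 + daughter_over_parent) / LAMBDA

# A measured Pb-206/U-238 ratio of about 1.03 gives roughly the
# age of the Earth:
print(f"{age_years(1.03) / 1e9:.2f} billion years")
```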
Nitrogen is one of the most enigmatic elements in the Earth system. No matter where in the world scientists take measurements, in the atmosphere or in solid rock, they run into the “missing nitrogen” problem: compared with other planets, far too little nitrogen is found. Felix Kaminsky of KM Diamond Exploration, Canada, and Richard Wirth of the GFZ section Chemistry and Physics of Earth Materials have now identified a “witness” from the deep that can help unravel the mystery.
At 78 percent, nitrogen is the main component of Earth’s air, and it is also a key component of life. A comparison with other planets, however, suggests that a much larger amount should be found on Earth. According to recent estimates, up to 90 percent of the expected nitrogen is missing. Where did it go? Existing hypotheses propose that large amounts of nitrogen were degassed during the formation of the Earth or following a meteor impact. Another hypothesis holds that large amounts of nitrogen reside in the Earth’s interior, in the mantle or core. Since measuring instruments cannot reach those depths, these have so far remained no more than assumptions.
Diamonds from northwestern Brazil now provide the crucial clue. At Rio Soriso, volcanic vents known as kimberlite pipes broke through the Earth’s crust and carried diamonds up to the surface. Felix Kaminsky and Richard Wirth investigated the molecular composition of inclusions within these diamonds in detail and published their results in the scientific journal American Mineralogist. Wirth: “Diamonds are formed under high pressure and high temperatures within the Earth’s mantle and are transported to the Earth’s surface by volcanic activity. The chemical composition of diamonds and of the inclusions within them is therefore a reflection of the composition of the Earth’s interior.”
Diamonds from kimberlite pipes also occur at other places on Earth, for example in South Africa, Siberia and the Canadian Shield. The diamonds of Rio Soriso, however, are especially rich in inclusions. They were formed in the lowermost layers of the lower mantle and thus allow a rare insight into the deep Earth. At the GFZ, Wirth investigated the inclusions using a range of electron-microscopy methods.
Wirth: “Unlike other diamond deposits on Earth, the inclusions of the Rio Soriso diamonds contain large amounts of nitrogen. For the first time, we were able to detect iron nitrides and carbonitrides, chemical compounds of iron and carbon with nitrogen, within diamond inclusions.” This provides unambiguous proof of the existence of nitrogen in the lower mantle and core. The scientists believe that iron nitrides and carbonitrides are typical compounds of the core-mantle boundary. Wirth: “The compounds were probably transported by liquid metal from the core to the lowermost layers of the lower mantle.” The search for the “missing nitrogen” of the Earth system appears to have come to an end.
Reference:
Kaminsky, F., Wirth, R., 2017. Nitrides and carbonitrides from the lowermost mantle and their importance in the search for Earth’s “lost” nitrogen. American Mineralogist 102, 1667-1676. DOI: 10.2138/am-2017-6101
Giant lateral collapses are huge landslides occurring at the flanks of a volcano. They are rather common events during the evolution of a large volcanic edifice, often with dramatic consequences such as tsunamis and volcanic explosions. These catastrophic events interact with the magmatic activity of the volcano, as new research in Nature Communications suggests. Giant lateral collapses may change the style of volcanism and the chemistry of magma and, as the new study by GFZ scientists reveals, may also divert the deep pathways of magmas. New volcanic centres may form elsewhere, which the scientists explain by studying the stress-field changes associated with the collapse.
In the study, entitled “The effect of giant lateral collapses on magma pathways and the location of volcanism” and authored by F. Maccaferri, N. Richter and T. Walter of GFZ section 2.1 (Physics of Earthquakes and Volcanoes), the propagation path of magmatic intrusions underneath a volcanic edifice was simulated by means of a mathematical model. The computer simulations revealed that the mechanical effect on the Earth’s crust of a large lateral collapse can deflect deep magmatic intrusions, favouring the formation of a new eruptive centre within the collapse embayment. This result was quantitatively validated against observations at Fogo Volcano, Cabo Verde.
A broader view of other regions reveals that this shift of volcanism associated with giant lateral collapses is rather common, as observed at several of the Canary Islands, Hawaii, Stromboli and elsewhere. The study may have implications in particular for our understanding of the long-term evolution of intraplate volcanic ocean islands, and it sheds light on the interacting processes at work during the growth and collapse of volcanic edifices.
Reference:
Francesco Maccaferri, Nicole Richter, Thomas R. Walter. The effect of giant lateral collapses on magma pathways and the location of volcanism. Nature Communications, 2017; 8 (1) DOI: 10.1038/s41467-017-01256-2
Zircon crystals in igneous rocks must be carefully examined and not relied upon solely to predict future volcanic eruptions and other tectonic events, QUT researchers have shown.
Key points:
- Zircon is a robust mineral and a timekeeper of Earth history.
- Distinguishing the origins of zircon crystals, and their individual chemistry and properties, is not straightforward.
- Misinterpreting data from zircon crystals could skew timescales for geological events such as volcanic eruptions by millions of years.
- This has implications for understanding volcanic hazards and the future risks they pose.
The researchers’ findings have been published in Earth-Science Reviews. The paper, Use and abuse of zircon-based thermometers: A critical review and a recommended approach to identify antecrystic zircons, also proposes an efficient and integrated approach to assist in identifying zircons and evaluating zircon components sourced from older rocks.
Associate Professor Scott Bryan, from QUT’s Science and Engineering Faculty, said the researchers had “gone back to basic science” and reassessed large data sets of analyses of igneous rocks in Queensland and from around the world, to show that wrong assumptions can be made about zircon crystals.
Igneous rocks are formed by the cooling of magma (molten rock) which makes its way to Earth’s surface, often leading to volcanic eruptions.
“One of the assumptions being made is that the composition of the zircons and the rocks in which they have formed give an accurate record of the magmas and conditions at which the zircons and magmas formed,” Associate Professor Bryan said.
“From this, we then estimate the age of the event that caused them to form.
“But some zircon crystals may not be related to their host rocks at all. They may have come from the source of the magma deep in the Earth’s crust or they may have been picked up by the magma on its way to the surface.
“If you don’t distinguish between the types of crystals then you get a big variation in the age of the event which formed the rocks, potentially millions of years, as well as developing incorrect views on the conditions needed to make magmas.
“It is critical to get the timescales of magmatism correct, so we can understand how long it might take for reservoirs of magma to build up and erupt.”
This is particularly relevant to ‘supervolcanoes’ which do not always have pools of magma sitting beneath them, Associate Professor Bryan said.
There are more than 20 supervolcanoes on Earth, including Yellowstone in the US and Taupo in New Zealand.
“Determining accurately what zircon is telling us is fundamental to understanding Earth’s history, defining major events such as mass extinctions, and how we understand global plate tectonics,” he said.
“We need to understand the past, and read the geological clocks correctly, to accurately predict the future and to mitigate future hazards.”
Reference:
C. Siégel, S.E. Bryan, C.M. Allen, D.A. Gust. Use and abuse of zircon-based thermometers: A critical review and a recommended approach to identify antecrystic zircons. Earth-Science Reviews, 2018; 176: 87 DOI: 10.1016/j.earscirev.2017.08.011
A Yale-led research team has discovered a cache of embryo-like microfossils in northern Mongolia that may shed light on questions about the long-ago shift from microbes to animals on Earth.
Called the Khesen Formation, the site is one of the most significant for early Earth fossils since the discovery of the Doushantuo Formation in southern China nearly 20 years ago. The Doushantuo Formation is 600 million years old; the Khesen Formation is younger, at about 540 million years old.
“Understanding how and when animals evolved has proved very difficult for paleontologists. The discovery of an exceptionally well-preserved fossil assemblage with animal embryo-like fossils gives us a new window onto a critical transition in life’s history,” said Yale graduate student Ross Anderson, first author of a study in the journal Geology.
The new cache of fossils represents eight genera and about 17 species, comprising tens to hundreds of individuals. Many of them are spiny microfossils called acritarchs, which are roughly 100 microns in size — about one-third the thickness of a fingernail.
The Khesen Formation is located to the west of Lake Khuvsgul in northern Mongolia. “This site was of particular interest to us because it had the right type of rocks — phosphorites — that had preserved similar organisms in China,” Anderson said.
The discovery may help scientists confirm a much earlier date for the existence of Earth ecosystems with animals, rather than just microbes. For two decades, researchers have debated the findings at the Doushantuo Formation, with no resolution. If confirmed as animals, these microfossils would represent the oldest animals to be preserved in the geological record.
The other authors of the study are Derek Briggs, Yale’s G. Evelyn Hutchinson Professor of Geology and Geophysics and curator at the Yale Peabody Museum of Natural History; Sean McMahon, a postdoctoral fellow in the Briggs lab; Francis Macdonald of Harvard; and David Jones of Amherst College.
The researchers said the Khesen Formation should provide scientists with additional information for years to come.
“This study is only the tip of the iceberg, as most of the fossils derive from only two samples,” Anderson said. Since the original discovery, the Yale team has worked with Harvard and the Mongolian University of Science and Technology to sample several additional sites within the formation.
Reference:
Ross P. Anderson, Francis A. Macdonald, David S. Jones, Sean McMahon, Derek E.G. Briggs. Doushantuo-type microfossils from latest Ediacaran phosphorites of northern Mongolia. Geology, 2017; DOI: 10.1130/G39576.1
Note: The above post is reprinted from materials provided by Yale University. Original written by Jim Shelton.
One of the worst nightmares for many Pacific Northwest residents is a huge earthquake along the offshore Cascadia Subduction Zone, which would unleash damaging and likely deadly shaking in coastal Washington, Oregon, British Columbia and northern California.
The last time this happened was in 1700, before seismic instruments were around to record the event. So what will happen when it ruptures next is largely unknown.
A University of Washington research project, to be presented Oct. 24 at the Geological Society of America’s annual meeting in Seattle, simulates 50 different ways that a magnitude-9.0 earthquake on the Cascadia subduction zone could unfold.
“There had been just a handful of detailed simulations of a magnitude-9 Cascadia earthquake, and it was hard to know if they were showing the full range,” said Erin Wirth, who led the project as a UW postdoctoral researcher in Earth and space sciences. “With just a few simulations you didn’t know if you were seeing a best-case, a worst-case or an average scenario. This project has really allowed us to be more confident in saying that we’re seeing the full range of possibilities.”
Off the Oregon and Washington coast, the Juan de Fuca oceanic plate is slowly moving under the North American plate. Geological clues show that it last jolted and unleashed a major earthquake in 1700, and that it does so roughly once every 500 years. It could happen any day.
Wirth’s project ran simulations using different combinations for three key factors: the epicenter of the earthquake; how far inland the earthquake will rupture; and which sections of the fault will generate the strongest shaking.
Results show that the intensity of shaking can be less for Seattle if the epicenter is fairly close to beneath the city. From that starting point, seismic waves will radiate away from Seattle, sending the biggest shakes in the direction of travel of the rupture.
“Surprisingly, Seattle experiences less severe shaking if the epicenter is located just beneath the tip of northwest Washington,” Wirth said. “The reason is that the rupture is propagating away from Seattle, so it’s most affecting sites offshore. But when the epicenter is located pretty far offshore, the rupture travels inland and all of that strong ground shaking piles up on its way to Seattle, to make the shaking in Seattle much stronger.”
The research effort began by establishing which factors most influence the pattern of ground shaking during a Cascadia earthquake. One, of course, is the epicenter, or more specifically the “hypocenter,” which locates the earthquake’s starting point in three-dimensional space.
Another factor they found to be important is how far inland the fault slips. A magnitude-9.0 earthquake would likely give way along the whole north-south extent of the subduction zone, but it’s not well known how far east the shake-producing area would extend, approaching the area beneath major cities such as Seattle and Portland.
The third factor is a new idea relating to a subduction zone’s stickiness. Earthquake researchers have become aware of the importance of “sticky points,” or areas between the plates that can catch and generate more shaking. This is still an area of current research, but comparisons of different seismic stations during the 2010 Chile earthquake and the 2011 Tohoku earthquake show that some parts of the fault generated stronger shaking than others.
Wirth simulated a magnitude-9.0 earthquake, about the middle of the range of estimates for the magnitude of the 1700 earthquake. Her 50 simulations used variables spanning realistic values for the depth of the slip, and had randomly placed hypocenters and sticky points. The high-resolution simulations were run on supercomputers at the Pacific Northwest National Laboratory and the University of Texas, Austin.
Overall, the results confirm that coastal areas would be hardest hit, and locations in sediment-filled basins like downtown Seattle would shake more than hard, rocky mountaintops. But within that general framework, the picture can vary a lot; depending on the scenario, the intensity of shaking can vary by a factor of 10. But none of the pictures is rosy.
“We are finding large amplification of ground shaking by the Seattle basin,” said collaborator Art Frankel, a U.S. Geological Survey seismologist and affiliate faculty member at the UW. “The average duration of strong shaking in Seattle is about 100 seconds, about four times as long as from the 2001 Nisqually earthquake.”
The research was done as part of the M9 Project, a National Science Foundation-funded effort to figure out what a magnitude-9 earthquake might look like in the Pacific Northwest and how people can prepare. Two publications are being reviewed by the USGS, and engineers are already using the simulation results to assess how tall buildings in Seattle might respond to the predicted pattern of shaking.
As a new employee of the USGS, Wirth will now use geological clues to narrow down the possible earthquake scenarios.
“We’ve identified what parameters we think are important,” Wirth said. “I think there’s a future in using geologic evidence to constrain these parameters, and maybe improve our estimate of seismic hazard in the Pacific Northwest.”
An earthquake is the shaking of the surface of the Earth, resulting from the sudden release of energy in the Earth’s lithosphere that creates seismic waves. Earthquakes can range in size from those that are so weak that they cannot be felt to those violent enough to toss people around and destroy whole cities. The seismicity or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time.
At the Earth’s surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides, and occasionally volcanic activity.
In its most general sense, the word earthquake is used to describe any seismic event — whether natural or caused by humans — that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake’s point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter.
What causes earthquakes and where do they happen?
The earth has four major layers: the inner core, outer core, mantle and crust. The crust and the top of the mantle make up a thin skin on the surface of our planet. But this skin is not all in one piece – it is made up of many pieces like a puzzle covering the surface of the earth. Not only that, but these puzzle pieces keep slowly moving around, sliding past one another and bumping into each other. We call these puzzle pieces tectonic plates, and the edges of the plates are called the plate boundaries. The plate boundaries are made up of many faults, and most of the earthquakes around the world occur on these faults. Since the edges of the plates are rough, they get stuck while the rest of the plate keeps moving. Finally, when the plate has moved far enough, the edges unstick on one of the faults and there is an earthquake.
Why does the earth shake when there is an earthquake?
While the edges of faults are stuck together, and the rest of the block is moving, the energy that would normally cause the blocks to slide past one another is being stored up. When the force of the moving blocks finally overcomes the friction of the jagged edges of the fault and it unsticks, all that stored-up energy is released. The energy radiates outward from the fault in all directions in the form of seismic waves, like ripples on a pond. The seismic waves shake the earth as they move through it, and when the waves reach the earth’s surface, they shake the ground and anything on it, like our houses and us!
How are earthquakes recorded?
Earthquakes are recorded by instruments called seismographs. The recording they make is called a seismogram. The seismograph has a base that sits firmly in the ground, and a heavy weight that hangs free. When an earthquake causes the ground to shake, the base of the seismograph shakes too, but the hanging weight does not. Instead the spring or string that it is hanging from absorbs all the movement. The difference in position between the shaking part of the seismograph and the motionless part is what is recorded.
How do scientists measure the size of earthquakes?
The size of an earthquake depends on the size of the fault and the amount of slip on the fault, but that’s not something scientists can simply measure with a measuring tape since faults are many kilometers deep beneath the earth’s surface. So how do they measure an earthquake? They use the seismogram recordings made on the seismographs at the surface of the earth to determine how large the earthquake was. A short wiggly line that doesn’t wiggle very much means a small earthquake, and a long wiggly line that wiggles a lot means a large earthquake. The length of the wiggle depends on the size of the fault, and the size of the wiggle depends on the amount of slip.
The size of the earthquake is called its magnitude. There is one magnitude for each earthquake. Scientists also talk about the intensity of shaking from an earthquake, and this varies depending on where you are during the earthquake.
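Turning a wiggle into a number can be sketched concretely. One classic textbook approximation for the local (Richter) magnitude combines the logarithm of the largest wiggle with a correction that grows with distance; the exact empirical constants vary by region, so treat this as an illustration rather than a standard:

```python
# Sketch: a textbook-style Richter local magnitude. The log of the
# biggest wiggle, plus a correction that grows with distance because
# shaking fades as waves travel. The empirical constants are one
# common classroom approximation, not a universal standard.
import math

def local_magnitude(peak_amplitude_mm, distance_km):
    """ML from peak trace amplitude (mm, Wood-Anderson instrument)
    and epicentral distance (km)."""
    return (math.log10(peak_amplitude_mm)
            + 2.76 * math.log10(distance_km) - 2.48)

# The same 10 mm wiggle implies a bigger quake when recorded farther away:
print(f"{local_magnitude(10.0, 50.0):.1f}")   # ~3.2 at 50 km
print(f"{local_magnitude(10.0, 400.0):.1f}")  # ~5.7 at 400 km
```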
How can scientists tell where the earthquake happened?
Seismograms come in handy for locating earthquakes too, and being able to see the P wave and the S wave is important. P and S waves each shake the ground in different ways as they travel through it. P waves are also faster than S waves, and this fact is what allows us to tell where an earthquake was. To understand how this works, let’s compare P and S waves to lightning and thunder. Light travels faster than sound, so during a thunderstorm you will first see the lightning and then you will hear the thunder. If you are close to the lightning, the thunder will boom right after the lightning, but if you are far away from the lightning, you can count several seconds before you hear the thunder. The further you are from the storm, the longer it will take between the lightning and the thunder.
P waves are like the lightning, and S waves are like the thunder. The P waves travel faster and shake the ground where you are first. Then the S waves follow and shake the ground also. If you are close to the earthquake, the P and S wave will come one right after the other, but if you are far away, there will be more time between the two. By looking at the amount of time between the P and S wave on a seismogram recorded on a seismograph, scientists can tell how far away the earthquake was from that location. However, they can’t tell in what direction from the seismograph the earthquake was, only how far away it was. If they draw a circle on a map around the station where the radius of the circle is the determined distance to the earthquake, they know the earthquake lies somewhere on the circle. But where?
Scientists then use a method called triangulation to determine exactly where the earthquake was. It is called triangulation because a triangle has three sides, and it takes three seismographs to locate an earthquake. If you draw a circle on a map around three different seismographs, where the radius of each is the distance from that station to the earthquake, the intersection of those three circles is the epicenter!
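Both steps, distance from the S-minus-P gap and triangulation from three stations, fit in a short sketch. The wave speeds below are typical crustal values, and the station positions and arrival-time gaps are invented; rather than drawing circles on a map, the code solves for their intersection numerically:

```python
# Sketch: locate an epicenter from S-P times at three stations.
# Velocities are typical crustal values; station positions and S-P
# times are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

VP, VS = 6.0, 3.5   # P and S wave speeds, km/s

def distance_km(sp_seconds):
    """Epicentral distance from the S-minus-P arrival-time gap."""
    return sp_seconds * (VP * VS) / (VP - VS)

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0]])  # km
sp_times = np.array([9.3, 7.6, 8.0])                          # seconds
dists = distance_km(sp_times)

# The epicenter is where the three distance circles intersect;
# solve for that point by least squares.
def residuals(xy):
    return np.hypot(*(stations - xy).T) - dists

epicenter = least_squares(residuals, x0=np.array([50.0, 40.0])).x
print(f"epicenter ~ ({epicenter[0]:.0f}, {epicenter[1]:.0f}) km")  # ~(60, 50)
```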
Can scientists predict earthquakes?
No, and it is unlikely they will ever be able to predict them. Scientists have tried many different ways of predicting earthquakes, but none have been successful. On any particular fault, scientists know there will be another earthquake sometime in the future, but they have no way of telling when it will happen.
Effects of earthquakes
Shaking and ground rupture
Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration.
Specific local geological, geomorphological, and geostructural features can induce high levels of shaking at the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of seismic motion from hard deep soils to soft superficial soils, and to the focusing of seismic energy caused by the geometrical setting of the deposits.
Ground rupture is a visible breaking and displacement of the Earth’s surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges and nuclear power stations and requires careful mapping of existing faults to identify any which are likely to break the ground surface within the life of the structure.
Landslides and avalanches
Earthquakes, along with severe storms, volcanic activity, coastal wave attack, and wildfires, can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.
Fires
Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.
Soil liquefaction
Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.
Tsunami
Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water – including when an earthquake occurs at sea. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600-800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.
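Those speeds follow directly from the shallow-water wave relation: because tsunami wavelengths exceed 100 kilometers, even the deepest ocean is "shallow" to them, and the wave travels at the square root of gravity times depth. A quick check with typical depths (assumed values, not measurements):

```python
# Sketch: tsunami speed from the shallow-water relation c = sqrt(g*d).
# Depth values are typical, not measured.
import math

def tsunami_speed_kmh(depth_m):
    """Shallow-water wave speed (km/h) for a given ocean depth (m)."""
    return math.sqrt(9.81 * depth_m) * 3.6

for depth in (4500, 3000, 50):   # open ocean ... approaching shore
    print(f"{depth:>5} m deep -> {tsunami_speed_kmh(depth):5.0f} km/h")
```

The same relation explains why a tsunami slows, and therefore steepens, as it nears the coast.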
Ordinarily, subduction earthquakes under magnitude 7.5 on the Richter magnitude scale do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more, such as the earthquake that generated the 2011 Japan tsunami.
Floods
A flood is an overflow of any amount of water that reaches land. Floods usually occur when the volume of water within a body of water, such as a river or lake, exceeds the total capacity of the formation, and as a result some of the water flows or sits outside of the normal perimeter of the body. However, floods may be secondary effects of earthquakes if dams are damaged. Earthquakes may also cause landslips that dam rivers, which then collapse and cause floods.
The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flood if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.
A group of researchers from the UK and the US have used machine learning techniques to successfully predict earthquakes. Although their work was performed in a laboratory setting, the experiment closely mimics real-life conditions, and the results could be used to predict the timing of a real earthquake.
The team, from the University of Cambridge, Los Alamos National Laboratory and Boston University, identified a hidden signal leading up to earthquakes, and used this ‘fingerprint’ to train a machine learning algorithm to predict future earthquakes. Their results, which could also be applied to avalanches, landslides and more, are reported in the journal Geophysical Research Letters.
For geoscientists, predicting the timing and magnitude of an earthquake is a fundamental goal. Generally speaking, pinpointing where an earthquake will occur is fairly straightforward: if an earthquake has struck a particular place before, the chances are it will strike there again. The questions that have challenged scientists for decades are how to pinpoint when an earthquake will occur, and how severe it will be. Over the past 15 years, advances in instrument precision have been made, but a reliable earthquake prediction technique has not yet been developed.
As part of a project searching for ways to use machine learning techniques to make gallium nitride (GaN) LEDs more efficient, the study’s first author, Bertrand Rouet-Leduc, who was then a PhD student at Cambridge, moved to Los Alamos National Laboratory in New Mexico to start a collaboration on machine learning in materials science between Cambridge University and Los Alamos. From there the team started helping the Los Alamos Geophysics group on machine learning questions.
The team at Los Alamos, led by Paul Johnson, studies the interactions among earthquakes, precursor quakes (often very small earth movements) and faults, with the hope of developing a method to predict earthquakes. Using a lab-based system that mimics real earthquakes, the researchers used machine learning techniques to analyse the acoustic signals coming from the ‘fault’ as it moved and search for patterns.
The laboratory apparatus uses steel blocks to closely mimic the physical forces at work in a real earthquake, and also records the seismic signals and sounds that are emitted. Machine learning is then used to find the relationship between the acoustic signal coming from the fault and how close it is to failing.
The machine learning algorithm was able to identify a particular pattern in the sound, previously thought to be nothing more than noise, which occurs long before an earthquake. The characteristics of this sound pattern can be used to give a precise estimate (within a few percent) of the stress on the fault (that is, how much force it is under) and to estimate the time remaining before failure, which becomes more and more precise as failure approaches. The team now thinks that this sound pattern is a direct measure of the elastic energy that is in the system at a given time.
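The team's exact pipeline and laboratory data are not reproduced here, but the general recipe, regressing time-to-failure on simple statistics of short acoustic windows, can be sketched with synthetic data. Everything below (the fake signal, the feature choices, and the model settings) is an assumption for illustration only:

```python
# Sketch of the general approach: predict time-to-failure from
# statistics of short acoustic windows with a random forest. The
# synthetic signal simply grows noisier as failure nears; the real
# study used shear-apparatus data and a richer feature set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, window_len = 2000, 100
time_to_failure = np.linspace(10.0, 0.0, n_windows)   # seconds, invented

# Acoustic amplitude rises as failure approaches (the hidden signal).
scale = (1.0 + 5.0 / (time_to_failure + 0.1))[:, None]
signal = rng.normal(0.0, scale, size=(n_windows, window_len))

# Per-window features: variance, 4th moment, absolute peak.
X = np.column_stack([signal.var(axis=1),
                     (signal ** 4).mean(axis=1),
                     np.abs(signal).max(axis=1)])

X_tr, X_te, y_tr, y_te = train_test_split(X, time_to_failure,
                                          random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 2))
```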
“This is the first time that machine learning has been used to analyse acoustic data to predict when an earthquake will occur, long before it does, so that plenty of warning time can be given – it’s incredible what machine learning can do,” said co-author Professor Sir Colin Humphreys of Cambridge’s Department of Materials Science & Metallurgy, whose main area of research is energy-efficient and cost-effective LEDs. Humphreys was Rouet-Leduc’s supervisor when he was a PhD student at Cambridge.
“Machine learning enables the analysis of datasets too large to handle manually and looks at data in an unbiased way that enables discoveries to be made,” said Rouet-Leduc.
Although the researchers caution that there are multiple differences between a lab-based experiment and a real earthquake, they hope to progressively scale up their approach by applying it to real systems which most resemble their lab system. One such site is in California along the San Andreas Fault, where characteristic small repeating earthquakes are similar to those in the lab-based earthquake simulator. Progress is also being made on the Cascadia fault in the Pacific Northwest of the United States and British Columbia, Canada, where repeating slow earthquakes that occur over weeks or months are also very similar to laboratory earthquakes.
“We’re at a point where huge advances in instrumentation, machine learning, faster computers and our ability to handle massive data sets could bring about huge advances in earthquake science,” said Rouet-Leduc.
Reference:
Bertrand Rouet-Leduc et al, Machine Learning Predicts Laboratory Earthquakes, Geophysical Research Letters (2017). DOI: 10.1002/2017GL074677
Gold enrichment at the crustal or mantle source has been proposed as a key ingredient in the production of giant gold deposits and districts. However, the lithospheric-scale processes controlling gold endowment in a given metallogenic province remain unclear.
Here we provide the first direct evidence of native gold in the mantle beneath the Deseado Massif in Patagonia that links an enriched mantle source to the occurrence of a large auriferous province in the overlying crust. A precursor stage of mantle refertilisation by plume-derived melts generated a gold-rich mantle source during the Early Jurassic.
The interplay of this enriched mantle domain and subduction-related fluids released during the Middle-Late Jurassic resulted in optimal conditions to produce the ore-forming magmas that generated the gold deposits. Our study highlights that refertilisation of the subcontinental lithospheric mantle is a key factor in forming large metallogenic provinces in the Earth’s crust, thus providing an alternative view to current crust-related enrichment models.
The traditional notion of Au endowment in a given metallogenic province is that Au accumulates by highly efficient magmatic-hydrothermal enrichment processes operating in a chemically ‘average’ crust. However, more recent views point to anomalously enriched source regions and/or melts that are critical for the formation of Au provinces at a lithospheric scale. Within this perspective, Au-rich melts/fluids might originate from a mid or lower crust reservoir and later migrate through favourable structural zones to shallower crustal levels where the Au deposits form. Alternatively, the subcontinental lithospheric mantle (SCLM) may also play a role as a source of metal-rich magmas.
This model involves deep-seated Au-rich magmas that may infiltrate the edges of buoyant and rigid domains in the SCLM producing transient Au storage zones. Upon melting, the ascending magma scavenges the Au as it migrates towards the uppermost overlying crust. Discontinuities between buoyant and rigid domains in the SCLM provide the channelways for the uprising of Au-rich fluids or melts from the convecting underlying mantle, and when connected to the overlying crust by trans-lithospheric faults, a large Au deposit or well-endowed auriferous province can be formed. Thus, the generation of Au deposits in the crust may result from the conjunction in time and space of three essential factors: an upper mantle or lower crustal source region particularly enriched in Au, a transient remobilisation event and favourable lithospheric-scale plumbing structures.
The giant Ladolam Au deposit in Papua New Guinea provides a good single-deposit example of this mechanism, since deep trans-lithospheric faults connect the crustal Au deposit directly with the mantle source, and similar Os isotopic compositions are exhibited by the Au ores and the metal-enriched peridotite of the underlying mantle. Despite this evidence, the genetic relation between a pre-enriched mantle source and the occurrence of gold provinces in the upper crust remains controversial, since limited evidence is available at a broader regional scale.
Reference:
Santiago Tassara, José M. González-Jiménez, Martin Reich, Manuel E. Schilling, Diego Morata, Graham Begg, Edward Saunders, William L. Griffin, Suzanne Y. O’Reilly, Michel Grégoire, Fernando Barra, Alexandre Corgne. Plume-subduction interaction forms large auriferous provinces. Nature Communications, 2017; 8. DOI: 10.1038/s41467-017-00821-z
Plate tectonics is a scientific theory describing the large-scale motion of seven large plates and the movements of a larger number of smaller plates of the Earth’s lithosphere, since tectonic processes began on Earth between 3 and 3.5 billion years ago. The model builds on the concept of continental drift, an idea developed during the first decades of the 20th century. The geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s and early 1960s.
The lithosphere, which is the rigid outermost shell of a planet (the crust and upper mantle), is broken into tectonic plates. The Earth’s lithosphere is composed of seven or eight major plates (depending on how they are defined) and many minor plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along these plate boundaries (or faults). The relative movement of the plates typically ranges from zero to 100 mm annually.
How do these massive slabs of solid rock float despite their tremendous weight?
The answer lies in the composition of the rocks. Continental crust is composed of granitic rocks which are made up of relatively lightweight minerals such as quartz and feldspar. By contrast, oceanic crust is composed of basaltic rocks, which are much denser and heavier. The variations in plate thickness are nature’s way of partly compensating for the imbalance in the weight and density of the two types of crust. Because continental rocks are much lighter, the crust under the continents is much thicker (as much as 100 km) whereas the crust under the oceans is generally only about 5 km thick. Like icebergs, only the tips of which are visible above water, continents have deep “roots” to support their elevations.
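The iceberg analogy can be made quantitative with Airy isostasy: columns of crust "float" on denser mantle, so topography of a given height must be balanced by a proportionally deeper root. A quick sketch using standard textbook densities (assumed values):

```python
# Sketch: Airy isostasy behind the "iceberg root" analogy. A column
# of height h above sea level is balanced by a crustal root r with
# r = h * rho_crust / (rho_mantle - rho_crust). Textbook densities.
RHO_CRUST, RHO_MANTLE = 2800.0, 3300.0   # kg/m^3

def root_km(elevation_km):
    """Crustal root needed to support a given elevation, in km."""
    return elevation_km * RHO_CRUST / (RHO_MANTLE - RHO_CRUST)

# A 5 km high range needs a root of roughly 28 km, which is why crust
# under high continents approaches 100 km in total thickness:
print(f"{root_km(5.0):.0f} km root beneath a 5 km high range")
```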
How are oceanic plate boundaries mapped?
Most of the boundaries between individual plates cannot be seen, because they are hidden beneath the oceans. Yet oceanic plate boundaries can be mapped accurately from outer space by measurements from GEOSAT satellites. Earthquake and volcanic activity is concentrated near these boundaries. Tectonic plates probably developed very early in the Earth’s 4.6-billion-year history, and they have been drifting about on the surface ever since, like slow-moving bumper cars repeatedly clustering together and then separating.
Types of plate boundaries
Transform boundaries
Transform boundaries (Conservative) occur where two lithospheric plates slide, or perhaps more accurately, grind past each other along transform faults, where plates are neither created nor destroyed. The relative motion of the two plates is either sinistral (left side toward the observer) or dextral (right side toward the observer). Transform faults occur across a spreading center. Strong earthquakes can occur along a fault. The San Andreas Fault in California is an example of a transform boundary exhibiting dextral motion.
Divergent boundaries
Divergent boundaries (Constructive) occur where two plates slide apart from each other. At zones of ocean-to-ocean rifting, divergent boundaries form by seafloor spreading, allowing for the formation of a new ocean basin. As the ocean plate splits, a ridge forms at the spreading center, the ocean basin expands, and the plate area increases, causing many small volcanoes and/or shallow earthquakes. At zones of continent-to-continent rifting, divergent boundaries may cause a new ocean basin to form as the continent splits and spreads, the central rift collapses, and ocean fills the basin. Active zones of mid-ocean ridges (e.g., the Mid-Atlantic Ridge and East Pacific Rise) and of continent-to-continent rifting (such as Africa’s East African Rift Valley and the Red Sea) are examples of divergent boundaries.
Convergent boundaries
A convergent boundary, also known as a destructive plate boundary, is a region of active deformation where two or more tectonic plates or fragments of the lithosphere move toward one another and collide. This is in contrast to a constructive plate boundary (also known as a mid-ocean ridge or spreading center). As a result of pressure, friction, and plate material melting in the mantle, earthquakes and volcanoes are common near destructive boundaries, where subduction zones or areas of continental collision (depending on the nature of the plates involved) occur. The subducting plate in a subduction zone is normally oceanic crust, and moves beneath the other plate, which can be made of either oceanic or continental crust. During collisions between two continental plates, large mountain ranges, such as the Himalayas, are formed. In other regions, a divergent boundary or transform faults may be present.
A remarkable new fossilized skeleton of a tyrannosaur discovered in the Bureau of Land Management’s Grand Staircase-Escalante National Monument (GSENM) in southern Utah was airlifted by helicopter Sunday, Oct 15, from a remote field site, and delivered to the Natural History Museum of Utah where it will be uncovered, prepared, and studied. The fossil is approximately 76 million years old and is most likely an individual of the species Teratophoneus curriei, one of Utah’s ferocious tyrannosaurs that walked western North America between 66 and 90 million years ago during the Late Cretaceous Period.
“With at least 75 percent of its bones preserved, this is the most complete skeleton of a tyrannosaur ever discovered in the southwestern US,” said Dr. Randall Irmis, curator of paleontology at the Museum and associate professor in the Department of Geology and Geophysics at the University of Utah. “We are eager to get a closer look at this fossil to learn more about the southern tyrannosaur’s anatomy, biology, and evolution.”
GSENM Paleontologist Dr. Alan Titus discovered the fossil in July 2015 in the Kaiparowits Formation, part of the central plateau region of the monument. Particularly notable is that the fossil includes a nearly complete skull. Scientists hypothesize that this tyrannosaur was buried either in a river channel or by a flooding event on the floodplain, keeping the skeleton intact.
“The monument is a complex mix of topography — from high desert to badlands — and most of the surface area is exposed rock, making it rich grounds for new discoveries,” said Titus. “And we’re not just finding dinosaurs, but also crocodiles, turtles, mammals, amphibians, fish, invertebrates, and plant fossils — remains of a unique ecosystem not found anywhere else in the world.”
Although many tyrannosaur fossils have been found over the last one hundred years in the northern Great Plains region of the northern US and Canada, until relatively recently, little was known about them in the southern US. This discovery, and the resulting research, will continue to cement the monument as a key place for understanding the group’s southern history, which appears to have followed a different path than that of their northern counterparts.
This southern tyrannosaur fossil is thought to be a sub-adult individual, 12-15 years old, 17-20 feet long, and with a relatively short head, unlike the typically longer-snouted look of northern tyrannosaurs.
Collecting such fossils from the monument can be unusually challenging. “Many areas are so remote that often we need to have supplies dropped in and the crew hikes in,” said Irmis. For this particular field site, Museum and monument crews backpacked in, carrying all of the supplies they needed, such as plaster, water and tools, to excavate the fossil and work at the site for several weeks. The crews conducted a three-week excavation in early May 2017, and continued work during the past two weeks until the specimen was ready to be airlifted out.
Irmis said that, with the help of dedicated volunteers, it took approximately 2,000-3,000 person-hours to excavate the site, and he estimates at least 10,000 hours of work remain to prepare the specimen for research. “Without our volunteer team members, we wouldn’t be able to accomplish this work. We absolutely rely on them throughout the entire process,” said Irmis.
Irmis says that this new fossil find is extremely significant. Whether it is a new species or an individual of Teratophoneus, the new research will provide important context as to how this animal lived. “We’ll look at the size of this new fossil, its growth pattern, and its biology, reconstruct muscles to see how the animal moved, how fast it could run, and how it fed with its jaws. The possibilities are endless and exciting,” said Irmis.
During the past 20 years, crews from the Natural History Museum of Utah and GSENM have unearthed more than a dozen new species of dinosaurs in GSENM, with several additional species awaiting formal scientific description. Some of the finds include another tyrannosaur named Lythronax, and a variety of other plant-eating dinosaurs — among them duck-billed hadrosaurs, armored ankylosaurs, dome-headed pachycephalosaurs, and a number of horned dinosaurs, such as Utahceratops, Kosmoceratops, Nasutoceratops, and Machairoceratops. Other fossil discoveries include fossil plants, insect traces, snails, clams, fishes, amphibians, lizards, turtles, crocodiles, and mammals. Together, this diverse bounty of fossils is offering one of the most comprehensive glimpses into a Mesozoic ecosystem. Remarkably, virtually all of the dinosaur species found in GSENM appear to be unique to this area, and are not found anywhere else on Earth.
An international expedition aims to better understand seismic activity through samples collected from one of the most geologically active areas in Europe.
More than 30 scientists, including Dr Richard Collier from the University of Leeds, will be participating in an expedition which will analyse data gathered from a tear in the ocean floor – the Corinth Rift.
The rift is caused by one of the Earth’s tectonic plates being ripped apart causing such geological hazards as earthquakes.
The overall aim of the project is to gain insight into the rifting process by collecting sediment cores and compiling data from the samples on their geological history, composition, age and structure.
The research vessel, DV Fugro Synergy, will launch in late October to collect the cores at three different locations with drilling going to a depth of 750 metres below the seabed.
Dr Collier, from the School of Earth and Environment at Leeds said: “The Corinth Rift provides a unique laboratory in one of the most seismically active areas in Europe. It is a relatively young tectonic feature having only formed in the last five million years. It is an ideal location to learn more about early rift development and how tectonics affect the landscape.
“The cores will also allow us to determine the relative impacts of sea-level change and climate change through time on the transfer of sediment from the surrounding landscape to the basin floor.
“The opportunity to quantify these competing controls on rift sedimentation for the first time makes this project particularly exciting. By increasing our understanding of this particular rift we may be better able to predict seismic hazards in other areas and inform the hunt for sediment bodies in other parts of the world that might contain hydrocarbons.”
Researchers have been working in the Gulf of Corinth region for many decades – examining sediments and active fault traces exposed on land and using marine geophysics to image the basin and its structure below the seafloor. But there is very little information about the age of the sediments and of the environment of the rift in the last one to two million years.
The core samples collected and analysed by the team will help answer such questions as: What are the implications for earthquake activity in a developing rift? How does the rift actually evolve and grow and on what timescale? How did the activity on faults change with time? How does the landscape respond to tectonic and climatic changes? And what was the climate and the environment of the rift basin in the last one to two million years?
Co-chief scientist of the expedition, Professor Lisa McNeill from the University of Southampton, said: “By drilling, we hope to find this last piece of the jigsaw puzzle. It will help us to unravel the sequence of events as the rift has evolved and, importantly, how fast the faults, which regularly generate damaging earthquakes, are slipping.”
The 33 scientists involved in the expedition are from Australia, Brazil, China, France, Germany, Greece, India, Norway, Spain, the United States, and the United Kingdom and cover a range of different geoscience disciplines.
Nine of them will sail onboard the drill ship Fugro Synergy from October to December of this year. After the offshore phase in the Gulf of Corinth the entire team, including Dr Collier, will meet for the first time at the IODP Bremen Core Repository (BCR), located at MARUM – Center for Marine Environmental Sciences at the University of Bremen, Germany. There they will spend a month splitting, analysing and sampling the cores and reviewing the data collected.
A new database showcasing hundreds of examples of human-triggered earthquakes should shake up policy-makers, regulators and industry executives looking to mitigate these unacceptable hazards caused by our own actions, according to a Western Earth Sciences professor.
“More and more, we are recognizing how many earthquakes are actually human-induced,” said Gail Atkinson, Industrial Research Chair in Hazards from Induced Seismicity at Western.
“Researchers at the U.S. Geological Survey are now raising the possibility that many of the large, well-known earthquakes in California that happened over the 1930s-50s – like the Long Beach Earthquake (in 1933) or the Kern County Earthquake (in 1952), which had a magnitude of 7.5 – may have been induced by oil production in southern California at the time,” she explained.
Atkinson’s research group is studying this phenomenon of human-triggered earthquakes – or induced seismicity – in western Canada, with a particular focus on Alberta. Her team has found evidence of a significant increase in the number of earthquakes in the region over the last five years or so. More than half of those appear to be related to hydraulic fracturing.
These findings are included in the new Human-Induced Earthquake Database – or HiQuake – which contains 728 examples of earthquakes (or sequences of earthquakes) that may have been set off by humans over the past 149 years.
While her team has uncovered evidence linking hydraulic fracturing to an increase in earthquakes, research also suggests a link between earthquakes and wastewater disposal in Alberta.
“Only a relatively small fraction of these earthquakes are purely tectonic or natural – so most of the seismicity we see in western Alberta and eastern British Columbia appears to be related to the oil and gas industry,” Atkinson noted.
“And that’s been raising a whole host of new issues in terms of how we should be planning and regulating hydraulic fracturing and oil and gas activity so we’re not causing unacceptable hazards – from seismic activity in particular – and ensuring we don’t conduct fracturing operations close to major infrastructure, such as major dams or critical facilities that we don’t want to damage.”
With all these findings of human-induced seismicity emerging, and a new encyclopedic database storing the instances, researchers have been positioning themselves at the interface of science and public policy in order to mitigate damage caused by human-triggered earthquakes, she continued.
“We’ve been trying to translate that knowledge into suggested guidelines, for example, for exclusion zones around critical infrastructures. We’ve suggested there shouldn’t be any hydraulic fracturing within 5 km of major dams or critical infrastructure,” Atkinson said.
“That’s the beginning. We’re working with regulators and policy-makers to try to get those ideas out there. The ideas are gaining traction. With some of the larger players – oil companies, the Canadian Association of Petroleum Producers, and so on – if we can get them to start building that kind of thinking into best practices, that might actually be more achievable than regulation, which seems difficult to enforce. We’ve certainly started a dialogue; we have people talking. But how to translate findings into concrete policy, that is going to take time.”
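To make the exclusion-zone idea concrete, the sketch below shows how a proposed well pad might be screened against the suggested 5 km buffer around critical infrastructure. The coordinates and the haversine helper are hypothetical illustrations of the screening logic, not a regulatory tool.

```python
# Toy screen for the suggested 5 km exclusion zone: flag a proposed well pad
# that falls within 5 km of critical infrastructure. All coordinates here
# are invented placeholders.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

dam = (54.22, -118.80)        # hypothetical major dam
well_pad = (54.25, -118.76)   # hypothetical proposed well pad

d = haversine_km(*dam, *well_pad)
print(f"{d:.1f} km", "-> inside 5 km exclusion zone" if d < 5.0 else "-> clear")
```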
Having something like HiQuake compile all documented instances of human-triggered earthquakes in one place makes it easier for researchers to conduct studies establishing links between industrial activities and the earthquakes they may set off, Atkinson continued, adding this opens the possibility of, at the very least, mitigating damage caused by such events.
“Unlike with natural earthquake hazards, we can do something about this. That’s what really motivates us. Whereas, with natural hazards, you can’t do anything about it, other than be prepared. You can’t stop an earthquake from happening; you can’t predict where it might happen. Similarly, with other natural disasters like hurricanes, you can be prepared, but you can’t stop it.
“This is something within our power to control. We really do have an opportunity here to make sure we don’t cause a major environmental disaster through actions we’ve taken that we didn’t need to take.”
New Macquarie University research, published in the journal Proceedings of the Royal Society B, has shown that birds and pterosaurs in fact co-existed peacefully for millions of years, contrary to the long-held belief that birds competitively displaced pterosaurs.
It had previously been suggested that birds and pterosaurs competed with each other during the Cretaceous, the geological period that ended about 66 million years ago, and that this competition drove pterosaurs to evolve larger body sizes to avoid overlap with the smaller birds. However, after comparing jaw sizes, limb proportions and other functional characteristics not explored in previous studies, lead author Dr Nicholas Chan says this is not the case.
The research used morphospaces, a way of mapping the forms of organisms, and found distinct ecological separation between the two groups based on size, features of the wings and legs, and feeding adaptations. In other words, the two were not long-term competitors. Under the competition hypothesis, by contrast, birds would have been the driver pushing pterosaurs to evolve into larger species in order to avoid competition for resources.
“Any competition between the two groups was likely localised and over relatively short periods of time,” says Dr Chan, from the Department of Biological Sciences at Macquarie University.
“While previous research only compared the limb bones of the two groups, our research compared jaw lengths and wing and leg proportions in order to determine functionally-equivalent traits, and found that there was very little ecomorphological overlap between the two.”
“The difference in the species’ functional morphology means that both groups co-existed without ongoing competition. Birds had shorter mid-wings, longer metatarsals, and shorter jaws. So they likely flew, walked, and fed differently from pterosaurs.”
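The morphospace approach can be sketched in a few lines of Python: build a trait matrix, centre it, project it onto its principal axes, and check whether the two groups occupy separate regions. The snippet below is only a toy illustration of that workflow; the trait values are invented placeholders, not measurements from the paper.

```python
# Toy morphospace: PCA (via SVD) on a small trait matrix. All numbers are
# hypothetical placeholders, chosen only to illustrate group separation.
import numpy as np

# rows = specimens; columns = relative jaw length, mid-wing proportion,
# metatarsal proportion (size-standardised, as in the traits compared above)
birds = np.array([[0.80, 0.35, 0.30],
                  [0.70, 0.33, 0.28],
                  [0.90, 0.36, 0.31]])
pterosaurs = np.array([[1.40, 0.55, 0.12],
                       [1.50, 0.52, 0.10],
                       [1.30, 0.57, 0.13]])

X = np.vstack([birds, pterosaurs])
X = X - X.mean(axis=0)                    # centre the trait matrix
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:2].T                     # coordinates on the first two axes

# Well-separated group centroids in this 2-D morphospace indicate little
# ecomorphological overlap between the groups.
print("bird centroid:      ", scores[:3].mean(axis=0))
print("pterosaur centroid: ", scores[3:].mean(axis=0))
```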
Reference:
Nicholas R. Chan. Morphospaces of functionally analogous traits show ecological separation between birds and pterosaurs, Proceedings of the Royal Society B: Biological Sciences (2017). DOI: 10.1098/rspb.2017.1556
Until recently, glaciers in the United States have been measured in two ways: placing stakes in the snow, as federal scientists have done each year since 1957 at South Cascade Glacier in Washington state; or tracking glacier area using photographs from airplanes and satellites.
We now have a third, much more powerful tool. While he was a doctoral student in University of Washington’s Department of Earth and Space Sciences, David Shean devised new ways to use high-resolution satellite images to track elevation changes for massive ice sheets in Antarctica and Greenland. Over the years he wondered: Why aren’t we doing this for mountain glaciers in the United States, like the one visible from his department’s office window?
He has now made that a reality. In 2012, he first asked for satellite time to turn digital eyes on glaciers in the continental U.S., and he has since collected enough data to analyze mass loss for Mount Rainier and almost all the glaciers in the lower 48 states. He will present results from these efforts Oct. 22 at the Geological Society of America’s annual meeting in Seattle.
“I’m interested in the broad picture: What is the state of all of the glaciers, and how has that changed over the last 50 years? How has that changed over the last 10 years? And at this point, how are they changing every year?” said Shean, who is now a research associate with the UW’s Applied Physics Laboratory.
The maps provide a twice-yearly tally of roughly 1,200 mountain glaciers in the lower 48 states, down to a resolution of about 1 foot. Most of those glaciers are in Washington state, with others clustered in the Rocky Mountains of Montana, Wyoming and Colorado, and in California’s Sierra Nevada.
To create the maps, a satellite camera roughly half the size of the Hubble Space Telescope must take two images of a glacier from slightly different angles. As the satellite passes overhead, moving at about 4.6 miles per second, it takes images a few minutes apart. Each pixel of the image covers 30 to 50 centimeters (about 1 foot) and a single image can be tens of miles across.
Shean’s technique uses automated software that matches millions of small features, such as rocks or crevasses, in the two images. It then uses the difference in perspective to create a 3-D model of the surface.
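The underlying computation is classic stereo photogrammetry: find the same feature in both views, measure its parallax, and convert that to height. Below is a minimal sketch of the idea using OpenCV’s semi-global block matcher; the file names, focal length, baseline and altitude are hypothetical stand-ins, not the project’s actual pipeline or parameters.

```python
# Minimal sketch of stereo photogrammetry with OpenCV's semi-global matcher.
# File names and camera geometry are hypothetical placeholders.
import cv2
import numpy as np

# Two views of the same glacier, taken minutes apart from different angles.
left = cv2.imread("glacier_pass1.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("glacier_pass2.tif", cv2.IMREAD_GRAYSCALE)

# Densely match small surface features (rocks, crevasses) between the views.
# numDisparities must be a multiple of 16; blockSize is the matching window.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # in pixels

# Standard stereo relation: distance = focal_length * baseline / disparity.
focal_px, baseline_m, altitude_m = 1.5e6, 300e3, 770e3   # illustrative values
depth_m = focal_px * baseline_m / np.where(disparity > 0, disparity, np.nan)
elevation_m = altitude_m - depth_m   # a 3-D surface model, one value per pixel
```

Differencing two such elevation models of the same glacier from different years yields the elevation change, and hence the mass change, that the project tracks.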
The first such map of a Mount St. Helens glacier was obtained in 2012, and the first for Mount Rainier in 2014. The project has grown steadily since then to include more glaciers every year.
The results confirm stake measurements at South Cascade Glacier, showing significant loss over the past 60 years. Results at Mount Rainier also reflect the broader shrinking trends, with the lower-elevation glaciers being particularly hard hit. Shean estimates cumulative ice loss of about 0.7 cubic kilometers (900 million cubic yards) at Mount Rainier since 1970. Distributed evenly across all of Mount Rainier’s glaciers, that’s equivalent to removing a layer of ice about 25 feet (7 to 8 meters) thick.
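The quoted thickness follows from dividing the lost volume by the glacier area. A quick back-of-envelope check, assuming a combined Mount Rainier glacier area of roughly 90 square kilometers (a figure consistent with, but not stated in, the article):

```python
# Back-of-envelope check of the ice-loss figures quoted above.
volume_m3 = 0.7e9     # ~0.7 cubic kilometers of ice lost since 1970
area_m2 = 90e6        # assumed combined glacier area, ~90 km^2 (hypothetical)

thickness_m = volume_m3 / area_m2
print(f"{thickness_m:.1f} m (~{thickness_m * 3.281:.0f} ft)")  # ~7.8 m, ~26 ft
```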
“There are some big changes that have happened, as anyone who’s been hiking on Mount Rainier in the last 45 years can attest to,” Shean said. “For the first time we’re able to very precisely quantify exactly how much snow and ice has been lost.”
The glacier loss at Rainier is consistent with trends for glaciers across the U.S. and worldwide. Tracking the status of so many glaciers will allow scientists to further explore patterns in the changes over time, which will help pinpoint the causes — from changes in temperature and precipitation to slope angle and elevation.
“The next step is to integrate our observations with glacier and climate models and say: Based on what we know now, where are these systems headed?” Shean said.
Those predictions could be used to better manage water supplies and flood risks.
“We want to know what the glaciers are doing and how their mass is changing, but it’s important to remember that the meltwater is going somewhere. It ends up in rivers, it ends up in reservoirs, it ends up downstream in the ocean. So there are very real applications for water resource management,” Shean said. “If we know how much snow falls on Mount Rainier every winter, and when and how much ice melts every summer, that can inform water resource managers’ decisions.”
Davidsmithite, a newly approved feldspathoid mineral (IMA 2016-070), occurs as a rock-forming mineral in the Liset eclogite pod (Norwegian Caledonides).
The approved electron-microprobe analysis gave the crystal-chemical formula (where $\square$ denotes a vacancy):
$\left([\mathrm{Ca}_{0.636}\square_{0.636}]\,\square_{0.414}\mathrm{K}_{0.165}\mathrm{Na}_{0.149}\right)_{\Sigma 2.000}\mathrm{Na}_{6.000}\left(\mathrm{Al}_{7.863}\mathrm{Fe}^{3+}_{0.019}\right)_{\Sigma 7.882}\mathrm{Si}_{8.192}\mathrm{O}_{32}$
Davidsmithite completes the compositional space of the nepheline-structure group by providing a new root composition, $(\mathrm{Ca}\,\square)_2\mathrm{Na}_6\mathrm{Al}_8\mathrm{Si}_8\mathrm{O}_{32}$. It is the Ca-analogue of classical nepheline, to which it is related by the heterovalent substitution of $2\mathrm{K}^{+}$ by $[\mathrm{Ca}^{2+} + \square]$. Most of the Ca$^{2+}$ ions are situated in the same atomic position as K$^{+}$ in nepheline, but some occur in a new and disordered (Ca′) atomic position, whose centre is shifted by 2.18 Å along the 6-fold axis.
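Written out, the charge bookkeeping behind this heterovalent substitution is simple: two monovalent K$^{+}$ cations are exchanged for one divalent Ca$^{2+}$ plus a vacancy, leaving the framework charge unchanged:

\[
2\,\mathrm{K}^{+} \;\longrightarrow\; \mathrm{Ca}^{2+} + \square,
\qquad
2 \times (+1) \;=\; (+2) + 0 .
\]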
The studied samples show some solid-solution towards the other two possible end-members of the nepheline compositional space, so that the channel site contains all of Ca and K in the unit formula, with some Na and ◰. In the Liset eclogite pod, davidsmithite occurs in retrogressed, formerly jadeite-rich zones; it commonly overgrows lisetite and is associated with albitic plagioclase and taramitic amphibole.
This eclogite occurrence is noted for its bulk-rock compositions rich in (Na + Al) and poor in (K + Mg). The paucity of K prevented the growth of nepheline, and the paucity of Si in the precursor jadeite led to the growth of a feldspathoid (davidsmithite) as well as of lisetite; a feldspar (albite or oligoclase) also occurs nearby.
Sid-Ali Kechid, Gian Carlo Parodi, Sylvain Pont, Roberta Oberti. Davidsmithite, (Ca,◰)2Na6Al8Si8O32: a new, Ca-bearing nepheline-group mineral from the Western Gneiss Region, Norway. European Journal of Mineralogy, 29 (2017). DOI: 10.1127/ejm/2017/0029-2667. First published June 28, 2017.
Smith, D.C., Kechid, S.-A. and Rossi, G. (1986): Occurrence and properties of lisetite, CaNa2Al4Si4O16, a new tectosilicate in the system Ca-Na-Al-Si-O. American Mineralogist, 71, 1372-1377. [as a K-poor, Ca-rich nepheline-structure mineral]
Kechid, S.-A., Oberti, R., Rossi, G., Parodi, G., Pont, S. (2016): Davidsmithite, IMA 2016-070. CNMNC Newsletter No. 34, December 2016, page 1317; Mineralogical Magazine, 80, 1315-1321.
Turquoise is an icon of the desert Southwest, with enduring cultural significance, especially for Native American communities. Yet, relatively little is known about the early history of turquoise procurement and exchange in the region.
University of Arizona researchers are starting to change that by blending archaeology and geochemistry to get a more complete picture of the mineral’s mining and distribution in the region prior to the 16th-century arrival of the Spanish.
In a new paper, published in the November issue of the Journal of Archaeological Science, UA anthropology alumnus Saul Hedquist and his collaborators revisit what once was believed to be a relatively small turquoise mine in eastern Arizona. Their findings suggest that the Canyon Creek mine, located on the White Mountain Apache Indian Reservation, was actually a much more significant source of turquoise than previously thought.
With permission from the White Mountain Apache Tribe, Hedquist and his colleagues visited the now essentially exhausted Canyon Creek source—which has been known to archaeologists since the 1930s—to remap the area and collect new samples. There, they found evidence of previously undocumented mining areas, which suggest the output of the mine may have been 25 percent higher than past surveys indicated.
“Pre-Hispanic workings at Canyon Creek were much larger than previously estimated, so the mine was clearly an important source of turquoise while it was active,” said Hedquist, lead author of the paper, who earned his doctorate from the UA School of Anthropology in the College of Social and Behavioral Sciences in May.
In addition, the researchers measured ratios of lead and strontium isotopes in samples they collected from the mine, and determined that Canyon Creek turquoise has a unique isotopic fingerprint that distinguishes it from other known turquoise sources in the Southwest. The isotopic analysis was conducted in the lab of UA College of Science Dean Joaquin Ruiz in the Department of Geosciences by study co-author and UA geosciences alumna Alyson Thibodeau. Now an assistant professor at Dickinson College in Pennsylvania, Thibodeau did her UA dissertation on isotopic fingerprinting of geological sources of turquoise throughout the Southwest.
“If you pick up a piece of turquoise from an archaeological site and say ‘where does it come from?’ you have to have some means of telling the different turquoise deposits apart,” said David Killick, UA professor of anthropology, who co-authored the paper with Hedquist, Thibodeau and John Welch, a UA alumnus now on the faculty at Simon Fraser University. “Alyson’s work shows that the major mining areas can be distinguished by measurement of major lead and strontium isotopic ratios.”
Based on the isotopic analysis, researchers were able to confidently match turquoise samples they collected at Canyon Creek to several archaeological artifacts housed in museums. Their samples matched artifacts that had been uncovered at sites throughout much of east-central Arizona—some more than 100 kilometers from the mine—suggesting that distribution of Canyon Creek turquoise was broader than previously thought, and that the mine was a significant source of turquoise for pre-Hispanic inhabitants of the Mogollon Rim area.
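In practice, sourcing of this kind amounts to a nearest-fingerprint match in isotope-ratio space. The Python sketch below illustrates the logic with a simple nearest-centroid rule; the ratio values, and every source name except Canyon Creek, are invented placeholders rather than data from the study.

```python
# Hedged sketch of the sourcing logic: assign an artifact to the source whose
# isotopic fingerprint it sits closest to. All ratio values, and every source
# name except Canyon Creek, are invented placeholders.
import numpy as np

# (87Sr/86Sr, 206Pb/204Pb) centroids for known turquoise sources
sources = {
    "Canyon Creek":        np.array([0.7095, 18.90]),
    "hypothetical mine A": np.array([0.7120, 19.40]),
    "hypothetical mine B": np.array([0.7060, 18.20]),
}

def nearest_source(artifact, scale=np.array([0.001, 0.5])):
    """Nearest-centroid match in rescaled isotope-ratio space."""
    return min(sources,
               key=lambda name: np.linalg.norm((artifact - sources[name]) / scale))

artifact = np.array([0.7097, 18.95])   # a hypothetical museum specimen
print(nearest_source(artifact))        # -> "Canyon Creek"
```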
The researchers also were able to pinpoint when the mine was most active. Their samples matched artifacts found at sites occupied between A.D. 1250 and 1400, suggesting the mine was primarily used in the late 13th and/or 14th centuries.
“Archaeologists have struggled for decades to find reliable means of sourcing archaeological turquoise—linking turquoise artifacts to their geologic origin—and exploring how turquoise was mined and traded throughout the greater pre-Hispanic Southwest,” said Hedquist, who now lives in Tempe, Arizona, and works as an archaeologist and ethnographer for Logan Simpson Inc., a cultural resources consulting firm. “We used both archaeology and geochemistry to document the extent of workings at the mine, estimate the amount of labor spent at the mine and identify turquoise from the mine in archaeological assemblages.”
Research Paves Way for Future Studies
Turquoise is a copper mineral, found only immediately adjacent to copper ore deposits. While detailed documentation of pre-Hispanic turquoise mines is limited, the work at Canyon Creek could pave the way for future investigations.
“I think our study raises the bar a bit by combining archaeological and geochemical analyses to gain a more complete picture of operations at one mine: when it was active, how intensely it was mined and how its product moved about the landscape,” Hedquist said. “Researchers have only recently developed a reliable means of sourcing the mineral, so there’s plenty of potential for future research.”
Similar work involving the UA is already underway to explore the origin of turquoise artifacts found at the Aztec capital of Tenochtitlan in Mexico.
“Canyon Creek is but one of many ancient turquoise mines,” Hedquist said. “This study provides a standard for the detailed documentation of ancient mineral procurement and a framework for linking archaeological turquoise to specific geologic locations. Building on other archaeological patterns—the circulation of pottery and flaked stone artifacts, for example—we can piece together the social networks that facilitated the ancient circulation of turquoise in different times and places.”
A better understanding of the pre-Hispanic history of turquoise is important not only to archaeologists and mining historians but to modern Native Americans, Killick said.
“It’s of great interest to modern-day Apache, Zuni and Hopi, whose ancestors lived in this area, because turquoise continues to be ritually important for them,” he said. “They really have shown a great deal of interest in this work, and they’ve encouraged it.”
Reference:
Saul L. Hedquist et al, Canyon Creek revisited: New investigations of a late prehispanic turquoise mine, Arizona, USA, Journal of Archaeological Science (2017). DOI: 10.1016/j.jas.2017.09.004
Whether the fascination started with exhibits at the Natural History Museum or with delighted, terrified screams while watching Jurassic Park, humans have always been awestruck by dinosaurs.
But little is known about what, if any, role dinosaurs and other large animals like mammoths or elephants play in ecosystem functioning. What would the world be like if they never existed?
Christopher Doughty, faculty member in the School of Informatics, Computing and Cyber Systems at Northern Arizona University, asks that question often. He has been studying large animals for more than 10 years, specifically how these animals have increased the planet’s fertility.
“Theory suggests that large animals are disproportionately important to the spread of fertility across the planet,” Doughty said. “What better way to test this than to compare fertility in the world during the Cretaceous period — when sauropods, the largest herbivores ever to exist, roamed freely — to the Carboniferous period — a time in Earth’s history before four-legged herbivores evolved.”
During these two periods, plants were buried faster than they could decompose. As a result, coal was formed. Doughty gathered coal samples from mines throughout the U.S. By measuring the coals’ elemental concentrations, he found that elements needed by plants, like phosphorus, were more abundant and much better distributed during the era of the dinosaurs than in the Carboniferous. The data also revealed that elements not needed by plants and animals, such as aluminum, showed no difference, suggesting the herbivores contributed to increased global fertility.
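The comparison Doughty describes boils down to a two-way contrast: for each element, compare its mean abundance and its spread (here, the coefficient of variation) across mines between the two periods. The Python sketch below illustrates that logic; the concentration values are invented placeholders, not Doughty’s data.

```python
# Sketch of the coal comparison described above: plant-needed elements (P)
# should be more abundant and more evenly spread in Cretaceous coals, while
# plant-irrelevant elements (Al) should not differ. All concentrations are
# hypothetical placeholders.
import numpy as np

def spread(samples):
    """Coefficient of variation: lower = more evenly distributed."""
    s = np.asarray(samples, dtype=float)
    return s.std() / s.mean()

phosphorus = {"Carboniferous": [80, 300, 40, 500, 60],        # ppm, hypothetical
              "Cretaceous":    [220, 260, 240, 300, 210]}
aluminum   = {"Carboniferous": [9000, 11000, 10000, 9500, 10500],
              "Cretaceous":    [9800, 10200, 9900, 10400, 9700]}

for element, by_period in [("P", phosphorus), ("Al", aluminum)]:
    for period, conc in by_period.items():
        print(f"{element:2} {period:13} mean={np.mean(conc):7.0f}  CV={spread(conc):.2f}")
```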
According to Doughty, these large animals are important not for the quantity of dung they produce, but for their ability to move long distances across landscapes, effectively mixing the nutrients. By increasing the abundance and distribution of elements like phosphorus, plants grow faster, meaning large herbivores are responsible for producing their own food and contributing to their lush habitats.
But as today’s large animal populations come under growing threat of extinction, the environment, too, is at risk. Simply put, fewer large animals may mean less plant growth.
“This is important for two reasons,” Doughty said. “First, we are rapidly losing our remaining large animals, like forest elephants, and this loss will critically impair the future functioning of these ecosystems by reducing their fertility. Second, combining the idea that large animals are disproportionately important for the spread of nutrients with the natural rule that animal size increases over time, means the planet may have a Gaia-like mechanism of increasing fertility over time. Life makes the planet easier for more life.”