17 Feb 2017: Brazilian Pepper Tree Berry Juice stops MRSA

Ann Morrison
By Ann Morrison February 27, 2017 21:19

am170217 Update as of 17 Feb 2017

John Moore, The Liberty Man, & Ann

Dr. Bill Deagle, Nutrimedical Report, & Ann

Contents

Biosecurity: Zika

Biosecurity: Superbug

Biosecurity: Poison

Biosecurity: Chimeras CRISPR-Cas9

Radiation

Preparation

Constitution – Crony Capitalism

Climate Change

Seismic

 

______________________________________________________________

.

Climate Change

Arctic Ozone Watch 15 February 2017


NASA

National Aeronautics and Space Administration Goddard Space Flight Center

https://ozonewatch.gsfc.nasa.gov/Scripts/big_image.php?date=2017-02-15&hem=N

.

Antarctic Ozone Hole Watch 15 February 2017


NASA

https://ozonewatch.gsfc.nasa.gov/ozone_maps/images/Y2017/M02/OZONE_D2017-02-07_G%5E716X716.IOMI_PAURA_V8F_MGEOS5FP_LSH.PNG

.

Strong northern jet stream offshore Pacific Coast!

2017-02-17 1500 UTC Wind at 250hPa

Nullschool

https://earth.nullschool.net/#current/wind/isobaric/250hPa/orthographic/loc=-137.170,35.215

.

Current Large Fire Incidents 17 February 2017

34 Large Fires

Large Fires in US on 2017-02-17

National Interagency Fire Center

https://fsapps.nwcg.gov/afm/

National Interagency Coordination Center

Incident Management Situation Report Friday, February 17, 2017 – 0800 MT

National Preparedness Level 1

National Fire Activity (Feb. 10 – Feb. 16)

Initial attack activity: Light (926 new fires)
New large incidents: 29
Large fires contained: 21
Uncontained large fires**: 13
Area Command Teams Committed: 0
NIMOs committed: 0
Type 1 IMTs committed: 0
Type 2 IMTs committed: 0

 

**Uncontained large fires include only fires being managed under a full suppression strategy.

Southern Area (PL 1)

New fires: 845
New large incidents: 24
Uncontained large fires: 13

 

* Three Forks #8, Florida Forest Service. Twelve miles west of Palm Bay, FL. Southern rough. Moderate fire behavior.

* CR630 E, Florida Forest Service. Ten miles east of Frostproof, FL. Southern rough. Moderate fire behavior.

* T Island, North Carolina Forest Service. Nine miles northwest of Holly Ridge, NC. Dormant brush and hardwood slash. Active fire behavior with creeping.

* Spike Road, Northeast Area, Oklahoma DOF. Thirteen miles east of Muskogee, OK. Hardwood litter. Active fire behavior with flanking.

* Ball Diamond, Northeast Area, Oklahoma DOF. Eight miles southwest of Tahlequah, OK. Hardwood litter. Minimal fire behavior.

* Talala, Central and Western, Oklahoma DOF. Four miles west of Talala, OK. Tall grass and brush. Active fire behavior with wind driven runs. Residences threatened.

* OK-OSA-017015, Osage Agency, BIA. One mile west of Prue, OK. Tall grass. Minimal fire behavior.

* East Bimini, Florida Forest Service. Six miles southwest of Dupont, FL. Southern rough. Moderate fire behavior.

* Wagon Wheel, East Central Area, Oklahoma DOF. Twenty miles south of McAlester, OK. Hardwood litter and short grass. No new information. Last report unless significant activity occurs.

* Strawberry Hollow, Northeast Area, Oklahoma DOF. Four miles northeast of Roland, OK. Tall grass and brush. Minimal fire behavior. Structures threatened.

* Badger McCoy, Northeast Area, Oklahoma DOF. Two miles north of Sallisaw, OK. Tall grass. Minimal fire behavior. Structures threatened.

* Porum, East Central Area, Oklahoma DOF. Six miles northeast of Porum, OK. Timber. Active fire behavior.

* Eagle Pass, Northeast Area, Oklahoma DOF. Nine miles southeast of Stilwell, OK. Timber. Minimal fire behavior. Residences threatened.

 

*** Changes in some agency YTD acres reflect more accurate mapping or reporting adjustments. ***

Additional wildfire information is available through the Geographic Areas at http://gacc.nifc.gov/

Predictive Services Discussion: More than a foot of rain is expected across portions of the mountains of Northern California through Tuesday night as a series of wet and warm systems moves onshore, once again elevating flooding concerns. Critical fire weather conditions may be possible Saturday across West Texas and New Mexico as a breezy, dry southerly flow develops. Florida may see minor drought relief Saturday and again Thursday as a pair of wet frontal systems tracks southeast across the state. The Central and Southern Plains will continue to be warm and dry except Monday night and Tuesday, when a fast-moving front brings rain to mainly Texas and Oklahoma. Southern Missouri and southeastern Kansas could see some moisture as well. The Central and Northern Rockies and the Pacific Northwest will remain entrenched in an overall cool and wet pattern that will promote continued snowpack accrual. The Northern Plains and New England should be mostly warm and dry through the period. Alaska will receive needed snowfall as a pair of systems impacts the state this weekend and again late next week.

http://www.predictiveservices.nifc.gov/outlooks/outlooks.htm

 

.

Rare national fire advisory issued for drought-hit Oklahoma; areas of Kansas could have similar wildfires

Posted February 1, 2017, 04:30 pm, by Justin Juozapavicius, The Associated Press. TULSA, Okla. —

Oklahoma has been placed under a national fire advisory as much of the state struggles with unrelenting drought and tinder-dry vegetation capable of igniting and quickly spreading out of control, state forestry officials said Wednesday.

The rare advisory — and the first for Oklahoma — issued by the National Interagency Fire Center in Boise, Idaho, is in effect for two weeks and warns residents and fire departments to prepare for potentially severe wildfires.

The national center also cautioned that areas in the neighboring states of Texas, Colorado, New Mexico and Kansas could be ripe for similar extreme wildfires through February. While only two of Oklahoma’s 77 counties are currently under a burn ban, Oklahoma Forestry Services officials cautioned residents Wednesday to “avoid doing anything that can cause a spark.”

The ingredients for a potentially disastrous fire outbreak are already in place in the mounds of accumulated limbs, dry brush, leaves and needles from years of ice storms and tornadoes that carpet forest floors.

“The situation we’re in with the state of our fuels, drought, dryness of fields, fires become more resistant to control under these conditions,” said Mark Goeller, the forestry services fire management chief. “Pine needles, logs, big-diameter woody material … the things from these natural disasters we’ve experienced throughout the state, all those fuels are critically dry.”

Oklahoma is just entering its peak fire season of February and March, but dozens of wildfires have already scorched thousands of acres in the past two months.

Last week, a fast-moving wildfire 30 miles north of Oklahoma City devoured two homes, outbuildings and hay bales. In December, seven fires broke out in one afternoon across Oklahoma City; another consumed 200 acres near the town of Tecumseh the same month.

“We’re just entering a period where things can get really dicey with the fire situation,” said state climatologist Gary McManus. “We have set ourselves up for more of a damaging wildfire season.”

Oklahoma is enduring a drought that has lasted for several years. Except for a handful of counties in the very southwestern part of the state, most of Oklahoma is in some state of drought, according to the latest data from the U.S. Drought Monitor.

Conditions are worse in southeastern Oklahoma, where roughly eight counties are in an extreme drought, according to the monitor.

Wheat farmer Jim Freudenberger said Wednesday he’s keeping closer watch over his 750-acre property in Coyle, a town in north-central Oklahoma where several wildfires have recently broken out.

“Seems every time the wind gets up good, there’s a fire somewhere,” he said.

The Topeka Capital-Journal © 2017. All Rights Reserved.
Accessed at http://cjonline.com/news/2017-02-01/rare-national-fire-advisory-issued-drought-hit-oklahoma-areas-kansas-could-have# on February 12, 2017.

.

Limited rain leads to dry conditions in mid-Missouri

By: Brigit Mahoney, Posted: Feb 11, 2017 08:19 PM CST. Updated: Feb 11, 2017 08:27 PM CST

The majority of mid-Missouri is classified as “abnormally dry,” according to the United States Drought Monitor. Parts of Randolph, Macon, and Monroe counties are considered to be in a “moderate drought.” Abnormally dry areas are areas where there is short term dryness and where slowing of plant and crop growth is common. In areas where there is a moderate drought, we can expect to see damage to crops and a low amount of water in streams or reservoirs.

In mid-Missouri, we can expect to see these drought conditions continue for the time being since there is no chance of rain in the forecast for the entire week. These mild and dry conditions also enhance the risk of fires. It is important to be cautious of any outdoor burning during these times, especially when windy conditions are present. High winds help these outdoor fires spread easily.

Copyright 2017 KMIZ
Accessed at http://www.abc17news.com/weather/limited-rain-leads-to-continued-dry-conditions-in-mid-missouri/328432711 on February 12, 2017.

.

URGENT – FIRE WEATHER MESSAGE


From: AccuWeather.com Alert <inbox@messaging.accuweather.com>

Date: Thu, Feb 16, 2017 at 2:02 PM

Subject: AccuWeather.com Alert(tm) (ADCA264373541)

 

Your Severe Weather Watches and Warnings

CHESTERFIELD, MO

URGENT – FIRE WEATHER MESSAGE

National Weather Service Saint Louis MO

202 PM CST Thu Feb 16 2017

Adams IL-Calhoun IL-Crawford MO-Jefferson MO-Knox MO-Lewis MO-

Marion MO-Pike IL-Pike MO-Ralls MO-Saint Charles MO-

Saint Louis City MO-Saint Louis MO-Shelby MO-Washington MO-

202 PM CST Thu Feb 16 2017

…RED FLAG WARNING IN EFFECT UNTIL 6 PM CST THIS EVENING FOR WIND AND LOW RELATIVE HUMIDITY FOR MOST OF EASTERN MISSOURI AND PARTS OF WEST CENTRAL ILLINOIS…

The National Weather Service in Saint Louis has issued a Red Flag Warning for wind and low relative humidity…which is in effect until 6 PM CST this evening.

* AFFECTED AREA…In Illinois…Fire Weather Zones 095…097 and 098. In Missouri…Fire Weather Zones 018…019…026…027… 035…036…061…063…064…065…072 and 073.

* 20 FOOT WINDS…Southwest 10 to 20 mph with gusts up to 30 mph.

* RELATIVE HUMIDITY…As low as 18 percent.

* 10 HOUR FUELS…less than 9 percent.

* IMPACTS…these conditions are conducive for the rapid spread and growth of uncontrolled wildfires.

PRECAUTIONARY/PREPAREDNESS ACTIONS…

A Red Flag Warning means that critical fire weather conditions are either occurring now… or will shortly. A combination of strong winds… low relative humidity… and dry fuels can contribute to extreme fire behavior.


&&

Glass

AccuWeather.com

385 Science Park Road

State College, PA 16803

Copyright (c) 2017 AccuWeather, Inc. All Rights Reserved. | Terms & Conditions | Privacy Statement

.

Global Ocean De-Oxygenation Quantified

Nature: Dissolved oxygen change in the global ocean

Collaborative Research Center (SFB) 754 funded by the German Research Foundation (DFG) at the Kiel University and GEOMAR.

Press Release
11/2017 | 15 February 2017 / Kiel.
The first in-depth study of the observed global ocean oxygen content was just published by Kiel scientists in Nature.

Ongoing global change is raising ocean temperatures and altering ocean circulation. As a result, less oxygen is dissolved in surface waters and less oxygen is transported into the deep sea. This reduction of the oceanic oxygen supply has major consequences for organisms in the ocean. In the international journal Nature, oceanographers of the GEOMAR Helmholtz Centre for Ocean Research Kiel have now published the most comprehensive analysis of oxygen loss in the world’s oceans and its causes to date.

Oxygen is an essential necessity of life on land, and the same applies to almost all organisms in the ocean. However, the oxygen supply in the oceans is threatened by global warming in two ways: warmer surface waters take up less oxygen than colder waters, and warmer water stabilizes the stratification of the ocean. This weakens the circulation connecting the surface with the deep ocean, so less oxygen is transported into the deep sea. Many models therefore predict a decrease in the global oceanic oxygen inventory due to global warming. The first global evaluation of millions of oxygen measurements seems to confirm this trend and points to the first impacts of global change.

In the renowned scientific journal Nature, the oceanographers Dr. Sunke Schmidtko, Dr. Lothar Stramma and Prof. Dr. Martin Visbeck from GEOMAR Helmholtz Centre for Ocean Research Kiel have just published the most comprehensive study of global oxygen content in the world’s oceans so far. It demonstrates that the ocean’s oxygen content has decreased by more than two percent over the last 50 years. “Since large fishes in particular avoid or do not survive in areas with low oxygen content, these changes can have far-reaching biological consequences,” says Dr. Schmidtko, the lead author of the study.

The researchers used all historic oxygen data available around the world for their work, supplemented it with current measurements and refined the interpolation procedures to more accurately reconstruct the development of the oxygen budget over the past 50 years. In some areas previous research had already shown a decrease in oxygen. “To quantify trends for the entire ocean, however, was more difficult since oxygen data from remote regions and the deep ocean is sparse,” explains Dr. Schmidtko, “we were able to document the oxygen distribution and its changes for the entire ocean for the first time. These numbers are an essential prerequisite for improving forecasts for the ocean of the future.”

Dissolved Oxygen by column in oceans

The study also shows that, with the exception of a few regions, the oxygen content decreased throughout the entire ocean during the period investigated. The greatest loss was found in the North Pacific. “While the slight decrease of oxygen in the atmosphere is currently considered non-critical, the oxygen losses in the ocean can have far-reaching consequences because of the uneven distribution. For fisheries and coastal economies this process may have detrimental consequences,” emphasizes the co-author Dr. Lothar Stramma.

“However, with measurements alone, we cannot explain all the causes,” adds Professor Martin Visbeck, “natural processes occurring on time scales of a few decades may also have contributed to the observed decrease.” However, the results of the research are consistent with most model calculations that predict a further decrease in oxygen in the oceans due to higher atmospheric carbon dioxide concentrations and consequently higher global temperatures.

The new study is an important result for the ongoing work in the Collaborative Research Center (SFB) 754 funded by the German Research Foundation (DFG) at the Kiel University and GEOMAR. The SFB 754’s aim is to better understand the interaction between climate and biogeochemistry of the tropical ocean. “From the beginning of March onwards, four expeditions aboard the German research vessel METEOR will investigate the tropical oxygen minimum zone in the eastern Pacific off Peru. We hope to obtain further data on regional development which will also help us to better understand the global trends,” emphasizes Dr. Stramma, the expedition coordinator for the SFB.

Note:
This study was supported by the project MIKLIP (http://www.geomar.de/nc/forschen/fb1/fb1-po/projekte/miklip/) which is funded by the German Federal Ministry of Education and Research and by the Collaborative Research Center (SFB) 754 “Climate – Biogeochemical Interactions in the Tropical Ocean”.
Reference:
Schmidtko, S., L. Stramma and M. Visbeck (2017): Decline in global oceanic oxygen content during the past five decades. Nature, http://dx.doi.org/10.1038/nature21399
Links:
www.geomar.de GEOMAR Helmholtz Centre for Ocean Research Kiel
https://www.ocean-oxygen.org/ Website about the deoxygenation of the ocean
www.sfb754.de The Collaborative Research Center 754
Images:
Images are available for download at www.geomar.de/n4997-e
Contact:
Jan Steffen (GEOMAR, Communication & Media), Tel.: (+49) 0431 600-2811, presse@geomar.de

.

Oroville Dam, for 1st time in history, uses emergency spillway

Oroville Dam Spillway Damage

Hearst Newspapers © Copyright 2017 Hearst Communications, Inc.

Photo: Max Whittaker/Prime, Special to The Chronicle. Department of Water Resources personnel, left, inspect a hole that was torn in the spillway of the Oroville Dam while releasing approximately 60,000 cubic feet per second of water in advance of more rain on February 7, 2017 in Oroville, California.

By Kurtis Alexander and Filipa A. Ioannou Updated 8:10 pm, Saturday, February 11, 2017

California’s second-largest reservoir filled with so much water Saturday, thanks to extraordinary winter storms and unexpected damage to a release channel, that officials at Oroville Dam took the unprecedented step of opening the lake’s emergency spillway.

Dam operators said the maneuver posed no risk of flooding or dam failure on the Feather River, about 75 miles north of Sacramento. But the untested move sent lake water cascading down a muddy hillside where boulders and brush in the unpaved spill route threatened to wash into the river and create hazards for fish and levees downstream.

The lake’s power plant and electrical transmission towers at the foot of the dam, the nation’s tallest at 770 feet, were also being monitored for damage.

Jason Newton of the Department of Water Resources takes a picture as water pours over the emergency spillway at Oroville Dam for the first time in history.

Officials said the emergency spillway, activated at 8 a.m. Saturday, would remain in use through at least Sunday night as mountain runoff from recent storms continued to swell the lake.

“The event that we never wanted to happen, and didn’t expect to happen, has happened,” said Doug Carlson, spokesman for the California Department of Water Resources, which owns and operates the dam and reservoir. “But it has performed as we hoped it would, even though it was the first time.”

Problems for the reservoir began Tuesday when a section of the lake’s primary spillway — a concrete channel to the Feather River below that is 180 feet wide and more than 3,000 feet long — collapsed amid high-volume water releases.

The resulting craterlike hole has grown dramatically, prompting officials to ease the amount of water released out the main spillway and ultimately use the emergency channel to keep the lake from flowing over the top of the dam. The emergency spillway, which is nothing more than an open hill that drains toward the river, has not been used since the dam was built in 1968, when Ronald Reagan was governor.

Lake Oroville is a key state water-storage plant, second in carrying capacity to only Lake Shasta. It supplies water to Central Valley farms as well as several urban water agencies, including the Metropolitan Water District of Southern California and the Santa Clara Valley Water District.

The reservoir also provides flood control for downstream communities and helps regulate salinity in the Sacramento-San Joaquin River Delta.

State officials had hoped to avoid using the emergency spillway and, as late as Friday afternoon, remained optimistic that necessary releases could be handled by the main spillway. Crews, however, took the precautionary measure of clearing the emergency channel of brush, trees and other debris, which served them well when they realized Saturday morning that more water needed to be released from the lake.

Once Lake Oroville reaches 901 feet above sea level, which is 21 feet below the top of the dam, water begins to flow automatically into the emergency corridor.

The spillway was expected to release up to 12,000 cubic feet of water per second, a relatively small amount compared with the roughly 90,000 cubic feet of water per second that was pouring into the reservoir Saturday. But it was still enough to send a steady stream of water into a diversion pool below and ultimately into the Feather River.

California Department of Forestry and Fire Protection teams were running boats downriver, where they deployed floating traps to catch debris.

“It doesn’t sound like they’re retrieving a whole lot,” said Cal Fire Capt. Dan Olson. “What little they are getting they’re moving to the shoreline.”

Dam operators continued to release water out of the main spillway, too, despite its damage. About 55,000 cubic feet of water per second was being released Saturday afternoon, well short of the 270,000 cubic feet the channel was built to handle.

The total outflow from the lake, anticipated to be as much as 67,000 cubic feet per second, was not likely to create flooding problems, officials said.
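
For scale, a quick water balance from the figures above shows why the lake was still filling even with both spillways running. This is a back-of-envelope sketch using only the flow numbers reported in this article, plus the standard acre-foot conversion:

```python
# Back-of-envelope water balance for Lake Oroville on Saturday,
# using the flow figures reported above (cubic feet per second).
inflow_cfs = 90_000              # runoff pouring into the reservoir
main_spillway_cfs = 55_000       # release down the damaged main spillway
emergency_spillway_cfs = 12_000  # flow over the emergency spillway

total_outflow_cfs = main_spillway_cfs + emergency_spillway_cfs  # 67,000 cfs
net_gain_cfs = inflow_cfs - total_outflow_cfs                   # 23,000 cfs still accumulating

# 1 acre-foot = 43,560 cubic feet; 86,400 seconds in a day.
acre_feet_per_day = net_gain_cfs * 86_400 / 43_560
print(f"Total outflow: {total_outflow_cfs:,} cfs")
print(f"Net gain: {net_gain_cfs:,} cfs (~{acre_feet_per_day:,.0f} acre-feet per day)")
```

At roughly 23,000 cfs net inflow, the lake was still rising, which is why the question was whether the combined outflow would stay within the river's capacity.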

“The rated capacity of Feather River is much bigger than that, much larger,” Carlson said. “So there is no public danger. There is no expected evacuation.”

The cost of repairing the channel rose from previous estimates Saturday, to as much as $200 million, but the fix can’t be made until winter rains end and water releases are no longer necessary. An alternative option presented by state officials is to build a new spillway at another point on the lake.

Continuing to use the impaired spillway not only risks more damage to the structure, but also was posing a threat to the Hyatt Powerplant, officials said. Concrete chunks from the spillway’s tear were piling up beneath the dam, causing water to pool up behind the debris and flow toward the utility station.

“The water in the pool creates a certain amount of back pressure,” said Eric See with the Department of Water Resources. “That can lead to damage. We definitely don’t want to damage our power plant.”

The station was shut down late Friday, and as of Saturday afternoon, no damage had been reported.

Besides being unable to generate electricity when it’s closed, the power house also is unable to serve as a third release point on the reservoir. When it’s in service, the power house discharges as much as 14,000 cubic feet of water per second downriver.

Two sets of transmission towers along the emergency spillway were also at risk of collapsing as water releases softened the ground and destabilized the soil, officials said.

Water releases earlier last week on the main spillway already have turned the Feather River’s normally clear water brown with silt and debris, a problem for fish.

At the Feather River Fish Hatchery about 4½ miles downstream, where endangered salmon are reared, the cloudiness of the water was running “off the charts,” said a spokesman for the California Department of Fish and Wildlife.

With the turbidity threatening to asphyxiate the salmon, hatchery workers had been frantically collecting fish all week and trucking them to a nearby holding pond. By Saturday afternoon, 10 million salmon had been moved, officials said.

The hatchery is very important to California salmon production, said John McManus, executive director of the Golden Gate Salmon Association.

“It provides a lot of the fish that are caught in the ocean,” McManus said. “The loss of those fish would indeed be a blow to the salmon fishery.”

The cause of the spillway rupture remained unknown Saturday.

The dam and spillway passed a routine safety review in 2015, but inspectors did not specifically examine the sloped surface of the chute, doing only a visual evaluation from above. State officials did not say why a closer inspection wasn’t performed.

Improvements had been made to the spillway in 2013, officials said, though it was not immediately clear if the work was in the same area as the recent tear.

In 2010, dam managers were reprimanded by state safety officials for operating without a critical piece of equipment a year earlier, which caused a wall to collapse at the power plant and five employees to be nearly sucked out of the facility by a powerful vacuum.

Downstream Saturday, Oroville residents had no doubt that everything was under control at the lake.

“I’m not worried,” said Cooper Davis, 15, who was working an evening shift at the Boss Burger, where he could see the river rushing below. “But you can tell that something is wrong. The water is real milky.”

The height of the river, though, remained nothing out of the ordinary.

Many residents remembered the flows in 1997, when nearly twice as much water was being released from the dam during an unusually stormy winter. The season brought widespread flooding to the region.

This year is also on track to be an unusually wet one. Seasonal precipitation in the northern Sierra measured 228 percent of average for the date, as of Saturday, while snowpack across the range was at 180 percent of normal.

The forecast is for sun over the next few days. State officials hope dry weather will help lower water levels at Lake Oroville and eliminate the need for continued emergency releases.

Kurtis Alexander and Filipa A. Ioannou are San Francisco Chronicle staff writers. Email: kalexander@sfchronicle.com, fioannou@sfchronicle.com Twitter: @kurtisalexander, @obioannoukenobi
Hearst Newspapers © Copyright 2017 Hearst Communications, Inc.
Accessed at http://www.sfgate.com/bayarea/article/Oroville-Dam-emergency-spillway-in-use-for-first-10925628.php on February 12, 2017.

.

Seismic

Volcanic Activity for the week of 8 February-14 February 2017


USGS/Smithsonian

Smithsonian’s Global Volcanism Program and the US Geological Survey’s Volcano Hazards Program

The Weekly Volcanic Activity Report is a cooperative project between the Smithsonian’s Global Volcanism Program and the US Geological Survey’s Volcano Hazards Program. Updated by 2300 UTC every Wednesday, notices of volcanic activity posted on these pages are preliminary and subject to change as events are studied in more detail. This is not a comprehensive list of all of Earth’s volcanoes erupting during the week, but rather a summary of activity at volcanoes that meet criteria discussed in detail in the “Criteria and Disclaimers” section. Carefully reviewed, detailed reports on various volcanoes are published monthly in the Bulletin of the Global Volcanism Network.

http://volcano.si.edu/reports_weekly.cfm#vn_282080

.

VA SIGMETs valid 17 Feb 2017 Volcanic Ash Hazards

SABANCAYA Peru VA SIGMETs valid 17 Feb 2017 Volcanic Ash Hazards


www.aviationweather.gov

https://www.aviationweather.gov/sigmet

.

World earthquakes for the week ending 17 Feb 2017


USGS

http://earthquake.usgs.gov/earthquakes/map

.

Scientists uncover huge 1.8 million km2 reservoir of melting carbon

Map showing area of carbon discovery

© Royal Holloway, University of London


Posted on 13/02/2017

Discovery of 350km deep carbon reservoir in the mantle has important implications for understanding the carbon cycle

New research published in Earth and Planetary Science Letters describes how scientists have used the world’s largest array of seismic sensors to map a deep-Earth area of melting carbon covering 1.8 million square kilometers. Situated under the Western US, 350km beneath the Earth’s surface, the discovered melting region challenges accepted understanding of how much carbon the Earth contains – much more than previously understood.

The study, conducted by geologists at Royal Holloway, University of London’s Department of Earth Sciences, used a huge network of 583 seismic sensors that measure the Earth’s vibrations to create a picture of the area’s deep subsurface. Known as the upper mantle, this section of the Earth’s interior is recognized by its high temperatures, where solid carbonates melt, creating very particular seismic patterns.

“It would be impossible for us to drill far enough down to physically ‘see’ the Earth’s mantle, so using this massive group of sensors we have to paint a picture of it using mathematical equations to interpret what is beneath us,” said Dr Sash Hier-Majumder of Royal Holloway.

He continued, “Under the western US is a huge underground partially-molten reservoir of liquid carbonate. It is a result of one of the tectonic plates of the Pacific Ocean forced underneath the western USA, undergoing partial melting thanks to gasses like CO2 and H2O contained in the minerals dissolved in it.”

As a result of this study, scientists now understand the amount of CO2 in the Earth’s upper mantle may be up to 100 trillion metric tons. In comparison, the US Environmental Protection Agency estimates the global carbon emission in 2011 was nearly 10 billion metric tons – a tiny amount in comparison. The deep carbon reservoir discovered by Dr. Hier-Majumder will eventually make its way to the surface through volcanic eruptions, and contribute to climate change albeit very slowly.

“We might not think of the deep structure of the Earth as linked to climate change above us, but this discovery not only has implications for subterranean mapping but also for our future atmosphere,” concluded Dr Hier-Majumder, “For example, releasing only 1% of this CO2 into the atmosphere will be the equivalent of burning 2.3 trillion barrels of oil. The existence of such deep reservoirs show how important is the role of deep Earth in the global carbon cycle.”
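
To put those numbers side by side, here is a quick scale check. The reservoir size and 2011 emissions figure are as reported above; the ~0.43 metric tons of CO2 per barrel of oil is an assumed emission factor (in line with EPA equivalency figures), not a number from the article:

```python
# Scale check on the reported figures.
reservoir_t = 100e12          # up to 100 trillion metric tons of CO2 (upper mantle)
annual_emissions_t = 10e9     # ~10 billion metric tons emitted globally in 2011 (EPA)

print(f"Reservoir vs. one year of emissions: {reservoir_t / annual_emissions_t:,.0f}x")

# The article's 1% claim, checked against the assumed emission factor.
one_percent_t = 0.01 * reservoir_t   # 1 trillion metric tons of CO2
t_co2_per_barrel = 0.43              # assumption: ~0.43 t CO2 per barrel burned
barrels = one_percent_t / t_co2_per_barrel
print(f"1% of the reservoir ~= burning {barrels / 1e12:.1f} trillion barrels of oil")
```

Both of the article's comparisons fall out directly: a roughly 10,000-fold ratio to annual emissions, and the 2.3-trillion-barrel equivalence for releasing 1% of the reservoir.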

© Royal Holloway, University of London
Egham,
Surrey,
TW20 0EX
T: +44 (0)1784 434455
Accessed at https://www.royalholloway.ac.uk/aboutus/newsandevents/news/2017-articles/scientists-uncover-huge-1.8-million-km2-reservoir-of-melting-carbon.aspx on February 14, 2017.

.

Ancient tectonic plate blocks magma plume at Yellowstone, simulation shows

Scientists need new explanation for what fueled the North American supervolcano



morning glory hot spring SUPERVOLCANIC ORIGINS A rising plume of hot rock from Earth’s mantle isn’t fueling the Yellowstone supervolcano and its natural wonders (Morning Glory hot spring, shown), new research suggests. Julie Falk/Flickr (CC BY-NC 2.0)

By Thomas Sumner 6:00am, February 3, 2016

The supervolcano lurking under Yellowstone National Park may not have resulted from a rising plume of hot rock from the planet’s depths as previously suggested.

New simulations of North America’s underside reveal that the mantle plume blamed for powering the Yellowstone supervolcano is in fact cut off from the surface by the remnants of a lost tectonic plate. That plate keeps a lid on the plume and has prevented its heat from playing a significant role in the region’s long history of massive eruptions, geoscientists report in a paper to be published in Geophysical Research Letters.

That means scientists need a new explanation for Yellowstone’s origins, says study author Lijun Liu, a geodynamicist at the University of Illinois at Urbana-Champaign. “For Yellowstone, the plume is not a big deal at all,” he says.

© Society for Science & the Public 2000 – 2016. All rights reserved.
Accessed at https://www.sciencenews.org/article/ancient-tectonic-plate-blocks-magma-plume-yellowstone-simulation-shows on February 14, 2017.

.

Ventura fault could cause stronger shaking, new research finds

Researchers find that the fault has a staircase-like structure, which would result in stronger shaking and more damage during an earthquake



The figure shows the location of the Ventura-Pitas Point fault with respect to the cities involved. The view is of Southern California, as seen from the Pacific coast looking east. The thin white line is the coastline; the outlines of the Channel Islands can be seen off to the right (the south). The pink triangulated surface is the Ventura-Pitas Point fault. At the edge of it can be seen the stair-step cross-section, the flat part being under Santa Barbara.

Credit: Gareth J. Funning and Scott T. Marshall

Date:     February 14, 2017
Source:     University of California – Riverside

FULL STORY

Summary:

   The fault under Ventura, California, would likely cause stronger shaking during an earthquake and more damage than previously suspected, researchers warn. The Ventura-Pitas Point fault in Southern California has been the focus of much recent attention because it is thought to be capable of magnitude 8 earthquakes. It underlies the city of Ventura and runs offshore, and thus could generate tsunamis.

A new study by a team of researchers, including one from the University of California, Riverside, found that the fault under Ventura, Calif., would likely cause stronger shaking during an earthquake and more damage than previously suspected.

The Ventura-Pitas Point fault in southern California has been the focus of a lot of recent attention because it is thought to be capable of magnitude 8 earthquakes. It underlies the city of Ventura and runs offshore, and thus may be capable of generating tsunamis.

Since it was identified as an active and potentially dangerous fault in the late 1980s, there has been a controversy about its location and geometry underground, with two competing models.

Originally, researchers assumed the fault was planar and steeply dipping, like a sheet of plywood positioned against a house, to a depth of about 13 miles. But a more recent study, published in 2014, suggested the fault had a “ramp-flat geometry,” with a flat section between two tilting sections, similar to a portion of a staircase.

In a recently published paper in Geophysical Research Letters, a team of researchers used computer modeling to test the two alternatives.

In these computer models, the crust — the outermost layer of rock — in the Ventura-Santa Barbara region is represented as a three-dimensional volume, with the surfaces of the region’s faults as weaknesses within it. That volume is then “squeezed” at the rate and in the direction that the region is being squeezed by plate tectonics. When the movement predicted by each model was compared with GPS data, the fault with the staircase-like structure was favored.

That means more of the fault, which runs westward 60 miles from the city of Ventura, through the Santa Barbara Channel, and beneath the cities of Santa Barbara and Goleta, is closer to the surface. That would likely cause stronger shaking during an earthquake and more damage.

“Our models confirm that the Ventura-Pitas Point fault is a major fault that lies flat under much of the coast between Ventura and Santa Barbara,” said Gareth Funning, an associate professor of geophysics at UC Riverside and one of the authors of the study. “This means that a potential source of large earthquakes is just a few miles beneath the ground in those cities. We would expect very strong shaking if one occurred.”

Future research will address the consequences of there being a fault ramp under Ventura. Researchers now can run more accurate simulations based on the ramp model to predict where the shaking will be strongest, and whether they would expect a tsunami.

Story Source:
Materials provided by University of California – Riverside. Original written by Sean Nealon. Note: Content may be edited for style and length.
Copyright 2016 ScienceDaily or by third parties, where indicated. All rights controlled by their respective owners.
Content on this website is for information only. It is not intended to provide medical or other professional advice.
Views expressed here do not necessarily reflect those of ScienceDaily, its staff, its contributors, or its partners.
Accessed at https://www.sciencedaily.com/releases/2017/02/170214104252.htm on February 15, 2017.

 

.

Radiation

LaSalle Illinois Event 52547 SCRAM Hot Shutdown

LaSalle Illinois Nuclear Power Plant

NRC

http://www.nbcnews.com/id/42555888/ns/us_news-life/

 

Power Reactor Event Number: 52547
Facility: LASALLE Region: 3 State: IL
Unit: [1] [ ] [ ] RX Type: [1] GE-5,[2] GE-5
NRC Notified By: WAYNE CLAYTON HQ OPS Officer: JEFF ROTTON
Notification Date: 02/14/2017 Notification Time: 02:40 [ET]
Event Date: 02/13/2017 Event Time: 23:09 [CST]
Last Update Date: 02/14/2017 Emergency Class: NON EMERGENCY
10 CFR Section: 50.72(b)(2)(iv)(B) – RPS ACTUATION – CRITICAL
Person (Organization): PATRICIA PELKE (R3DO)

 

Unit: 1 | SCRAM Code: A/R | RX CRIT: Y | Initial PWR: 100 | Initial RX Mode: Power Operation | Current PWR: 0 | Current RX Mode: Hot Shutdown

 

Event Text

UNIT 1 AUTOMATIC REACTOR SCRAM DUE TO MAIN GENERATOR TRIP ON DIFFERENTIAL CURRENT

“This notification is being provided in accordance with 10CFR50.72(b)(2)(iv)(B).

“On February 13, 2017 at 2309 CST, Unit 1 Reactor Automatic Scram signal was received due to Turbine Control Valves Fast Closure. The turbine trip was due to the main generator trip on Differential Current. The ‘U’ safety relief valve actuated in the relief mode on the turbine trip, and has subsequently reset with tailpipe temperature returning to normal. The plant is in a stable condition with reactor pressure being maintained by the Turbine Bypass valves. Reactor water level is being controlled with feedwater. Investigation into the cause of the event is in progress.”

All control rods fully inserted on the scram. Unit 1 is in a normal shutdown electric plant alignment. Unit 2 was not affected.

The licensee notified the NRC Resident Inspector.

Page Last Reviewed/Updated Tuesday, February 14, 2017

Accessed at https://www.nrc.gov/reading-rm/doc-collections/event-status/event/2017/20170214en.html on February 17, 2017.

.

RADCON on 17 February 2017 Two of Concern-Watch

Radcon Two of Interest

Netc.com

© Copyright 2012-2014 Nuclear Emergency Tracking Center, LLC (netc.com). All information that is produced by netc.com websites belongs to Nuclear Emergency Tracking Center, LLC (netc.com).
https://netc.com/
NETC.COM   © 2014

 

  1. Station ID 1:74F5E72E.5   Kotzebue, AK, US

CPM: current 35 Low 7 High 42

Average 22

Farthest North station on Earth. MAZUR PRM 9000

Last updated: 2017-02-17 10:23:39 GMT-0600

 

  2. Station ID 1:EB59F343   Oak Harbor, WA, US

CPM: current 27 Low 4 High 34

Average 16

Last updated: 2017-02-17 10:23:57 GMT-0600

.

Constitution – Crony Capitalism

Worker missing after New Orleans pipeline explosion



Dozens of homes evacuated in New Orleans after a gas pipeline exploded Thursday evening. Map courtesy of St. Charles Parish.

By Daniel J. Graeber | Feb. 10, 2017 at 8:12 AM. Feb. 10 (UPI) —

One worker for Phillips 66 is unaccounted for and two others were injured after an explosion on a gas pipeline near New Orleans, the company said.

The company confirmed in a joint statement with officials at St. Charles Parish in New Orleans that a 20-inch gas pipeline exploded at the Paradis plant about 30 minutes outside of New Orleans.

Three Phillips 66 employees and three contractors were on site at the time of the explosion, which was first reported at around 7 p.m.

“Two of the contract workers were injured and taken to area hospitals for treatment,” the joint statement read. “One Phillips 66 employee remains unaccounted for.”

St. Charles Parish officials evacuated about 60 homes in and around the vicinity of the explosion. The company said the pipeline has been isolated, leaving whatever is left inside the pipeline to burn off.

Parish Sheriff Greg Champagne told reporters the fire was high-pressure and high-intensity. Phillips 66 said the pipeline was carrying raw natural gas liquids to the Paradis plant in Louisiana.

State police were quoted by the Times-Picayune as saying the cause of the fire remains under investigation. Champagne said he’s “not sure” if a valve or gasket failed on site.

Eight people died, dozens were injured and 38 homes were destroyed in the September 2010 explosion caused by the rupture of a 54-year-old gas line in San Bruno, Calif. Utility company Pacific Gas & Electric Co. paid more than $500 million to settle claims arising from the blast.

Copyright © 2017 United Press International, Inc. All Rights Reserved.
Accessed at http://www.upi.com/Energy-News/2017/02/10/Worker-missing-after-New-Orleans-pipeline-explosion/4831486728152/?utm_source=fp&utm_campaign=ts&utm_medium=23 on February 10, 2017.

.

Sheriff: Pipeline fire could burn for hours, maybe days

Sheriff New Orleans Pipeline fire could burn for hours maybe days

Written by: FOX8Live.com Staff
(WVUE) – Source: Facebook


Pipeline map (Source: Phillips 66)


Thursday, February 9th 2017, 7:45 pm CST
Friday, February 10th 2017, 6:12 am CST

Highway 90 and Highway 631 had been closed in both directions, and nearby residents were evacuated, according to St. Charles Parish Sheriff Greg Champagne.

The explosion happened just before 7 p.m. at a Phillips 66 pipeline.

A total of six workers from two companies – three from Phillips Petroleum and three from Blanchard Contractors – were cleaning the 20-inch line when a valve or gasket failed, Champagne said.

Two of the injured workers were hospitalized, one of whom was airlifted to a Baton Rouge burn unit. There was no further information available on their conditions and no further information on the missing worker.

The three other workers suffered minor injuries.

The Edward A. Dufresne Community Center in Luling was serving as a shelter. A total of 60 homes were evacuated. People living east of Highway 635 remained in the shelter. Residents living west of Highway 635 were allowed to return to their homes.

Champagne said the fire was 30-40 feet wide and just as high. He said it could burn for hours or even days, and that the safest course of action was to let it burn itself out.

“Number one, it’s too hot to get near,” he said. “It is a large blowtorch at this point. It is burning clean, but it is not safe to go near it.”

The sheriff said the liquid in the line was a highly volatile byproduct of natural gas production.

Copyright 2017 WVUE. All rights reserved.
Accessed at http://www.wafb.com/story/34471546/update-residents-being-evacuated-near-paradis-pipeline-fire on February 10, 2017.

.

Study Finds Rise in Methane in Pennsylvania Gas Country

Readings showed a rise in the potent greenhouse gas from 2012 to 2015, and the region’s boom in natural gas production is likely to blame.

extent of the Marcellus Basin in Pennsylvania

Swarthout et al.


 
By Zahra Hirji Feb 11, 2017


Researchers found a methane surge in a major Pennsylvania gas region between 2012 and 2015. Photo courtesy of J. Douglas Goetz

New research shows a recent three-year surge in methane levels in northeastern Pennsylvania, a hub of the state’s natural gas production.

After sampling the region’s air in 2012 and again in 2015, researchers found that methane levels had increased from 1,960 parts per billion in 2012 up to 2,060 in 2015, according to a study published Thursday in the journal Elementa: Science of the Anthropocene.

During that span, the region’s drilling boom slowed and natural gas production ramped up. The researchers said this shift in gas activity is possibly to blame for the spike in methane levels.

“The rapid increase in methane is likely due to the increased production of natural gas from the region which has increased significantly over the 2012 to 2015 period,” Peter DeCarlo, an assistant professor at Drexel University and a study author, said in a statement. “With the increased background levels of methane, the relative climate benefit of natural gas over coal for power production is reduced.”

Methane is a potent greenhouse gas. Its emissions have been hard for regulators to quantify, with the EPA only last year beginning to target reductions from oil and gas production.

Also last year, the Obama administration released new rules to reduce methane leakage, but the Trump administration has targeted many such rules for repeal.

Some states are also starting to find ways to reduce methane emissions from oil and gas activities. Colorado was the first state to adopt rules to control drilling-related methane emissions. Pennsylvania, the second-ranked state for natural gas production, is following suit. Democratic Gov. Tom Wolf last year launched a strategy to reduce the emissions from natural gas wells, compressor stations and pipelines.

DeCarlo and his colleagues drove around northeastern Pennsylvania in a van equipped with air monitoring equipment. They measured what’s called background concentrations of methane and other chemicals in August 2012. Researchers used a different van, and took a different driving route, for their monitoring expedition in August 2015.

“Every single background measurement in 2015 is higher than every single measurement in 2012,” DeCarlo told InsideClimate News. “It’s pretty statistically significant that this increase is happening.”

While most of the air samples were collected in different locations during the two research trips, there was some overlap. One of the areas that overlapped revealed a slightly higher increase in methane levels (an approximate increase of 125 ppb) than was observed across the full study area (about 100 ppb).
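
As a quick check on those figures, a sketch using only the values reported above:

```python
# Relative rise in background methane implied by the reported values.
ppb_2012 = 1_960   # background methane, August 2012
ppb_2015 = 2_060   # background methane, August 2015

overall_rise_ppb = ppb_2015 - ppb_2012             # ~100 ppb across the study area
relative_rise = overall_rise_ppb / ppb_2012 * 100  # ~5% in three years

overlap_rise_ppb = 125  # reported rise in the revisited overlap area
print(f"Study area: +{overall_rise_ppb} ppb ({relative_rise:.1f}% above 2012)")
print(f"Overlap area: +{overlap_rise_ppb} ppb")
```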

The study also showed that carbon monoxide levels decreased between 2012 and 2015. Researchers suggest this too is a possible result of the region’s transition away from so much gas development—which involves lots of truck traffic that can be a big source of carbon monoxide.

Zahra Hirji is a Boston-based reporter for InsideClimate News. She covers natural gas drilling, fossil fuel divestment, renewable energy policy and climate science, among other issues. She also runs ICN’s two news aggregations, Today’s Climate and Clean Economy Wire, as well as helps with multimedia presentations, social media and website upkeep. She has a degree in geology from Brown University and a masters degree in science writing from the Massachusetts Institute of Technology.
A Pulitzer Prize-winning, non-profit, non-partisan news organization dedicated to covering climate change, energy and the environment.
Accessed at https://insideclimatenews.org/news/10022017/methane-emissions-pennsylvania-natural-gas on February 16, 2017.

.

2016 National Clinic Violence Survey conducted by the Feminist Majority Foundation

National Clinic Violence Survey


EXECUTIVE SUMMARY

The 2016 National Clinic Violence Survey is the 14th comprehensive nationwide survey of women’s health clinics conducted since 1993 by the Feminist Majority Foundation. The Survey results provide new evidence that the misleading videos targeting Planned Parenthood, released by the anti-abortion group Center for Medical Progress (CMP) in the summer of 2015, have sparked a wave of violence and threats against abortion providers. The CMP videos have been thoroughly debunked; investigations in 13 states, and by three Congressional Committees, found no evidence of any wrongdoing on the part of Planned Parenthood. Tragically, in November 2015, Robert Lewis Dear, apparently influenced by the distortions in the videos, opened fire at a Planned Parenthood clinic in Colorado Springs, CO, killing three and wounding nine.

This incident and countless anecdotes of increasing levels of violence against providers across the country in the fall of 2015 led us to fear that anti-abortion extremists had been emboldened by the release of these videos. The results of the 2016 National Clinic Violence Survey, the first quantitative measure of nationwide violence recorded since the release of the CMP videos, corroborate these fears.

The Survey found that the percentage of clinics reporting the most severe types of anti-abortion violence and threats of violence has dramatically increased in the past two years, jumping to 34.2% of clinics in the first six months of 2016, up from 19.7% in the first six months of 2014. Some of the most frequent types of violence and threats were blocking access to and invasions of clinics, stalking, death threats, and bombing threats. One clinic reported that protestors repeatedly tell staff and doctors to “watch our backs” and “nobody cares when a murderer gets killed.”

The 2016 survey also measured the concentration of violence and threats against clinics. The proportion of clinics reporting high levels of violence, threats, and harassment (three or more types) increased from 13.5% in 2014 to 16.0% in 2016. Clinics reporting moderate levels of violence, threats, and harassment (one to two types) have increased as well, up to 33.5% from 29.8% in 2014. Incredibly, nearly half of all abortion providers in the country (49.5%) experienced some form of severe violence, threats of violence and harassment in 2016, up from 43.3% in 2014.

Clinics were asked how often they experience anti-abortion activity, including protests and demonstrations. Nearly 25% of clinics report they experience anti-abortion activity at their facility on a daily basis. Another 38.4% report that such activity occurs weekly. Thus, some 63.2% of women’s health clinics nationwide experience frequent and regular anti-abortion activity. An additional 27.9% of clinics report occasional activity. This means that the overwhelming majority of clinics (91.1%) report experiencing some type of anti-abortion activity in the first half of 2016, with 63.2% of providers experiencing activity at least once a week. By contrast, in 2014, 88% of clinics reported experiencing some type of anti-abortion activity.

Effective law enforcement response continues to be essential in preventing incidences of violence and harassment. Clinics that rated their experience with local law enforcement as “poor” or “fair” were significantly more likely to experience high levels of severe violence and harassment (47.0%) than those who rated local law enforcement as “good” or “excellent” (16%). Some clinics even responded that local law enforcement is a hindrance, rather than a help, to their clinics. One stated that, “They never arrest or do anything no matter what the [anti-abortion] protestors do…. the police department lets their officers’ religious beliefs interfere with doing their job as an officer of the law. It seems like they are officers of THEIR law.”

These survey results show a clear need for increased prosecution of anti-abortion extremists to counter the dramatic increase in levels of severe violence and threats. Furthermore, the incendiary rhetoric of the anti-abortion members of the House Select Committee in their final report, and the outcome of the 2016 election, is providing new fuel for already emboldened extremists. The potential for more violence remains high, especially in locations where lax or hostile law enforcement allows harassment and threats to go unchecked.

.

Preparation

Modeling sustainability: population, inequality, consumption, and bidirectional coupling of the Earth and Human Systems

Population and GDP per capita from year 1 to 2010 AD

Copyright © 2017 Oxford University Press

Safa Motesharrei, Jorge Rivas, Eugenia Kalnay, Ghassem R. Asrar, Antonio J. Busalacchi, Robert F. Cahalan, Mark A. Cane, Rita R. Colwell, Kuishuang Feng, Rachel S. Franklin
Natl Sci Rev (2016) 3 (4): 470-494.
DOI: https://doi.org/10.1093/nsr/nww081
Published: 11 December 2016
Oxford University Press, National Science Review, Chinese Academy of Sciences

Abstract

Over the last two centuries, the impact of the Human System has grown dramatically, becoming strongly dominant within the Earth System in many different ways. Consumption, inequality, and population have increased extremely fast, especially since about 1950, threatening to overwhelm the many critical functions and ecosystems of the Earth System. Changes in the Earth System, in turn, have important feedback effects on the Human System, with costly and potentially serious consequences. However, current models do not incorporate these critical feedbacks. We argue that in order to understand the dynamics of either system, Earth System Models must be coupled with Human System Models through bidirectional couplings representing the positive, negative, and delayed feedbacks that exist in the real systems. In particular, key Human System variables, such as demographics, inequality, economic growth, and migration, are not coupled with the Earth System but are instead driven by exogenous estimates, such as United Nations population projections. This makes current models likely to miss important feedbacks in the real Earth–Human system, especially those that may result in unexpected or counterintuitive outcomes and thus requiring different policy interventions from current models. The importance and imminence of sustainability challenges, the dominant role of the Human System in the Earth System, and the essential roles the Earth System plays for the Human System, all call for collaboration of natural scientists, social scientists, and engineers in multidisciplinary research and modeling to develop coupled Earth–Human system models for devising effective science-based policies and measures to benefit current and future generations.

Earth and Human System Models, population, migration, inequality, data assimilation, bidirectional couplings and feedbacks, sustainability

SIGNIFICANCE STATEMENT

The Human System has become strongly dominant within the Earth System in many different ways. However, in current models that explore the future of humanity and environment, and guide policy, key Human System variables, such as demographics, inequality, economic growth, and migration, are not coupled with the Earth System but are instead driven by exogenous estimates such as United Nations (UN) population projections. This makes the models likely to miss important feedbacks in the real Earth–Human system that may result in unexpected outcomes requiring very different policy interventions. The importance of humanity’s sustainability challenges calls for collaboration of natural and social scientists to develop coupled Earth–Human system models for devising effective science-based policies and measures.

HIGHLIGHTS

  • The Human System has become strongly dominant within the Earth System in many different ways.
  • Consumption, inequality, and population have increased extremely fast, especially since ~1950.
  • The collective impact of these changes threatens to overwhelm the viability of natural systems and the many critical functions that the Earth System provides.
  • Changes in the Earth System, in turn, have important feedback effects on the Human System, with costly and serious consequences.
  • However, current models, such as the Integrated Assessment Models (IAMs), that explore the future of humanity and environment, and guide policy, do not incorporate these critical feedbacks.
  • Key Human System variables, such as demographics, inequality, economic growth, and migration, are instead driven by exogenous projections, such as the UN population tables.
  • Furthermore, such projections are shown to be unreliable.
  • Unless models incorporate such two-way couplings, they are likely to miss important dynamics in the real Earth–Human system that may result in unexpected outcomes requiring very different policy interventions.
  • Therefore, Earth System Models (ESMs) must be bi-directionally coupled with Human System Models.
  • Critical challenges to sustainability call for a strong collaboration of both earth and social scientists to develop coupled Earth–Human System models for devising effective science-based policies and measures.
  • We suggest using Dynamic Modeling, Input–Output (IO) models, and Data Assimilation to build and calibrate such coupled models.

DESCRIPTION OF SECTIONS

The First Section, entitled ‘Dominance of the Human System within the Earth System’, describes major changes in the relationship between the Earth System and the Human System, and key Human System factors driving these changes.

The Second Section, entitled ‘Inequality, consumption, demographics, and other key Human System properties: projections vs. bidirectional coupling’, provides examples of fundamental problems in the exogenous projections of key Human System factors used in current models.

The Third Section, entitled ‘Human System threatens to overwhelm the Carrying Capacity and ecosystem services of Earth System’, describes examples of changes in the Earth System that may impact the Human System seriously, as well as missing feedbacks from the Earth System onto the Human System.

The Fourth Section, entitled ‘Bidirectional coupling of Human System and Earth System Models is needed. Proposed methodology: Dynamic Modeling, Input–Output models, Data Assimilation’, argues for the need to bidirectionally couple both systems in order to model the future of either system more realistically, and proposes practical methods to implement this coupling.

Dominance of the Human System within the Earth System

Humans impact the Earth System by extracting resources and returning waste and pollution to the system, and simultaneously altering land cover, fragmenting ecosystems, and reducing biodiversity.1

The level of this impact is determined by extraction and pollution rates, which in turn, are determined by the total consumption rate. Total consumption equals population multiplied by average consumption per capita, both of which are recognized as primary drivers of human environmental impact.2

 

Figure 1.

World population and GDP per capita from year 1 to 2010 AD. The total human impact is their product. The inset shows the relative annual change of each between 1950 and 2010 [228–230]. Their averages were 1.69% and 2.21%, respectively, out of a total of ~4% (average annual change in GDP). Therefore, growth in population and consumption per capita have both played comparably important roles in the remarkable increase of human impact on planet Earth. This ~4% total growth corresponds to doubling the total impact every ~17 years. Note that the contribution from population growth has been relatively steady, while the contribution from the relative change in GDP per capita has been much more variable from year to year (even negative for some years). See Footnote 2 for a description of the mathematical formula used to generate the inset and sources of data.



Figure 2.

Global inequality in greenhouse gas emissions (source: Worldometers.info).


Figure 3.

Relationship of the Human System within the Earth System, not separate from it (after [110]). The Earth System provides the sources of the inputs to, and the sinks that absorb the outputs of, the Human System.


The rapid expansion of the Human System has been a remarkably recent phenomenon (see Fig. 1). For over 90% of human existence, world population remained less than 5 million. After the Agricultural Revolution, it still took ~10,000 years to reach 1 billion, around the year 1800. About a century later, the second billion was reached, around 1930. Thereafter, in less than a century, 5 billion more humans were added (within a single human lifetime). The peak in the rate of growth occurred in the 1960s, but because the total population is now much larger, the absolute annual increase has remained near its peak since the early 1990s [1]. To go from 5 to 6 billion took ~12 years (1987–1999), and from 6 to 7 billion also took ~12 years (1999–2011). The decline in the rate of growth over the past few decades has not significantly reduced the absolute number currently added every year, ~80 million (e.g., 83 million in 2016), equivalent to the population of Germany [2,3].

A similar pattern holds for GDP per capita, but with the acceleration of growth occurring even more recently (see Fig. 1) and with the distribution of consumption becoming much more unequal. Thus, until the last century, population and GDP per capita were low enough that the Human System remained a relatively small component of the Earth System. However, both population and GDP per capita experienced explosive growth, especially after ~1950, and the product of these two growths—total human impact—has grown from relatively small to dominant in the Earth System.

Contrary to popular belief, these trends still continue for both population and consumption per capita. The world is projected to add a billion people every 13–15 years for decades to come. The current UN medium estimate expects 11.2 billion people by 2100, while the high estimate is 16.6 billion.3 Similarly, while the highest rates of growth of global per capita GDP took place in the 1960s, they are not projected to decline significantly from their current rates (so that estimates of annual global growth until 2040 range from 3.3% (IMF and IEA) to 3.8% (US EIA), implying a doubling time of ~20 years).4
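
The doubling times quoted here follow directly from compound growth, and readers can check them in a few lines; a minimal sketch, assuming the cited rates stay constant:

```python
import math

def doubling_time(annual_rate):
    """Years for a quantity growing at a constant compound rate to double."""
    return math.log(2.0) / math.log(1.0 + annual_rate)

# Growth rates cited in the text, assumed constant:
for rate in (0.033, 0.038, 0.04):
    print(f"{rate:.1%} per year -> doubles in ~{doubling_time(rate):.0f} years")
# 3.3% -> ~21 years and 3.8% -> ~19 years (the "~20 years" above); 4.0% -> ~18,
# or ~17.3 with the continuous-compounding shortcut ln(2)/0.04 used for Fig. 1.
```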

Two major factors enabled this demographic explosion. First, advances in public health, sanitation, and medicine significantly reduced mortality rates and lengthened average lifespan. Second, the rapid and large-scale exploitation of fossil fuels [4]—a vast stock of nonrenewable resources accumulated by Nature over hundreds of millions of years that are being drawn down in just a few centuries—and the invention of the Haber–Bosch process to use natural gas to produce nitrogen fertilizer [5,6] enabled increasingly higher levels of food and energy production. All these factors allowed for this fast growth of the Human System [7,8].5

Technological advances also allowed for the rapid increase in consumption per capita (see Fig. 1). This increase was made possible by a dramatic expansion in the scale of resource extraction, which, by providing a very large increase in the use of inputs, greatly increased production, consumption, throughput, and waste. During the fossil fuel era, per capita global primary energy and per capita global materials consumption have significantly increased over time. Despite tremendous advances in technological efficiencies, world energy use has increased for coal, oil, gas, and electricity since 1900, and global fossil fuel use per capita has continued to rise over the last few decades. There is no empirical evidence of reduction in use per capita, nor has there been an abandonment or long-term decline of one category through substitution by another. The same applies to global per capita use of materials in each of the major categories: industrial minerals, construction minerals, metal ores, and materials derived from fossil energy carriers.6

The rapidly growing size and influence of the Human System has come to dominate the Earth System in many different ways [9–12]. Estimates of the global net primary production (of vegetation) appropriated by humans range as high as 55%, and the percentage impacted, not just appropriated, is much larger [13–23]. Human activity has also had a net negative effect on total global photosynthetic productivity since the most productive areas of land are directly in the path of urban sprawl [24]. Human use of biomass for food, feed, fiber, fuel, and materials has become a primary component of global biogeochemical cycles of carbon, nitrogen, phosphorous, and other nutrients. Land use for biomass production is one of the most important stressors on biodiversity, while total biomass use has continued to grow and demand is expected to continue growing over the next few decades [25]. Global food demand alone has been projected to double or more from 2000 to 2050 due to both rising population and rising incomes [26]. Most agriculturally usable land has already been converted to agriculture [27]. Most large mammals are now domesticated animals [28,29]. Soils worldwide are being eroded, fisheries exhausted, forests denuded, and aquifers drawn down, while desertification due to overgrazing, deforestation, and soil erosion is spreading [30–33]. Deforestation, in turn, affects local climate through evapotranspiration and albedo [34,35]. Since climate change is expected to make subtropical regions drier, desertification is expected to further increase, especially due to bidirectional albedo–vegetation feedback [22].

At the same time, greenhouse gases (GHGs) from fossil fuels, together with land-use change, have become the major drivers of global climate change [10,36,37]. Atmospheric levels of carbon dioxide, methane, and nitrous oxide not only exceed pre-industrial concentrations by ~40%, 150%, and 20%, respectively, they are now substantially above their maximum ranges of fluctuation over the past 800,000 years7 [36] while total carbon dioxide emissions continue to grow at a rapid rate [38]. Arctic sea ice, Antarctic and Greenland ice sheets, global glacier mass, permafrost area, and Northern Hemisphere snow cover are all decreasing substantially, while ocean surface temperatures, sea level, and ocean acidification are rising [36]. Arctic sea ice is decreasing at an average rate of 3.0 ± 0.3 m² per metric ton of CO2 emissions, and at the current emissions rate of ~35 gigatons per year, September sea ice could disappear completely by around 2050 [39]. The rate of ocean acidification, in particular, is currently estimated to be at least 100 times faster than at any other time in the last 20 million years [12].
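
The sea-ice numbers can be sanity-checked with back-of-envelope arithmetic. Only the 3.0 m² per tonne rate and the ~35 gigatons per year come from the text above; the comparison value for recent September minima is a rough figure added here for context:

```python
# Back-of-envelope check of the Arctic sea-ice figures quoted above.
loss_per_tonne_m2 = 3.0           # m^2 of September sea ice lost per t of CO2
emissions_t_per_year = 35e9       # ~35 Gt CO2 emitted per year
years = 2050 - 2017

loss_km2 = loss_per_tonne_m2 * emissions_t_per_year * years / 1e6  # m^2 -> km^2
print(f"implied further loss by 2050: ~{loss_km2 / 1e6:.1f} million km^2")
# ~3.5 million km^2, comparable to recent September minima of roughly
# 4-5 million km^2, which is why a near-complete September loss by ~2050
# is consistent with the cited rate.
```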

The Human System dominates the global nitrogen cycle, having produced a 20% rise in nitrous oxide (N2O) in the atmosphere, now the third largest contributor to global warming, and a tripling of ammonia (NH3) in the atmosphere due to human activities [40]. In total, human processes produce more reactive nitrogen than all natural processes combined [12,41–45], altering the global nitrogen cycle so fundamentally that Canfield et al. [41] estimate the closest geological comparison occurred ~2.5 billion years ago. Nitrogen and phosphorus fertilizer runoff, along with nitrogen oxides from fossil-fuel combustion (which is then deposited by rain over land and water) are causing widespread eutrophication in rivers, lakes, estuaries, and coastal oceans, and creating massive Dead Zones with little or no oxygen, which are increasing in number and size in coastal waters and oceans globally, killing large swaths of sea life and damaging or destroying fisheries. This may be compounded further by potentially dangerous positive feedbacks between hypoxia, ocean acidification, and rising sea temperatures [12,41,46–52].

Human activities also dominate many regional hydrological cycles [53–60], with more than half of all accessible surface freshwater being used by humans, to such an extent that some major rivers are being so excessively depleted that they sometimes no longer reach the sea, while some major inland fresh and salty water bodies, such as Lake Chad, Lake Urmia, Lake Poopó, and Aral Sea, are drying up. In many of the principal aquifers that support the world’s agricultural regions and in most of the major aquifers in the world’s arid and semi-arid zones, groundwater extraction is occurring at far greater rates than natural recharge. This includes aquifers in the US High Plains and Central Valley, the North China Plain, Australia’s Canning Basin, the Northwest Sahara Aquifer System, the Guarani Aquifer in South America, and the aquifers beneath much of the Middle East and northwestern India [61–68]. Climate change can increase the frequency and severity of extreme weather events, such as hot and cold temperature extremes, heat waves, droughts, heavy precipitation, tropical cyclones, and storms [69–74]. In addition, more impervious surfaces together with other land cover changes increase runoff significantly, hence intensifying the adverse impacts of extreme hydrological events [75–77].

Many other socioeconomic trends and their impacts on the Earth System have accelerated synchronously since the 1950s with little sign of abatement [78,79]. For example, human processes play a major role in virtually every major metal cycle, leading to atmospheric and direct contamination of terrestrial and aquatic environments by trace-metal pollutants. Coal combustion is the major source of atmospheric Cr, Hg, Mn, Sb, Se, Sn, and Tl emissions, oil combustion of Ni and V, and gasoline combustion of Pb, while nonferrous metal production is the largest source of As, Cd, Cu, In, and Zn [80]. Surface mining has also become a dominant driver of land-use change and water pollution in certain regions of the world, where mountaintop removal, coal and tar sands exploitation, and other open pit mining methods strip land surfaces of forests and topsoils, produce vast quantities of toxic sludge and solid waste, and often fill valleys, rivers, and streams with the resulting waste and debris [81]. All of these trends can have an impact on other species, and while the exact causes are difficult to establish, current animal and plant species extinction rates are estimated to be at least 100 times the natural background rate [33,82–84]. Furthermore, ecosystems worldwide are experiencing escalating degradation and fragmentation, altering their health and provision of important ecosystem functions and services for humans and other species.8

Thus, the Human System has fundamentally impacted the Earth System in a multitude of ways. But as we will show, these impacts on the Earth System also feed back onto the Human System through various factors and variables: human health, fertility, well-being, population, consumption, economic growth, development, migration, and even societal conflict. Rather than incorporating these feedbacks, current models simply use independent projections of these Human System variables, often in a highly unreliable way.

Inequality, consumption, demographics, and other key Human System properties: projections vs. bidirectional coupling

These large human impacts on the Earth System must be considered within the context of the large global economic inequality to realize that current levels of resource extraction and throughput only support societies at First World living standards for ~17% of the world’s current population [1]. The majority of the world’s people live at what would be considered desperate poverty levels in developed countries, the average per capita material and energy use in developed countries is higher than in developing countries by a factor of 5 to 10 [25], and the developed countries are responsible for over three quarters of cumulative greenhouse gas emissions from 1850 to 2000 [85]. To place global resource-use inequality into perspective, it would require global resource use and waste production at least 2 to 5 times higher than it is now to bring the average levels of the ~83% of the world’s people living in developing countries up to the average levels of developed countries today [25]. The near-tripling in CO2 emissions per capita in China from just 1990 to 2010 demonstrates the similar potential increases that could take place in the less developed world if this economic disparity is reduced [86]. Despite making China a focus of global concern because it became the single largest energy user and carbon emitter, China’s 2010 per capita energy use (1.85 metric tons of oil equivalent, toe) was actually still below the world average (1.87 toe) [87,88]. The rest of the developing world, with the potential to reach similar levels of per capita emissions, has more than three times China’s population. Furthermore, China’s 2010 per capita carbon emissions (6.2 metric tons) were still only about a third of the US (17.6 metric tons) [89], indicating much more growth can still occur [86].

However, overall inequality in resource consumption and waste generation is greater than what these comparisons between countries demonstrate since resource consumption within countries is skewed towards higher income groups. In some countries, the Gini coefficient for carbon emissions (e.g., 0.49 in China and 0.58 in India [90]) is actually higher than the Gini coefficient for income (0.42 and 0.34 in 2010, respectively [91]). Rather than disaggregating resource consumption within countries, using national per capita GDP calculations and projections provides a distorted understanding of the distribution and characteristics of resource use and waste generation. Consumption patterns and associated per capita shares of resource use and pollution differ enormously, and using a consumption-based calculation rather than a national territorial production-based approach demonstrates even further the extent of global economic and environmental inequality: about 50% of the world’s people live on less than $3 per day, 75% on less than $8.50, and 90% on less than $23 (US$ at current purchasing power parity). The top 10%, with 27.5 metric tons of GHG emissions per capita, produce almost as much total GHG emissions (46% of global total) as the bottom 90% combined (54%), with their per capita GHG emission of only 3.6 metric tons [87,90] (see Fig. 2).
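
For readers unfamiliar with the Gini coefficients quoted above, a minimal sketch of how such a coefficient is computed from any per-capita distribution; the ten emission values are invented purely for illustration:

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values
    (0 = perfect equality, 1 = maximal inequality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum / cum[-1])) / n

# Ten invented per-capita emission values (tons CO2), low to high:
emissions = [0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 12, 30]
print(f"Gini of this toy distribution: {gini(emissions):.2f}")  # ~0.59
```

The same function applied to an income distribution gives the income Gini, which is how emissions inequality and income inequality can be compared on one scale.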

Furthermore, even if per capita emissions stabilize or decline in the developed countries, population growth in these wealthy countries will remain a major driver of future increases in resource use and emissions. While some consider population growth a serious issue only in very poor countries, large population growth is still projected for some of the wealthiest countries today. For example, the US, Canada, Switzerland, Sweden, Norway, Australia, and New Zealand are each projected to grow by about an additional 40%–80%.9 Population growth in the developed countries is likely to be much higher than these UN estimates project because these estimates include arbitrary projections of very low future immigration from less developed countries.10 A model that uses these UN population projections incorporates these arbitrary and unrealistic assumptions into its own projections, undermining its reliability. This is one reason why it is essential to project demographic variables endogenously within the models. One result of such unrealistically low projections of future migration to the developed countries is to produce lower estimates of future total emissions of the developed countries, which means the developed countries are not required to make as much effort today to lower their own emissions.11

Furthermore, the UN projections of a relatively stable population for the whole of the developed world depend on dramatic, and highly unlikely, declines projected in a handful of key countries. Japan, for example, must decline by ~34%, Germany by ~31%, and Russia by ~30% for the projected stability in total developed country population to be borne out.12 In addition, countries often highlighted for their low birth rates, like Italy and Spain, are not projected to decline by even 1% for decades. Small increases in fertility and/or immigration could extend that period for decades longer. Even without those increases in fertility or immigration, the total population of developed countries is not projected by the UN to peak until about 2050, and trajectories beyond that are very difficult to predict given their high dependence on future policies.13

Since population is stabilizing in some countries, it is often thought that the human population explosion of the 20th century (growing by a factor of ~4) is over, and since rates of growth have been declining, family planning and population growth are no longer concerns. However, the UN projections of a stabilization in global population [3] are so far into the future (after 2100) that the projections are unreliable [8] and global stabilization itself is highly uncertain [92]. In fact, alternate projection methodologies suggest with much more certainty that stabilization is unlikely to occur [93]. Furthermore, even the UN projections are based on large assumed decreases in fertility rates in much of the world. If those projected decreases in fertility rates are off by only 0.5 births per woman (an error of less than 10% in many high-fertility countries), the date at which the world reaches 11 billion will occur five decades earlier, and the global total population by 2100 will be nearly 17 billion and still rapidly growing [3]. Current projections should be understood in the context of past projections that have overestimated fertility declines.14 Past mistaken projections reflect the use of highly questionable assumptions about fertility rate declines in developing countries [92] that tend to reproduce a ‘natural’ decline following the trajectory of more developed countries. Yet, the empirical record shows that reductions (or increases) in fertility rates reflect a complex range of sociodemographic, economic, and policy conditions [94–100]. If those conditions are not present, the projected declines will not necessarily occur, as can be seen in countries like Niger (7.4 births per woman in 1970, 7.6 in 2012) [101,102]. Needless to say, the use of demographic projections with overestimated fertility declines again produces underestimated projections of future resource use and emissions.
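
To see why a half-birth error compounds so dramatically, here is a deliberately crude sketch. The growth-rate shortcut, the generation length, and the starting population are all illustrative assumptions with no relation to the UN's cohort-component method; the point is only the qualitative sensitivity:

```python
import math

def project(pop0, tfr, years, gen_length=28.0, replacement=2.1):
    """Crude projection: growth rate inferred from fertility via
    r = ln(TFR / replacement) / generation length. Illustrative only."""
    r = math.log(tfr / replacement) / gen_length
    return pop0 * math.exp(r * years)

pop_2015 = 7.3e9
for tfr in (2.5, 3.0):  # a half-birth difference in assumed world fertility
    print(f"TFR {tfr}: ~{project(pop_2015, tfr, 85) / 1e9:.1f} billion by 2100")
# The absolute numbers are not calibrated; what matters is that 0.5
# births per woman moves the end-of-century total by many billions.
```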

Even with these assumptions of large fertility declines in the projections, each additional billion humans added will not be spread evenly across the planet. The vast majority of the growth will be concentrated in countries that today continue to have very high rates of fertility (~25 countries above 5, ~45 above 4, and ~65 above 3). These countries are also the lowest income countries in the world, with the lowest resource use per capita, which means that the majority of population growth is taking place precisely in regions with the highest potential for growth in resource use per capita over the coming decades [87], and the growth of the total impact is the product of the two. For example, the lowest-income continent, Africa, which had 230 million people in 1950 and over a billion in 2010 (a 4-fold increase in one lifetime), is currently on track to add another 3 billion in the next 85 years (another 4-fold increase in one lifetime). Nigeria, by itself, is projected to reach almost 1 billion people. These high projections already assume very large decreases in fertility rates from their current average levels of approximately 5 children/woman in Sub-Saharan Africa. (Without this projected decrease in fertility rates, Africa alone would add 16 billion rather than 3 billion more people by 2100 [3].) The UN medium range projections show that the developing world (not including China) will grow by an additional 2.4 billion people in just the next three and a half decades, and a total of an additional 4 billion by 2100. Due to these uneven population growth rates, ~90% of the world’s population will be living in today’s less developed countries, with most of the growth in the poorest of these countries [3].15

Current projections of future resource use and greenhouse gas emissions used in the Intergovernmental Panel on Climate Change (IPCC) reports and Integrated Assessment Models (discussed further in the third Section) also depend heavily on a continuation of high levels of global economic inequality and poverty far into the future. Projections that global resource use and emissions will not rise very much due to rapid population growth in the poorest countries are based on the assumption that those countries will remain desperately poor by the standards of developed countries. (This assumption again provides the added benefit for today’s wealthy countries that the wealthy countries have to make less effort today to reduce their own emissions. Given total global carbon emissions targets, projections of low economic growth in poor countries translate into less stringent carbon reduction requirements in wealthy countries [103,104].) However, China’s recent rapid rise in emissions per capita shows this is a potential future path for the rest of the developing world. To argue otherwise requires assuming that today’s developing countries will remain in desperate poverty, and/or will adopt technologies that the developed countries themselves have yet to adopt. Real-world CO2 emissions have tracked the high end of earlier emissions scenarios [105], and until the currently wealthy countries can produce a large decline in their own emissions per capita, it is dubious to project that emissions per capita in the less developed countries will not continue on a trajectory up to the levels of currently wealthy countries.16

As we will show, all of these points raise the critical issue of the accuracy and reliability of the assumptions underlying the projections of key Human System variables, such as inequality, population, fertility, health, per capita GDP, and emissions per capita. These are the central elements in many of the standard models used to explore the future of humanity and the environment, such as various IAMs used to guide policy, and models used by the IPCC whose output guides international negotiations on energy use and emissions. Common to these models is their deficiency in capturing dynamic bidirectional feedbacks between key variables of the Human System and the Earth System; instead, they simply use independent projections of Human System variables in Earth System Models.

Human System threatens to overwhelm the Carrying Capacity and Ecosystem Services of Earth System

Economic theories that endorse limitless growth are based on a model of the economy that, in essence, does not account for the resource inputs and waste-absorption capacities of the environment, and the limitations of technological progress and resource substitutability. In other words, these theories essentially model the economy as a perpetual motion machine [106]. But in the real world, economic activity both consumes physical material and energy inputs and produces physical waste outputs. The Earth System performs the functions (‘ecosystem services’) of providing both the sources of these material and energy inputs to the human economy, as well as the sinks which absorb and process the pollution and waste outputs of the human economy. See Figs 3 and 4 [20,106–109]. Since the scale of the Human System has grown dramatically relative to the Earth System’s capacity to provide these ecosystem services, the problems of depletion and pollution have grown dramatically.


Figure 4.

Growth of the Human System has changed its relationship with the Earth System and thus both must be modeled interactively to account for their feedback on each other.


Herman Daly has pointed out that the magnitude and growth rates of resource input and waste output are not sustainable: ‘We are drawing down the stock of natural capital as if it was infinite’ [110]. For example, the rapid consumption of fossil fuels is releasing vast stocks of carbon into the atmospheric and ocean sinks at a rate about a million times faster than Nature accumulated these carbon stocks.17 Furthermore, while certain sectors of global society have benefited from the growth of the past 200 years, the environmental consequences are global in their impact, and the time scale of degradation of the resulting waste products (e.g., emissions, plastics, nuclear waste, etc.) is generally much longer than they took to produce or consume. Contrary to some claims within the field of Economics, physical laws do place real constraints on the way in which materials and energy drawn from the Earth System can be used and discharged by the Human System [111–114]. To be sustainable, human consumption and waste generation must remain at or below what can be renewed and processed by the Earth System.

It is often suggested that technological change will automatically solve humanity’s environmental sustainability problems [115–117]. However, there are widespread misunderstandings about the effects of technological change on resource use. There are several types of technological changes, and these have differing effects on resource use. While some changes, such as in efficiency technologies, can increase resource-use efficiency, other changes, such as in extraction technologies and consumption technologies, raise the scale of resource extraction and per capita resource consumption. In addition, absent policy interventions, even increased technological efficiencies in resource use are often compensated for by increases in consumption associated with the ‘Rebound Effect’ [118–121]. Furthermore, advances in production technologies that appear to be increases in productivity are instead very often due to greater energy and material inputs, accompanied by greater waste outputs. While the magnitude and even the sign of the effect of technological change on resource use vary greatly, the empirical record shows that the net effect has been a continued increase in global per capita resource use, waste generation, and emissions despite tremendous technological advances.

While per capita emissions of developed countries appear to be stabilizing when measured within the country of production, this is largely due to the shift in the location of energy-intensive manufacturing to developing countries, and estimates of developed countries’ per capita emissions measured based on their consumption show that they continued to grow [122–130]. Even if today’s industrialized societies can stabilize their resource use (at probably already unsustainable levels), large parts of the world are in the midst of this industrial transition and the associated technological regime shift from agrarian to industrial society that greatly increases per capita resource use and waste production [131,132].18 For example, from just 1971 to 2010, total primary energy use per capita in Korea increased by a factor of 10, from 0.52 to 5.16 toe [88]. Therefore, technological advancement can add to the problems of global sustainability, rather than solve them. The question is not only whether technology can help solve environmental sustainability challenges, but also what policies and measures are required to develop the right technologies and adopt them in time. Technological advances could, and should, be part of the solutions for environmental and sustainability problems, for example, by transitioning to renewables, increasing use efficiency, and fostering behavioral changes to cut resource use and emissions, particularly in the high-resource-using countries. But these technological solutions do not just happen by themselves; they require policies based on scientific knowledge and evidence to guide and support their development and adoption.

An important concept for the scientific study of sustainability is Carrying Capacity (CC), the total consumption—determined by population, inequality, and per capita consumption—that the resources of a given environment can maintain over the long term [110,133].19 Consumption of natural resources by a population beyond the rate that nature can replenish overshoots the Carrying Capacity of a given system and runs the risk of collapse. Collapses in population are common in natural systems and have occurred many times in human societies over the last 10,000 years.20

To study key mechanisms behind such collapses, Motesharrei et al. [134] developed a two-way coupled Human and Nature Dynamic model, by adding accumulated wealth and economic inequality to a predator–prey model of human–nature interaction. The model shows that both economic inequality and resource overdepletion can lead to collapse, in agreement with the historical record. Experiments presented in that paper for different kinds of societies show that as long as total consumption does not overshoot the CC by too much, it is possible to converge to a sustainable level. However, if the overshoot is too large, a collapse becomes difficult to avoid (see Fig. 5).21 Modern society has been able to grow far beyond Earth’s CC by using nonrenewable resources such as fossil fuels and fossil water. However, results from the model show that an unsustainable scenario can be made sustainable by reducing per capita depletion rates, reducing inequality to decrease excessive consumption by the wealthiest, and reducing birth rates to stabilize the population [134]. The key question is whether these changes can be made in time.
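
A minimal sketch in the spirit of such a coupled human–nature model can make the mechanism concrete. The equations and every parameter value below are simplifications invented for this sketch, not the published model of [134]:

```python
# Population x, regenerating nature y, accumulated wealth w.
def step(x, y, w, dt=0.1,
         beta=0.03,    # birth rate
         alpha=0.01,   # base death rate (quintupled once wealth is exhausted)
         delta=5e-5,   # per-capita depletion of nature
         gamma=0.01,   # regeneration rate of nature
         lam=100.0,    # nature's carrying capacity
         kappa=5e-5):  # per-capita consumption paid out of wealth
    death = alpha * (5.0 if w <= 0 else 1.0)
    dx = (beta - death) * x
    dy = gamma * y * (1.0 - y / lam) - delta * x * y
    dw = delta * x * y - kappa * x
    return x + dx * dt, max(y + dy * dt, 0.0), max(w + dw * dt, 0.0)

x, y, w = 100.0, 100.0, 0.0
peak = 0.0
for _ in range(20000):  # integrate 2000 time units with Euler steps
    x, y, w = step(x, y, w)
    peak = max(peak, x)
print(f"peak population ~{peak:.0f}; final population ~{x:.0f}")
# With these parameters the population overshoots, depletes nature and
# wealth, and crashes: qualitatively like the type-N trajectory of Fig. 5.
```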


Figure 5.

Example results from [134]. (Left panel) A type-N (Nature) collapse due to both overdepletion of Nature and inequality. (Right panel) A type-L (Labor) collapse: after an apparent equilibrium, population collapses due to overexploitation of Labor, although Nature eventually recovers.


Current models of climate change include sea level rise, land degradation, regional changes in temperature and precipitation patterns, and some consequences for agriculture, but without modeling the feedbacks that these significant impacts would have on the Human System, such as geographic and economic displacement, forced migration, destruction of infrastructure, increased economic inequality, effects on nutrition, fertility, and mortality, conflicts, and the spread of diseases and other human health consequences [135,136].

For example, nearly all features of the hydrologic system are now impacted by the Human System [60] with important feedbacks onto humans, e.g., snowpack decline due to climate change [53] reduces water availability; agricultural processes further affect water availability and water quality [54]; and land-use changes can reduce groundwater recharge [77]. Thus many populations face both reduced water availability and increased flood frequency and magnitude [76]. Furthermore, chemicals used in hydraulic fracturing, which involves cracking shale rock deep underground to extract oil and gas, can contaminate groundwater resources [137–139]. In addition, the injection of wastewater from oil and gas operations for disposal into deep underground wells is also altering the stresses of geologic faults, unleashing earthquakes [140].22 Increases in the frequency and magnitude of extreme weather events can impact agriculture and ecosystems [71,141].

Changes in the structure and functions of the ecosystem can also pose important threats to human health in many different ways [142]. Climate, climatic events (e.g., El Niño), and environmental variables (e.g., water temperature and salinity) can play a fundamental role in the spread of diseases [143–146]. A recent report by the US Global Change Research Program illustrates how climate change could affect human health through various processes and variables such as temperature-related death and illness, air quality, extreme events, vector-borne diseases, water-related illness, food safety and nutrition, and mental health and well-being [147].23 Environmental catastrophes can result in the decline of national incomes for a few decades [148], and higher temperatures can severely affect human health [149] and reduce economic productivity [150]. This effect is in addition to the well-established reduction in agricultural yields due to higher temperatures [151–156]. Climate could also be a strong driver of civil conflicts [157–161]. Environmental change is also a known trigger of human migration [162–164]. These, in turn, will significantly increase the unrealistically small future migration projections described in the second Section of this paper. In fact, climate change alone could affect migration considerably through the consequences of warming and drying, such as reduced agricultural potential, increased desertification and water scarcity, and other weakened ecosystem services, as well as through sea level rise damaging and permanently inundating highly productive and densely populated coastal lowlands and cities [165–168]. Furthermore, the impacted economic activities and migration could then feed back on human health [169]. Bidirectional coupling is required to include the effects of all of these feedbacks.

Bidirectional coupling of Human System and Earth System Models is needed. Proposed methodology: dynamic modeling, Input–Output models, Data Assimilation

Coupled systems can reveal new and complex patterns and processes not evident when studied separately. Unlike systems with only unidirectional couplings (or systems that just input data from extrapolated or assumed projections), systems with bidirectional feedbacks often produce nonlinear dynamics that can result in counterintuitive or unexpected outcomes [170]. Nonlinear systems often feature important dynamics which would be missed if bidirectional interactions between subsystems are not modeled. These models also may call for very different measures and policy interventions for sustainable development than those suggested by models based on exogenous forecasts of key variables.

The need for bidirectional coupling can be seen from the historical evolution of Earth System modeling. In the 1960s, atmospheric scientists developed the first mathematical models to understand the dynamics of the Earth’s climate, starting with atmospheric models coupled to simple surface models (e.g., [171]). Over the following decades, new components such as ocean, land, sea-ice, clouds, vegetation, carbon, and other chemical constituents were added to make Earth System Models more physically complete. These couplings needed to be bidirectional in order to include feedbacks [171]. The importance of accounting for bidirectional feedbacks is shown by the phenomenon of El Niño-Southern Oscillation (ENSO), which results from the coupled dynamics of the ocean–atmosphere subsystems. Until the 1980s, atmospheric and ocean models were unidirectionally coupled (in a simple, ‘one-way’ mode): the atmospheric models were affected by sea surface temperatures (SST) but could not change them, and ocean models were driven by atmospheric wind stress and surface heat fluxes, but could not change them. Such unidirectional coupling could not represent the positive, negative, and delayed feedbacks occurring in nature that produce ENSO episodes. Zebiak and Cane [172] developed the first prototype of a bidirectionally coupled ocean–atmosphere model. This model, for the first time, allowed prediction of El Niño several seasons in advance [173]. Similarly, improving the modeling of droughts requires bidirectional coupling of the atmosphere and land submodels (see, for example, [174]). Most current climate models have since switched to fully coupled atmosphere–ocean–land–ice submodels. This example shows that we can miss very important possible outcomes if a model fails to consider bidirectional feedbacks between the coupled components of the systems it represents. Since the Human System has become dominant, it is essential to couple the Earth and Human Systems models bidirectionally in order to simulate their positive and negative feedbacks, better reflecting interactions in the real world.24
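
The ENSO lesson can be reproduced with a toy two-variable system (not a real ocean–atmosphere model; the coefficients are arbitrary). With feedback in both directions the pair oscillates; cutting the ocean-to-atmosphere link leaves only a featureless decay, which is exactly the qualitative difference between bidirectional and unidirectional coupling described above:

```python
# T is an "ocean" anomaly, U an "atmosphere" anomaly; coefficients arbitrary.
def run(two_way, steps=2000, dt=0.05):
    T, U = 1.0, 0.0
    trace = []
    for _ in range(steps):
        dT = -0.1 * T + 1.0 * U                        # atmosphere forces ocean
        dU = (-1.0 * T if two_way else 0.0) - 0.1 * U  # ocean feeds back (or not)
        T, U = T + dT * dt, U + dU * dt
        trace.append(T)
    return trace

for mode in (True, False):
    trace = run(mode)
    label = "two-way coupling" if mode else "one-way coupling"
    print(label, "->", "oscillates" if min(trace) < -0.1 else "just decays")
```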

This coupling process has taken place to a certain extent, but the coupling does not include bidirectional feedbacks between the Human and Earth Systems. Energy and Agriculture sectors have been added to ESMs, creating comprehensive Integrated Assessment Models. There are now several important, advanced IAMs, including MIT’s IGSM, US DOE’s GCAM, IIASA’s MESSAGE, the Netherlands Environmental Assessment Agency’s IMAGE, etc.25 [175–180]. However, in the IAMs, many of which are used in producing the IPCC reports, population levels are obtained from a demographic projection like the UN’s population projections discussed in the first and second Sections, which do not include, for example, impacts that climate change may have on the Human System [181].

In today’s IAMs, tables of projected demographic and socioeconomic variables determine changes in resource use and pollution/emission levels, which in turn can determine Earth System variables such as atmospheric temperature. However, changes in resource levels, pollution, temperature, precipitation, etc. estimated by the IAMs cannot, in turn, impact levels of these Human System variables and properties because they are exogenous to the IAMs. This is true even for the scenarios of the IPCC, which are constructed without full dynamic coupling between the human and natural systems. Although there are certain IAMs that include some couplings within human subsystems, critical feedbacks from natural systems onto demographic, economic, and human health variables are missing, so that the coupling between human subsystems and the Earth System in most current modeling is unidirectional, whereas in reality, this coupling is bidirectional [181].

Without including such coupled factors, the independent projections of population levels, economic growth, and carbon emissions could be based on inconsistent or contradictory assumptions, and hence could be inherently incorrect. For example, the UN’s population projections assume fertility declines in today’s lowest-income countries that follow trajectories established by higher-income countries. However, the economic growth projections and, therefore, per capita carbon emission projections, assume today’s poorest countries will not grow anywhere near the level of today’s wealthy countries. The 45 lowest income countries are projected to grow to ~$6,500 GDP per capita by 2100 [104], but the average GDP per capita of today’s high-income countries is ~$45,000. Rather than relying on projections, the demographic component of any coupled model must include the factors that contributed to the demographic transition in other countries, such as education, family planning, health care, and other government policies and programs [98,182–184].

Projections of key variables in a realistic model should not be based on mechanistic temporal extrapolations of those variables but rather on the intrinsic internal dynamics of the system. Specifically, key parameters of the Human System, such as fertility, health, migration, economic inequality, unemployment, GDP per capita, resource use per capita, and emissions per capita, must depend on the dynamic variables of the Human–Earth coupled system.26 Not including these feedbacks would be like trying to make El Niño predictions using dynamic atmospheric models but with sea surface temperatures as an external input based on future projections independently produced (e.g., by the UN) without feedbacks.

To address the above issues, coupled global Human–Earth System frameworks that capture and represent the interactive dynamics of the key subsystems of the coupled human–nature system with two-way feedbacks are urgently needed. The global Human–Earth System framework we propose, and represent schematically in Fig. 6, combines not only data collection, analysis techniques, and Dynamic Modeling, but also Data Assimilation, to bidirectionally couple an ESM containing subsystems for Global Atmosphere, Land (including both Land–Vegetation and Land-Use models) and Ocean and Ice, to a Human System Model with subsystems for Population Demographics, Water, Energy, Agriculture, Industry, Construction, and Transportation. The Demographics subsystem includes health and public policy factors that influence key variables such as fertility, disease, mortality, and immigration rates, while the Water, Energy, Agriculture, Industry, Construction, and Transportation subsystems include cross-national input–output modeling to provide the consumption-based resource-input and waste-output ‘footprint’ accounting analyses missing from the territorial-based methods.27
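
As one concrete building block of such a framework, the consumption-based ‘footprint’ accounting mentioned above is classically done with Leontief input–output algebra. A minimal sketch with a made-up three-sector economy; the coefficient matrix, final demand, and emission intensities are all invented for illustration:

```python
import numpy as np

A = np.array([[0.10, 0.20, 0.05],   # technical coefficients: inputs each
              [0.15, 0.05, 0.10],   # sector needs from the others per unit
              [0.05, 0.10, 0.05]])  # of its own output
y = np.array([100.0, 50.0, 80.0])   # final demand by sector
e = np.array([0.9, 0.3, 0.1])       # tCO2 per unit of gross output

# Gross output required to satisfy final demand: x = (I - A)^-1 y
x = np.linalg.solve(np.eye(3) - A, y)

territorial = e * x                             # emissions where produced
multipliers = e @ np.linalg.inv(np.eye(3) - A)  # embodied emissions per unit of demand
footprint = multipliers * y                     # emissions attributed to final consumers

print("territorial:", territorial.round(1), "total:", round(territorial.sum(), 1))
print("footprint:  ", footprint.round(1), "total:", round(footprint.sum(), 1))
# The totals match; only the attribution across sectors (or, in the
# cross-national case, across countries) changes.
```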


Figure 6.

Schematic of a Human–Earth System with drivers and feedbacks.

The need for global Earth System frameworks coupled to population drivers has been recognized since the 1980s, beginning with the pioneering report by the Earth System Sciences Committee of the NASA Advisory Council, chaired by Francis P. Bretherton [185]. While the report described the Earth System components and needed feedbacks, and showed the interaction of the Earth System with the Human System, it did not fully include the multiple subsystems of the Human System or the feedbacks between those subsystems and with the Earth System. The coupled framework proposed here includes the major subsystems and components of the Human System as well as their major feedbacks and risks, and explicitly recognizes policies as a major driver of the full system.

ESMs have long been based on integrating their dynamical equations numerically (e.g., numerical weather prediction models used by Weather Services). For Human Systems, Dynamic Modeling is also a powerful tool that scientists have successfully applied to many systems across a range of economic, social, and behavioral modeling [113,186–189]. The ability of dynamic models to capture various interactions of complex systems, their potential to adapt and evolve as the real system changes and/or the level of the modelers’ understanding of the real system improves, their ability to model coupled processes of different temporal and spatial resolutions and scales, and their flexibility to incorporate and/or couple to models based on other approaches (such as agent-based modeling, stochastic modeling, etc.) render them a versatile and efficient tool for modeling coupled Earth–Human Systems [190–193]. Since ESMs are already dynamic, a dynamic modeling platform would be a natural choice to model coupled Earth–Human Systems. Figure 7 is a schematic showing an example of a model to couple energy and water resources at the local and regional scales to human population.28


Figure 7.

An example of a model schematic integrating water and energy resources with human population and health at the local and regional scales, coupled to an ESM at the global scale.


A major challenge in implementing such a framework of a coupled Earth–Human System is that it requires tuning many parameters to approximately reproduce its past observed evolution. This task has become easier over the last decade with the development of advanced methods of Data Assimilation commonly used in atmospheric sciences to optimally combine a short forecast with the latest meteorological observations in order to create accurate initial conditions for weather forecasts generated several times a day by the National Weather Service (e.g., [194–198]). The most advanced of these methods (known as 4D-Var and the Ensemble Kalman Filter) are able to go over many years of observations using a dynamic forecasting model and estimate the optimal values of model parameters for which there are no observations (e.g., [199–206]).
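
To make the parameter-estimation idea concrete, here is a minimal sketch in the style of an Ensemble Kalman Filter, applied to a toy growth model whose growth rate is never observed directly; every number is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth": a quantity growing at an unobserved rate, observed with noise.
true_r, P_true, obs = 0.02, 1.0, []
for _ in range(40):
    P_true *= np.exp(true_r)
    obs.append(P_true + rng.normal(0.0, 0.05))

# Ensemble filter sketch: augment the state (P) with the unknown parameter
# (r) and update both from their covariance with the observed variable.
N = 100
ens_P = np.full(N, 1.0)
ens_r = rng.normal(0.0, 0.02, N)   # prior belief about the growth rate
R = 0.05 ** 2                      # observation-error variance

for y in obs:
    ens_P = ens_P * np.exp(ens_r)                 # forecast each member
    dP = ens_P - ens_P.mean()                     # forecast anomalies
    innov = y + rng.normal(0.0, 0.05, N) - ens_P  # perturbed-obs innovations
    for ens in (ens_P, ens_r):                    # Kalman-style update
        cov = np.mean(dP * (ens - ens.mean()))
        ens += cov / (dP.var() + R) * innov

print(f"estimated growth rate: {ens_r.mean():.3f} (truth: {true_r})")
```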

Uncertainty is another important challenge in projecting future behavior and scenarios with models. This problem has been addressed successfully in meteorology by using, instead of a single forecast, an ensemble of typically 20–200 model forecasts created by adding perturbations in their initial conditions and in the parameters used in the models. These perturbations are selected to be compatible with estimated uncertainties of the parameters and initial conditions [207–210]. The introduction of ensemble forecasting in the early 1990s provided forecasters with an important measure of uncertainty of the model forecasts on a day-to-day and region-to-region basis. This made it possible to extend the length of the US National Weather Service’s weather forecasts made available to the public from 3 days to 7 days (e.g., [196]). A similar approach, i.e., running ensembles of model projections with perturbed parameters, could be used with coupled Earth–Human System models to provide policymakers with an indicator of uncertainty in regional or global projections of sustainability associated with different policies and measures. Calibration and tuning of coupled Human–Earth System models, as indicated above, could take advantage of optimal parameter estimation using advanced Data Assimilation, rather than following the more traditional approach of tuning individual parameters or estimating them from available observations.
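
A perturbed-parameter ensemble is equally simple to sketch. The projection model, the assumed parameter distribution, and the starting value below are illustrative stand-ins, not a real demographic or climate model:

```python
import numpy as np

rng = np.random.default_rng(1)

def project(pop0, rate, years):
    """Toy projection with a constant compound growth rate."""
    return pop0 * (1.0 + rate) ** years

# Draw the uncertain parameter from its assumed distribution, run the
# whole ensemble, and report a spread rather than a single trajectory.
rates = rng.normal(0.005, 0.003, size=200)
ensemble = project(7.3e9, rates, 85)

lo, mid, hi = np.percentile(ensemble, [5, 50, 95]) / 1e9
print(f"median ~{mid:.1f} billion; 90% range ~{lo:.1f}-{hi:.1f} billion")
```

Presenting policymakers with that spread, rather than a single number, is the ensemble-forecasting practice described above carried over to sustainability projections.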

Our proposed framework, together with Data Assimilation techniques, allows developing, testing, and optimizing measures and policies that can be implemented in practice for early detection of critical or extreme conditions that will lead to major risks to society and failure of supporting systems and infrastructure, and aid in the estimation of irreversible thresholds or regime-shifting tipping points [211–213]. Moreover, it allows for detecting parameters and externalities (such as inadequate measures or policies) that may play a significant role in the occurrence of catastrophes and collapses [214–217]. By adjusting the values of those parameters found to be influential through numerical experiments and simulations, short-term and long-term policy recommendations that can keep the system within optimum levels or sustainable development targets (e.g., Millennium Development Goals or the Post-2015 Development Agenda) can be designed and tested. This approach would allow policies and measures that have been found to be successful in specific cases to be modeled under different conditions or more generally.29 Effective policies and measures are needed for all sectors of the Human–Earth System Model described above [184,218]. A dynamical model of such systems should be capable of testing the effects of various policy choices on the long-term sustainability of the system [215].

The track record until recently on climate change shows that policy inaction is both dangerous and potentially very costly [168,193,219]. And yet, despite a long history of scientific warnings (please see Footnote 30 for a detailed description30), the many current ecological and economic impacts and crises, the future risks and dangers, the large number of international meetings and conferences on the urgent need for climate policies and measures, and the adoption of some national and regional climate policies, growth in global CO2 emissions from fossil fuels and cement has not only remained strong but is actually accelerating. The average annual rate of growth from 2000–2009 (~2.9%) was almost triple the rate of the previous two decades, 1980–1999 (~1.1%), while the most recent years, 2010–2012, have been even higher (~3.4%) [127,220].

The sobering fact is that very little has been accomplished in establishing effective carbon reduction policies and measures based on science. The development and implementation of policies promoting sustainable technologies for the future (e.g., renewable energy sources) becomes even more challenging because of intervention and obstruction by a number of vested interests seeking continued and expanded use of their existing nonrenewable technologies and resources (e.g., some in the coal, oil, and gas industries). This is further complicated by some political rejection of science-based future climate projections and unwillingness to consider alternative economic development pathways to lowering the emission of carbon dioxide and other GHGs from the Human–Earth systems. However, the commitments to reduce carbon emissions by the vast majority of the world’s countries in the recent agreement from the COP21 in Paris could signify a major policy turning point.31

One of the anonymous reviewers pointed out that the failure of policymakers to react sufficiently to the scientific understanding of global warming is paralleled by a similar failure with regard to scientific knowledge about population trajectories. Family planning policies have been pushed off policy agendas for the last few decades, and the subject of population has even become taboo. There are numerous non-evidence-based barriers that separate women from the information and means necessary to plan childbirths. The total fertility rate remains high in many countries not because women want many children but because they are denied these technologies and information, and often even the right to have fewer children. Access to family planning information and voluntary birth control is a basic human right and should be treated as such by international institutions and policy frameworks. Similarly, there is also an urgent need for improved educational goals worldwide. While these goals are repeatedly supported by both researchers and policymakers, policy implementation continues to be lacking. As a recent report by the Royal Society (the UK National Academy of Sciences) explains, improved education (especially for women) and meeting the large voluntary family planning needs that are still unmet in both developed and developing countries are fundamental requirements of future economic prosperity and environmental sustainability. This again emphasizes the critical need for science-based policies [98,182,184,218,221–227].32

The importance and imminence of sustainability problems at local and global scales, the dominant role that the Human System plays in the Earth System, and the key functions and services the Earth System provides for the Human System (as well as for other species), all call for strong collaboration of earth scientists, social scientists, and engineers in multidisciplinary research, modeling, technology development, and policymaking. To be successful, such approaches require active involvement across all disciplines that aim to synthesize knowledge, models, methods, and data. To take effective action against existing and potential socio-environmental challenges, global society needs scientifically-based development of appropriate policies, education that raises collective awareness and leads to actions, investment to build new or improved technologies, and changes in economic and social structures. Guided by such knowledge, with enough resources and efforts, and only if done in time, human ingenuity can harness advances in science, technology, engineering, and policy to develop effective solutions for addressing the challenges of environment and climate, population, and development. Only through well-informed decisions and actions can we leave a planet for future generations in which they can prosper.

Acknowledgments
We benefited from helpful discussions with many colleagues including Professors Mike Wallace, Jim Carton, Inez Fung, Deliang Chen, Jim Yorke, Ross Salawitch, Margaret Palmer, Russ Dickerson, Paolo D’Odorico, and Steve Penny.
We thank Mr Ian Munoz and Dr Philippe Marchand for technical assistance in producing Fig. 1.
As is the case with all independent peer-reviewed research, the views and conclusions in the paper are those of the authors alone. The funding organizations did not have any role in writing the paper and did not review it.
FUNDING
This work was supported by the University of Maryland Council on the Environment 2014 Seed Grant (1357928). The authors would like to acknowledge the following grants and institutions: SM, KF, and KH: National Socio-Environmental Synthesis Center (SESYNC)—US National Science Foundation (NSF) award DBI-1052875; JR: The Institute of Global Environment and Society (IGES); GRA: Laboratory Directed Research and Development award by the Pacific Northwest National Laboratory, which is managed by the Battelle Memorial Institute for the US Department of Energy; MAC: Office of Naval Research, research grant MURI N00014-12-1-0911; FMW: NSF award CBET-1541642; VMY: The Institute for New Economic Thinking (INET).
1
Planet Earth has been the habitat of humans for hundreds of thousands of years. Human life depends on the resources provided by the Earth System: air from the Earth's atmosphere; water from the atmosphere and rivers, lakes, and aquifers; fruits from trees; meat and other products from animals; and over the past 10,000 years, land for agriculture, and metals and other minerals from the Earth's crust. Until about 200 years ago, we used renewable biomass as the major source of materials and energy, but over the course of the past two centuries, we have instead become heavily dependent on fossil fuels (coal, oil, and natural gas) and other minerals for both materials and energy. These nonrenewable resources made possible both of the revolutions which drove the growth in consumption per capita and population: the Industrial Revolution and the Green Revolution. Our relationship with our planet is not limited to consuming its resources. Waste is an inevitable outcome of any production process; what is produced must return to the Earth System in some form. Waste water goes back to the streams, rivers, lakes, oceans, or into the ground; greenhouse and toxic gases go into the atmosphere, land, and oceans; and trash goes into landfills and almost everywhere else.
2
Using gross domestic product (GDP) per capita as a rough measure of consumption per capita, the extent of the impact of the Human System on the Earth System can be estimated from the total population and the average consumption per capita. This can also be seen from the defining equation for GDP per capita, i.e., GDP per capita = GDP/Population. One may rewrite this equation as GDP = Population × GDP per capita. By taking variations, we get:
dGDP/GDP = dPopulation/Population + d(GDP per capita)/(GDP per capita)
This equation simply means that the relative change in total GDP is the sum of two components: the relative changes in population and in GDP per capita. A graphical demonstration of this decomposition can be seen in the inset of Fig. 1. Data from [229], with updates from the Maddison Project, 2013 (for the underlying methodology of the updates see [228]). Population data for the inset from [230].
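To make the decomposition concrete, here is a minimal numerical sketch in Python, using entirely hypothetical figures (not data from the paper), showing that for small changes the relative change in GDP is approximately the sum of the relative changes in population and GDP per capita:

```python
# Hypothetical figures for one year of growth (not data from the paper).
pop, pop_next = 7.00e9, 7.08e9            # population now and a year later
gdppc, gdppc_next = 10_000.0, 10_200.0    # GDP per capita (USD)

gdp = pop * gdppc
gdp_next = pop_next * gdppc_next

growth_gdp = gdp_next / gdp - 1           # relative change in total GDP
growth_pop = pop_next / pop - 1           # relative change in population
growth_gdppc = gdppc_next / gdppc - 1     # relative change in GDP per capita

# For small changes, dGDP/GDP ~= dPopulation/Population + d(GDPpc)/GDPpc:
print(f"GDP growth:               {growth_gdp:.4%}")                  # ~3.17%
print(f"population + per capita:  {growth_pop + growth_gdppc:.4%}")  # ~3.14%
```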
3
In fact, the UN’s 2015 Population Revision has already raised the global total in 2100 by 360 million to 11.2 billion just from the last estimate published in 2013 [1].
4
While there has been some reduction of the energy intensity and emissions intensity of economic growth in wealthy countries, one has to be cautious about extrapolating recent improvements, as small as they may be, because these improvements have been at least partly due to the outsourcing of energy-intensive sectors to poorer countries [122–130,231–233], and because there are basic physical limits to further efficiency improvements, especially in the use of water, energy, food, and other natural resources [113,114,234–236].
5
For example, between 1950 and 1984, the production of grains increased by 250% due to the use of fossil fuels for fertilization, mechanization, irrigation, herbicides, and pesticides [237]. These technological advances, together with the development of new seed varieties, are referred to as the ‘Green Revolution’ that allowed global population to double in that period [238].
6
Thus, while the materials intensity of GDP has declined (very slowly: 2.5 kg/$ in 1950, 1.4 kg/$ in 2010), per capita materials use continues to increase. The only materials category whose per capita use has remained relatively stable is biomass [4,132], probably reflecting the physical limits of the planet's regenerating natural resources to continue to provide humans with ever-growing quantities of biomass [239,240]. (See [134] for a conceptual model of regenerating natural resources.)
7
In some estimates, as much as 10–15 million years for carbon [241].
8
Rates of deforestation and agricultural expansion have accelerated in recent years with extensive new infrastructure providing conduits for settlement, exploitation, and development. Even within the remaining habitat, fragmentation is causing rapid species loss or alteration, and is producing major impacts on biodiversity, regional hydrology, and global climate, in particular in tropical forests, which contain over half of Earth’s biodiversity and are an important driver in the climate system. Ongoing worldwide habitat fragmentation, together with anthropogenic climate change and other human pressures, may severely degrade or destroy any remaining ecosystems and their wildlife [242–251]. For example, the Living Planet Index, which measures biodiversity based on 14,151 monitored populations of 3,706 vertebrate species, shows a staggering 58% decline in populations monitored between 1970 and 2012 [252]. Continuation of these trends could result in the loss of two-thirds of species populations by 2020 (only 4 years from now) compared to 1970 levels (i.e., in just half a century).
9
The US, one of the countries with the highest per capita resource use in the world (e.g., with an energy consumption per capita 4 times that of China and 16 times that of India in 2010 [87]), is projected to grow in population by ~50% from both natural increase and immigration, generating a very large increase in total resource use and waste generation for the US alone.
10
For example, while Africa's population is projected to rise over 6-fold from 366 million in 1970 to 2.4 billion in 2050, the UN's projection of annual net emigration from Africa remains about constant until 2050, at ~500,000, similar to the average from 1970 to 2000; thus the projected percentage emigrating declines sharply. (For comparison, between 1898 and 1914, ~500,000 people emigrated each year just from Italy alone, when its population was only 30–35 million.) Then, from 2050 to 2100, the annual net emigration is arbitrarily projected to decline smoothly to zero, even as Africa's projected population continues rising to over 4 billion [3]. However, recent international migration has increased on average with global population [253], and net migration to the developed countries increased steadily from 1960 to 2010 [3], and more explosively recently. Thus, the UN projection of emigration seems unrealistically low, both relative to Africa's increasing population and in the context of a rapidly aging, and supposedly shrinking, population in the developed countries, as well as recent migration pattern alterations following conflicts and associated social disruptions. The United Nations High Commissioner for Refugees estimates that by the end of 2015, a total of 65.3 million people in the world were forcibly displaced, increasing at a rate of ~34,000 people per day. There are 21.3 million refugees worldwide, with more than half from Afghanistan, Syria, and Somalia [254]. Yet this figure reflects only refugees from persecution and conflict; it does not include people displaced by climate change, famines, and sea level rise. Net migration in 2015 to Germany alone was 1.1 million [255].
The UN’s 2012 projections of migration for other regions are similarly arbitrary and unrealistic. Net annual emigration from Latin America is projected to decline from nearly 1.2 million in 2000–2010 to ~500,000 by 2050, and then decline to zero by 2100. Net annual immigration to Europe rose from 41,000 in 1960–1970 to almost 2 million in 2000–2010, and explosively today due to increasing social strife, and yet, the UN’s projection for 2010–2050 is a continuous decline in net immigration down to only 900,000 by 2050, and then a decline to zero by 2100 [3]. Even the UN Population Division itself admits, ‘We realize that this assumption is very unlikely to be realized but it is quite impossible to predict the levels of immigration or emigration within each country of the world for such a far horizon’ [3].
11
In order to limit the total increase in average global temperatures within the context of current climate change negotiations over carbon budgets, there is a maximum total amount of carbon that can be emitted globally; thus carbon emissions must be apportioned across countries and across time. Lower estimates of migration to developed countries mean lower estimates of future emissions in developed countries, which means the developed countries are not required to make as much effort today to lower their own emissions [104].
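As a toy illustration of this apportionment logic (a sketch with invented numbers, not the method of [104]): if a fixed global carbon budget were split in proportion to projected population, a higher migration assumption would shift allocation, and hence required near-term effort, toward the developed countries.

```python
# Invented numbers for illustration only; not the apportionment rule of [104].
GLOBAL_BUDGET_GT = 1000.0  # hypothetical remaining global budget, GtCO2

def per_capita_allocation(pop_developed, pop_developing):
    """Split a fixed budget in proportion to projected population."""
    total = pop_developed + pop_developing
    return (GLOBAL_BUDGET_GT * pop_developed / total,
            GLOBAL_BUDGET_GT * pop_developing / total)

low_migration = per_capita_allocation(1.2e9, 8.0e9)   # few migrants assumed
high_migration = per_capita_allocation(1.5e9, 7.7e9)  # more migration assumed

print(f"developed-country allocation, low migration:  {low_migration[0]:.0f} GtCO2")
print(f"developed-country allocation, high migration: {high_migration[0]:.0f} GtCO2")
```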
12
These dramatic declines appear highly unlikely given that countries like Japan and Germany have not yet declined by more than ~1% [3], and already their governments have enacted a series of policies to encourage higher birth rates. In fact, there is evidence for the efficacy of various family policies in the recent fertility rebounds observed in several developed countries [97]. Similarly, Russia, which saw its fertility rate plunge after 1989 (reaching a low of 1.19 in 1999 from 2.13 in 1988), enacted pronatalist policies and liberalized immigration. The fertility rate has since rebounded, and the population decline reversed in 2009.
13
Despite widespread talk of population decline, it is important to emphasize that the only countries in the world to have experienced any declines beyond ~1% have all been associated with the special circumstances of the collapse of the former Soviet Bloc (and some microstates, such as a few island nations), and even in these cases, migration has played a major role in population changes. This is even true within Germany, where the only Länder (provinces) which have declined significantly in population are all from the former East Germany.
14
For example, the 1999 UN projections significantly overestimated the time it would take to add the next billion people [256]. Worse still, the 2015 estimate for the world’s total population in 2100 [1] has gone up by over 2 billion just since the 2004 estimate [257]. Even the 2010 UN projections had to be revised upwards in 2012 because previous estimates of total fertility rates in a number of countries were too low, and in some of the poorest countries the level of fertility appears to have actually risen in recent years [3,258,259], and the 2012 estimates have again been revised upwards in 2015 [1]. To put all this in perspective, the 2004 estimates projected a peak in world population of 9.22 billion, but in the 2015 projection, 9.22 billion will be reached as early as 2041, and will still be followed by at least another six decades of growth.
15
Poorer populations are expected to be more heavily impacted by climate change and other mounting environmental challenges despite having contributed much less to their causes [136,166,167,260]. International and internal migration is increasingly being seen as an important component of adaptation and resilience to climate change and a response to vulnerabilities from other environmental risks. Given the aging social structures in some parts of the world, policies that support increased migration could be important not only for environmental adaptation, but also for the realization of other socioeconomic and demographic goals. Thus, migration will help both the sending and receiving countries. Of course, given the scale of projected population growth, migration alone is unlikely to be able to balance the regional disparities.
16
The scientific community has urged limiting the global mean surface temperature increase relative to pre-industrial values to 2°C. The longer emission reductions are postponed, the greater the economic challenges of staying on a 2°C pathway [261]. Given the role that developed countries have played historically in the consumption of fossil fuels, and their much higher per capita carbon emissions today, it is imperative that the developed countries lead the way and establish a successful track record in achieving such reductions. Thus, it is positive that at the United Nations Framework Convention on Climate Change Conference of the Parties (COP21) in Paris in December 2015, governments crafted an international climate agreement indicating their commitments to emissions reduction for the near term (to 2025 or 2030). Most importantly, the US committed to reduce economy-wide GHG emissions by 26%–28% below 2005 levels by 2025, and the EU committed to reduce 2030 GHG emissions by 40% relative to 1990 (excluding emissions from land-use changes). Assuming that the goals of the Intended Nationally Determined Contributions (INDCs) are fulfilled, the results in [262] and [261] show that a successful Paris agreement on near-term emissions reductions will be valuable in reducing the challenges of the dramatic long-term transformations required in the energy system to limit global warming to 2°C. The Paris framework will succeed even further if it enables development of subsequent pathways leading to the required additional global emissions reductions.
17
As another example, millions of metric tons of plastic enter the oceans every year and accumulate throughout the world’s oceans, especially in all subtropical gyres [263,264].
18
The effect of technological change can be observed in the transition from agrarian to industrial society. This industrialization raised agricultural yields largely due to increasing inputs, thereby allowing rapid Human System expansion, but it also generated a societal regime shift that greatly increased per capita resource use and waste generation [131].
19
One can generalize the definition of Carrying Capacity to any subsystem with different types of natural resources coupled with sociodemographic variables. For example, the subsystems for water, energy, and agriculture—each coupled bidirectionally to human sociodemographic variables and to each other—result in Water CC, Energy CC, and Agriculture CC. Water CC can be defined as the level of population that can be sustained at a particular per capita consumption and a given level of water sources and supply in the area under study. In general, this level depends on both human and natural factors. For example, Water CC is determined by the natural flow rate of water into and out of the area, precipitation and evaporation, the withdrawal rate from water sources, dispensing technology, recycling capacity, etc. Moreover, Water, Energy, or Agriculture CC in a certain area can be imported from other regions to temporarily support a larger population and consumption [123,265,266]. Recent literature has emphasized the integrated nature of agricultural, energy, and water resources, and modeling the interactions of these subsystems is essential for studying the food–energy–water nexus [267–271]. In order to understand and model either the Human or the Earth System, we must model all these natural and human subsystems as interactively and bidirectionally coupled.
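A minimal sketch, under hypothetical numbers, of the Water CC definition just given (the population supportable at a given per capita consumption and a given water supply):

```python
def water_carrying_capacity(inflow_km3, recycled_km3, per_capita_m3):
    """Population supportable by a sustainable annual water supply,
    at a fixed per capita consumption (all inputs hypothetical)."""
    supply_m3 = (inflow_km3 + recycled_km3) * 1e9  # km^3/yr -> m^3/yr
    return supply_m3 / per_capita_m3

# Hypothetical region: 50 km^3/yr renewable inflow, 5 km^3/yr recycled,
# and 1,200 m^3 consumed per person per year.
print(f"Water CC: {water_carrying_capacity(50, 5, 1200):,.0f} people")  # ~45.8M
```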
20
A recent study focusing on the many collapses that took place in Neolithic Europe concluded that endogenous causes, i.e., overrunning CC and the associated social stresses, have been the root cause of these collapses [214,272].
21
Collapses could also happen due to rapid decline of CC as a result of environmental degradation. Droughts and climate change can decrease natural capacities and regeneration rates, which in turn lead to a decline of CC. (Motesharrei et al. [134] describe how to model these factors for a generic system, termed Nature Capacity and Regeneration Rate, and how they determine CC.) The resulting gap between total consumption and CC can lead to conflicts and the ensuing collapses. For example, see recent literature that shows the impacts of climate change on conflicts [159–161], a potential precursor to collapses.
22
For example, ‘Until 2008 not a single earthquake had ever been recorded by the U.S. Geological Survey from the Dallas-Fort Worth (DFW) area…Since then, close to 200 have shaken the cities and their immediate suburbs. Statewide, Texas is experiencing a sixfold increase in earthquakes over historical levels. Oklahoma has seen a 160-fold spike in quakes…In 2014 the state’s earthquake rate surpassed California’s’. [273].
23
The USGCRP report states that ‘Current and future climate impacts expose more people in more places to public health threats…Almost all of these threats are expected to worsen with continued climate change’.
24
Such interactions take place not just at a global scale but also at the ecosystem and local habitat scales. Ecosystems at the regional and local scales provide critical habitat for wildlife species, thus preserving biodiversity, and are also an essential source of food, fiber, and fuel for humans, and forage for livestock. Ecosystem health, composition, function, and services are strongly affected by both human activities and environmental changes. Humans have fundamentally altered land cover through diverse use of terrestrial ecosystems at the local scale, which then impacts systems at the global scale. Additional changes have taken place as a result of climate change and variability [274]. Furthermore, human pressures are projected to have additional repercussions for species survival, biodiversity, and the sustainability of ecosystems, and in turn to feed back on humans' food security and economic development [141]. Thus, a key challenge in managing change and improving the resilience of terrestrial ecosystems is to understand the role that different human and environmental forces have on them, so that strategies targeting the actual drivers and feedbacks of coupled components of change can be developed and implemented. Understanding how terrestrial ecosystems function, how they change, and what limits their performance is critically important to determine their Carrying Capacity for accommodating human needs as well as serving as a viable habitat for other species, especially in light of the anticipated increase in global population and resource consumption for the rest of this century and beyond. Biodiversity and ecosystem services in forests, farmlands, grazing lands, and urban landscapes are dominated by complex interactions between ecological processes and human activities. In order to understand such complexity at different scales and the underlying factors affecting them, an integrated Human–Earth systems science approach that couples both societal and ecological systems is needed. Humans and their activities are as important to the changing composition and function of the Earth system as the environmental conditions and their natural variability. Thus, coupled models are needed to meet the challenges of overcoming mismatches between the social and ecological systems and to establish new pathways toward 'development without destruction' [242–248,250,251].
25
MIT: Massachusetts Institute of Technology; IGSM: Integrated Global System Modeling framework; US DOE: United States Department of Energy; GCAM: Global Change Assessment Model; IIASA: International Institute for Applied Systems Analysis; MESSAGE: Model for Energy Supply Systems And their General Environmental impact; PBL: Netherlands Environmental Assessment Agency; IMAGE: Integrated Modelling of Global Environmental Change.
26
Furthermore, the use of GDP as a key measure and determinant in these future projections is itself highly problematic, because it is a very weak measure of human well-being, economic growth, or societal prosperity [275]. GDP accounts for the value of neither natural capital nor human capital, ignores income and wealth inequality, neglects both positive and negative externalities, and captures social costs and environmental impacts only to the extent that prices incorporate them. Any economic activity, whether deleterious or not, adds to GDP as long as it has a price. For example, labor and resources spent to repair or replace losses due to conflicts or environmental damage are counted as if they add to—rather than subtract from—total output. Alternative measures, such as the Genuine Progress Indicator, the Sustainable Society Indicator, the Human Development Index, and the Better Life Index of the Organization for Economic Co-operation and Development, have been developed [276–278]. These measures show that, especially since the 1970s, the large increases in GDP in the developed countries have not been matched by increases in human well-being. Integrated and coupled Human–Earth system models will allow for the development of much more accurate and realistic measures of the actual productivity of economic activity and its costs and benefits for human well-being. Such measures will allow for the valuation of both natural and human capital and for defining and developing sustainability metrics that are inclusive of the wealth of natural and human capital. They will also bring together the currently disparate debates on environment/climate, economics, demographics, policies, and measures to put the Human–Earth System on a more sustainable path for current and future generations.
27
Input–Output analysis can account for the flows of resource inputs, intermediate and finished goods and services, and waste outputs along the production chain [279,280]. By accounting for the impacts of the full upstream supply chain, IO analysis has been used in life-cycle analysis [281] and for linking local consumption to global impacts along global supply chains [122,127,282,283]. IO models can be extended with environmental parameters to assess different environmental impacts from production and consumption activities, including water consumption [123,284,285], water pollution [285,286], carbon dioxide emissions [287–289], land-use and land-cover change [130,290,291], and biodiversity [283]. Such models have been developed and applied at various spatial and temporal scales. Since emissions embodied in trade have been growing rapidly, resulting in an increasing divergence between territorial-based and consumption-based emissions, territorial measures alone cannot provide a comprehensive and accurate analysis of the factors driving emissions nor the effectiveness of reduction efforts [292]. IO models can provide these consumption-based calculations and have been employed to identify and quantify key drivers for emissions and energy consumption (such as population growth, changes of consumption patterns, and technical progress [288,293,294]) as well as the environmental impacts of social factors (such as urbanization and migration) reflecting consumption patterns of different categories of households with high spatial detail [282,295]. IO analysis can also be used for simulating potential future states of the economy and the environment (e.g., [296]), through dynamically updating technological change and final demand, or employing recursive dynamics to explore explicit scenarios of change.
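For illustration only, here is a minimal environmentally extended Input–Output sketch in Python with a hypothetical three-sector economy (not data from the cited studies). The Leontief inverse converts final demand into the total output required along the full supply chain, and per-unit emission intensities then yield consumption-based emissions:

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A: A[i, j] is the
# input from sector i needed per unit of output of sector j.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.1, 0.0, 0.3]])
y = np.array([100.0, 50.0, 80.0])   # final demand per sector
e = np.array([0.5, 1.2, 0.3])       # hypothetical tCO2 per unit output

# Leontief relation: x = (I - A)^-1 y gives total (direct + indirect) output.
x = np.linalg.solve(np.eye(3) - A, y)

print("total output per sector:", x.round(1))
print("consumption-based emissions:", round(float(e @ x), 1), "tCO2")
```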
28
Many of the variables in such a model are affected by processes at the global scale, while decisions are often made at a local scale. Choosing variables from the subsystems depends on the specific goals of the model. Reconciling various scales spatially or temporally can be done through downscaling, aggregating, and averaging for variables defined at smaller scales. Moreover, the Human System strongly influences consumption even at these smaller local scales. For example, Srebric et al. [297] show that not only population size but also behavior of people at the community scale strongly affects local energy consumption. This example shows that coupled Human System models are needed at various scales to project consumption patterns, especially for energy and water. We thank the anonymous Reviewer No. 2 for emphasizing the importance of coupling across various scales, and for many other helpful comments.
29
There are numerous examples of policies successfully tackling many of the challenges identified in this paper. For example, the province of Misiones, Argentina, with policies for forest protection and sustaining local incomes, stands out by having very high remotely sensed values of NDVI (vegetation index) compared to the neighboring regions [298]. The state of Kerala, India, despite a very low GDP per capita (under $300 until the 1990s), through policies expanding access to education and medical care, enjoys higher life expectancy, lower birth rates, lower inequality, and superior education compared to the rest of India [299–301]. Formal primary, secondary, and tertiary education can also reduce societal inequalities and improve economic productivity [98,182,184,302]. Education can also be offered in other forms; for example, education through mass media can be influential in changing long-term cultural trends and social norms, as can be seen in the successful use of soap operas to reduce fertility rates in Brazil [303]. There have also been other extremely successful noncoercive family planning policies, e.g., in Thailand, Mexico, and Iran [94,99,183,304–308]. A recent paper in PNAS [309] showed that slowing population growth could provide 16%–29% of the emissions reductions needed by 2050 and 37%–41% by 2100, and a study by Wire [310] shows family planning is four times as efficient as adopting low-carbon technologies for reducing carbon in the atmosphere and ocean. Successful local and regional policies on air quality include California sharply reducing NO2 in Los Angeles by 6.0 ± 0.7% per year between 1996 and 2011 through strict policies on vehicular emissions [311]. Maryland succeeded in reducing SO2 emissions per unit energy produced from power plants by ~90% [312]. Regulations have reduced levels of SO2 and NO2 over the eastern US by over 40% and 80%, respectively, between 2005 and 2015. Over a similar period in India, these levels grew by more than 100% (SO2) and 50% (NO2), showing the possible dangers ahead absent effective policies [313].
30
Anthropogenic climate change driven by carbon emissions, water vapor, and surface albedo was theorized as early as the 19th century, and an empirical warming trend was measured by the 1930s. Scientists came to understand the fundamental mechanisms of climate change by the 1950s and 60s (e.g., [314,315]). The first international scientific Conference on the Study of Man’s Impact on Climate was held in 1971 and issued a report warning about the possibilities of melting polar ice, reduced albedo, and other unstable feedbacks that could lead to accelerated climate change ‘as a result of man’s activities’ [316]. By 1979, a US National Academy of Sciences panel, chaired by Jule Charney, issued a report confirming the findings of climate change models [317], and over the course of the 1980s, the empirical evidence confirming ongoing climate change grew very rapidly. In 1988, James Hansen testified before the US Congress about the near certainty of climate change. The first IPCC Assessment Report was completed in 1990, and the Fifth in 2014, with each report successively warning of the increasingly grave consequences of our current trajectory. The latest report warns that without new and effective policies and measures, increases in global mean temperature of 3.7°C–4.8°C are projected by 2100 relative to pre-industrial levels (median values; the range is 2.5°C–7.8°C) [318]. Other national and global institutions have also warned of the disastrous consequences of such warming (e.g., [274]). As a recent World Bank assessment [261, p. xvii] states, ‘The data show that dramatic climate changes, heat and weather extremes are already impacting people, damaging crops and coastlines and putting food, water, and energy security at risk…The task of promoting human development, of ending poverty, increasing global prosperity, and reducing global inequality will be very challenging in a 2°C [increase] world, but in a 4°C [increase] world there is serious doubt whether this can be achieved at all…the time to act is now’. Scientists have also long warned about the consequences of climate change for international security (e.g., [319–321]).
31
With more than 180 countries, covering ~90% of global emissions, having committed to submit INDCs, the Paris framework forms a system of country-level, nationally determined emissions reduction targets that can be regularly monitored and periodically escalated. While analysis [262] of the INDCs indicates that, if fully implemented, they can reduce the probability of reaching the highest levels of temperature change by 2100 and increase the probability of limiting global warming to 2°C, achievement of these goals still depends on the escalation of mitigation action beyond the Paris Agreement. Even if the commitments are fulfilled, they are not enough to stay below the 2°C pathway [261,262,322], but the Paris framework is an important start for an eventual transformation in policies and measures. Thus, the INDCs can only be a first step in a deeper process, and the newly created framework must form an effective foundation for further actions on emissions reductions.
32
We thank anonymous Reviewer No. 1 for guiding us to add all the important points in this paragraph and the associated citations.
© Medical Xpress 2011 – 2017, Science X network
Accessed at https://medicalxpress.com/news/2017-02-disease-superspreaders-ebola-epidemic.html on February 15, 2017.

.

Research for assessment, not deployment, of Climate Engineering: The German Research Foundation's Priority Program SPP 1689

Andreas Oschlies1 and Gernot Klepper2
1GEOMAR Helmholtz Centre for Ocean Research Kiel, Kiel, Germany; 2Kiel Institute for the World Economy, Kiel, Germany

Received 29 AUG 2016, Accepted 27 DEC 2016, Accepted article online 31 DEC 2016

Abstract The historical developments are reviewed that have led from a bottom–up responsibility initiative of concerned scientists to the emergence of a nationwide interdisciplinary Priority Program on the assessment of Climate Engineering (CE) funded by the German Research Foundation (DFG). Given the perceived lack of comprehensive and comparative appraisals of different CE methods, the Priority Program was designed to encompass both solar radiation management (SRM) and carbon dioxide removal (CDR) ideas and to cover the atmospheric, terrestrial, and oceanic realms. First, key findings obtained by the ongoing Priority Program are summarized; they reveal that, compared to earlier assessments such as the 2009 Royal Society report, more detailed investigations tend to indicate less efficiency, lower effectiveness, and often lower safety. Emerging research trends are discussed in the context of the recent Paris agreement to limit global warming to less than two degrees and the associated increasing reliance on negative emission technologies. Our results show that when deployed at scales large enough to have a significant impact on atmospheric CO2, even CDR methods such as afforestation—often perceived as "benign"—can have substantial side effects and may raise severe ethical, legal, and governance issues.

We argue that before being deployed at climatically relevant scales, any negative emission or CE method will require careful analysis of its efficiency, effectiveness, and undesired side effects.

1. Introduction

Atmospheric CO2 concentrations continue to rise rapidly despite sound scientific evidence about their climatic impacts and despite all political efforts to reduce anthropogenic CO2 emissions at national and international levels. This has brought climate engineering (CE) ideas more and more into the focus of the scientific and political search for possible options for action. In the years following Paul Crutzen's [2006] editorial, the debate intensified, and an increasing number of media reports, scientific reviews, and new scientific research appeared on the scene.

In Germany, early scientific research on CE included work on simulating solar radiation management, which later became part of the European Implications and risks of engineering solar radiation to limit climate change (IMPLICC) project [Schmidt et al., 2012] and the Geoengineering Model Intercomparison Project [GeoMIP, Robock et al., 2011]. At the Marsilius Kolleg at the University of Heidelberg, an interdisciplinary research project was established in 2009 with a cohort of PhD students from different disciplines investigating various aspects of CE. A group of oceanographers at the Alfred Wegener Institute in Bremerhaven engaged in small-scale field experiments in the Southern Ocean to study biogeochemical and ecological impacts of iron fertilization, the latest of which became known as the German-Indian LOHAFEX experiment carried out from the German research icebreaker RV Polarstern in early 2009. After protests from non-governmental organizations and concerned citizens, the relevant federal ministries called for a rapid independent scientific assessment regarding possible side effects and the legal situation before the experiment could be allowed to go ahead. This happened while the ship was already on its way from Cape Town into the Southern Ocean. The concentrated and intensive effort of writing these assessments within a few days brought together scientists from different disciplines to assess the justification and possible implications of the planned experiment and stimulated further work, leading to several interdisciplinary scientific publications regarding iron fertilization [Güssow et al., 2010; Oschlies et al., 2010; Rickels et al., 2010, 2012].

2. First Steps Towards a Coordinated Research Strategy

Apart from these early uncoordinated and often narrowly focused local research efforts, the German scientific community had been mostly reluctant, if not skeptical, to engage in research on geoengineering. Traditionally, the focus of research has been on mitigation, which has also been at the center of the public debate and societal efforts, such as the "Energiewende," and many scientists did not even regard CE as a sensible option worth studying [Schellnhuber, 2011]. However, in response to the intensifying international debate, in summer 2008 the National Committee for Global Change Research, jointly funded by the German Research Foundation (DFG) and the Federal Ministry of Education and Research (BMBF), initiated the first of a series of DFG-funded round table discussions that were open to all scientists eligible for DFG funding. This event, entitled "Geoengineering – the Role of the Sciences," brought together about 40 concerned scientists from a wide range of disciplines, including natural sciences, social sciences, international law, and ethics. The aim of this bottom–up approach was to collaboratively develop a responsible framework for identifying the relevant scientific issues regarding CE and to discuss whether research was needed to better inform the scientific and political debate surrounding CE and, if so, what kind.

The series of interdisciplinary round table discussions gained momentum over subsequent meetings held in Eisenach, Kiel, and Hildesheim between 2010 and 2012. In parallel, the BMBF commissioned a scoping study to describe the state of the debate on CE, which was written by an interdisciplinary group of authors, many of them part of the round table discussions [Rickels et al., 2011]. Following the earlier Royal Society report [Royal Society, 2009], the more detailed BMBF study provided a comprehensive survey of scientific, economic, political, social, ethical, and legal aspects of the CE debate. The various CE methods were categorized into radiation management (RM, which largely overlaps with solar radiation management [SRM] but also includes ideas to modify the long-wave radiation impacts of cirrus clouds) and carbon dioxide removal (CDR).

Initially, interdisciplinary discussions put a lot of weight on the intricacies of SRM, arguably the CE idea that had been discussed predominantly and most controversially in the international scientific community, including in Crutzen's [2006] paper. While a number of international studies had investigated the potential and some environmental impacts of idealized deployments of SRM, less scientific information was available on most CDR methods (with the possible exception of marine iron fertilization). Questions surrounding legal, economic, societal, and ethical issues of SRM and CDR were just beginning to be addressed by the scientific community. In the round table discussions, the aim therefore emerged to develop a thorough, critical, and unbiased analysis and assessment of CE across a broad range of scientific, environmental, economic, social, legal, political, ethical, and communicative dimensions. To facilitate the provision and exchange of information among the scientists (most of those participating in the round table discussions had not previously been engaged in CE research) and the interested public, the website www.climate-engineering.eu was established at the Kiel Earth Institute in summer 2011; it has since collected and provided available national and international information regarding CE on a daily basis.

3. The Emergence of the DFG-Priority-Program on CE

In light of the intensifying international scientific debate about CE and personal concerns regarding the need to critically assess CE in an unbiased and transparent framework, members of the round table discussions decided to apply for a Priority Program, a special funding instrument offered by the DFG that aims at regionally distributed, dedicated interdisciplinary research activities. In this funding scheme, a first review process evaluates a framework proposal and subsequently approves the installation and the funding volume of such a Priority Program. In the second round, an open call for individual proposals is issued, and all DFG-eligible scientists can bid into this funding envelope, again in a bottom–up fashion. A scientific review panel that is independent of the Priority Program’s coordinator(s) then evaluates these proposals, ensures scientific quality, and eventually decides about the composition of the individual projects funded within the Priority Program. The open call and independent review process ensured a fair and unbiased science-driven bottom–up procedure and successfully brought in a number of scientists not previously involved in CE research.

A first attempt to install such a Priority Program in 2011 failed since the DFG Senate was not convinced that research on CE should be funded at all by the DFG. The Senate therefore asked the National Committee for Global Change Research as well as the two DFG senate commissions for oceanography and for future tasks of the geosciences to prepare a statement specifying whether research on CE should be supported by the DFG and, if so, suggesting fundamental research needs regarding CE. This statement, available only in German (dfg.de/dfg_magazin/forschungspolitik_standpunkte_perspektiven/climate_engineering/index.html), acknowledges substantial research needs addressing scientific questions regarding CE and recommends multidisciplinary research for assessment and not deployment of CE, specifically with respect to evaluating effects and side effects considering the scientific, technical, political, legal, societal, and ethical dimensions.

The Senate of the DFG endorsed the statement and opened the way for a reconsideration of the research proposal.

A revised proposal for setting up the Priority Program addressed these recommendations and was submitted by the consortium of concerned scientists in autumn 2011 and approved in spring 2012 under the name "SPP 1689: Climate Engineering – Risks, Challenges, Opportunities?" with a funding envelope of about 5 million euros for each of the two 3-year funding periods. The main scientific aims put forward in this framework proposal were:

  • Assessing the potential effects, uncertainties, and challenges of CE
  • Evaluating the legal, moral, and public acceptability issues of potential CE measures

To organize the research, three classes of exemplary CE measures were chosen to address atmospheric, oceanic, and terrestrial measures: aerosol injection into the troposphere and stratosphere, ocean alkalinization, and terrestrial biomass CDR. These three classes were selected because they differed not only in approach and deployment but also in response timescale, climate-change potential, and likely extent of potential side effects. They were also considered to cover a large part of the spectrum of ethical, cultural, legal, political, and other societal issues.

Individual proposals had to address at least one of the scientific aims and one of the exemplary CE measures. They also had to be genuinely interdisciplinary or be closely linked to a partner proposal from a different discipline. After careful evaluation by an international review panel, the first phase of the Priority Program started in spring 2013 with nine scientific projects and one coordination project. Altogether, 19 PhD students and 7 postdocs were funded at 16 institutions distributed all over Germany, with partners in Austria and France. In the spirit of the bottom–up initiative of concerned scientists that led to the Priority Program, all interested scientists who could not be funded within the limited funding envelope were invited to become associated members of the Program. They were asked to submit a brief statement of interest that had to be approved by the Priority Program's executive board, which consists of one voting member per participating institute. All members and associated members have full access to unpublished data and model output generated within the Program; they are also invited to all project workshops. Workshops are held about twice a year to foster collaboration across disciplines and among the different participating institutions. Additional workshops have been set up among the PhD students to inform each other about their respective disciplinary research approaches. A complete list of projects, activities, and publications can be found at www.spp-climate-engineering.de/.

4. Key Findings of the First Phase of the Priority Program

Key findings of the first phase include a first comparison of different CE measures within a single, intermediate-complexity Earth system model [Keller et al., 2014], which revealed that even under optimistic assumptions about large-scale deployment, the CDR methods investigated are unlikely to have sufficient potential to turn the climate of a high-emission scenario (IPCC's Representative Concentration Pathway [RCP] 8.5) into one resembling a medium mitigation scenario (RCP 4.5). An unexpected side effect of simulated SRM was a substantial increase in terrestrial carbon storage, predominantly in soils, resulting from reduced respiration at lower temperatures. (We refer to side effects as any effect that is not the primary target of the respective CE method. There is no value judgment in this term; one person could view as positive an effect that another perceives as negative.) For the simulated SRM intensity, the associated drawdown in atmospheric CO2 was even larger than for any of the simulated massive CDR deployments [Keller et al., 2014]. After termination of SRM, most of the stored carbon was quickly released back to the atmosphere through enhanced respiration in response to the rapid warming after termination, an effect known from earlier studies [Matthews and Caldeira, 2007]. Large-scale afforestation generated a substantial albedo modification in the model that even led to a net warming of the planet. The simulated massive deployment of different CE measures in a single model showed that at scales large enough to have a significant climate impact in a high-emission world, CDR and SRM methods may be accompanied by side effects and raise governance issues that are more similar than previously assumed (as reflected, e.g., in the separation of the National Academy of Sciences' report on Climate Intervention into CDR and SRM parts).

Figure 1. Update of the "blob" diagram of the Royal Society [2009] report that assessed individual CE ideas with respect to the dimensions effectiveness (vertical axis), affordability (horizontal axis), timeliness (blob size), and safety (color). Arrows indicate the direction of change in the assessment by new studies since the Royal Society report. Abbreviations in blue refer to the first letters of the first author's name and the year of the respective Priority Program publication [Ilyina et al., 2013; Humpenöder et al., 2014, 2015; Moosdorf et al., 2014; Aswathy et al., 2015; Mengis et al., 2015, 2016; Saxler et al., 2015; Bonsch et al., 2016; Sonntag et al., 2016, in addition to references cited elsewhere in the paper], all provided in the reference list of this article.

Other studies performed in the framework of the Priority Program found that the costs of CE often tended to be larger [Klepper and Rickels, 2014] and efficiencies lower [Niemeier and Timmreck, 2015] than previous studies had estimated. Overall, results of the Priority Program obtained so far tend to indicate that the closer one looks, the less efficient CE becomes. This is illustrated by an update of the earlier schematic “blob diagram” of the Royal Society [2009] report (Figure 1), which had been developed to provide a visual image of four dimensions of various CE measures.

The figure shows, on the vertical and horizontal axes, effectiveness and affordability of the different CE options using a qualitative scale. The size and the color of the dots represent their timeliness and safety.

Note that there is no well-defined scale for any of the dimensions, nor are these the only issues relevant for CE; relevant issues also include ethical, social, and political aspects; governance; and public perception and acceptance (see, e.g., Kruger [2015] for a critique of this diagram). Here, we show relative changes in the assessment of different CE ideas inferred from studies published as part of the Priority Program, acknowledging that other dimensions were also investigated. Studies on ethical aspects [Sillmann et al., 2015; Baatz et al., 2016] proposed that far-reaching mitigation is morally obligatory prior to any engagement in SRM [Baatz, 2016], and the examination of public perception and acceptance of CE revealed that knowledge about the possibility of lowering global temperature through aerosol injection does not reduce individual mitigation efforts [Merk et al., 2016].

Overall, "blobs" have moved predominantly downward, toward lower effectiveness, but also to the left, indicating higher cost of deployment. Many colors have shifted toward red, indicating lowered estimates of safety. The size, i.e., timeliness, of one "blob," "CCS at source," has decreased because of new concerns about whether access to large-enough CO2 storage sites can be developed in time [Scott et al., 2015]. None of the changes in assessment were toward higher effectiveness, higher affordability, higher safety, or greater timeliness. The fact that the majority of changes with respect to the original Royal Society diagram have occurred along the vertical "effectiveness" axis is likely due to the timing of the studies: many initial studies of the Priority Program had a predominantly natural sciences focus, since they provided scenario results for other disciplines in their assessment of CE. Research on other dimensions, such as the cost of CE measures (the affordability axis), often started later and is still in progress; some of it is not yet published.
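For readers who want to reproduce the structure of such a four-dimension "blob" chart, here is a minimal matplotlib sketch; the method names and all values are invented placeholders, not the Royal Society's or the Priority Program's assessments:

```python
import matplotlib.pyplot as plt

# Placeholder values only; not actual assessments.
methods = ["Afforestation", "Ocean alkalinization", "Stratospheric aerosol"]
affordability = [0.7, 0.3, 0.8]   # x axis: 0 (costly) .. 1 (cheap)
effectiveness = [0.3, 0.5, 0.9]   # y axis: 0 (low) .. 1 (high)
timeliness = [900, 400, 600]      # marker area: how quickly deployable
safety = [0.6, 0.5, 0.2]          # color: 0 (unsafe, red) .. 1 (safe, green)

fig, ax = plt.subplots()
blobs = ax.scatter(affordability, effectiveness, s=timeliness, c=safety,
                   cmap="RdYlGn", vmin=0, vmax=1, alpha=0.7, edgecolors="k")
for name, xpos, ypos in zip(methods, affordability, effectiveness):
    ax.annotate(name, (xpos, ypos), textcoords="offset points", xytext=(8, 8))
ax.set_xlabel("Affordability")
ax.set_ylabel("Effectiveness")
fig.colorbar(blobs, ax=ax, label="Safety")
plt.show()
```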

5. Current Trends in CE Research

The intensive research activities and results of the last few years have begun to change scientists' views on CE. Careful and critical scientific assessment of the sometimes visionary technological possibilities for a targeted manipulation of the earth system has reduced enthusiasm for CE as a potential tool to combat undesirable climate effects of the fossil fuel-based world economy.

At the same time, the Fifth Assessment Report (AR5) of the IPCC concluded that keeping global warming below 2°C within this century would most likely require substantial negative emissions in the second half, or toward the end, of this century. Creating negative emissions is more or less equivalent to using CDR technologies. Since all states at the 21st Conference of the Parties (COP21) of the United Nations Framework Convention on Climate Change in Paris at the end of 2015 agreed to try to keep global warming well below 2°C, CDR has essentially been added to the policy agenda alongside measures that aim to reduce CO2 emissions as far as possible and even move toward negative emissions. Current scenarios that meet these ambitious climate goals require negative emissions particularly in the electricity sector and in different land use practices, such as forestry. Interestingly, the Paris agreement has been carefully crafted without CDR being mentioned. SRM, on the other hand, has not entered the debate to a similar degree, as the policy focus is still predominantly on controlling carbon emissions and maintaining or enhancing the "natural" carbon sinks rather than on controlling temperature directly.

In that sense, 10 years after Crutzen's [2006] paper, which concentrated on SRM as the only CE option to complement mitigation in case of a "climate emergency," CDR is now getting increasing attention as a means of complementing insufficient mitigation efforts. This is especially the case since AR5 contains many scenarios with negative emissions. Despite this, AR5 does not explain in detail how the negative emissions could be achieved in practice. The feasibility and the economic cost of different CDR measures are still poorly known. Most importantly, even for those CDR technologies whose practical functioning is better known, it is not clear how they can be scaled up to remove the desired quantities of CO2 from the atmosphere. These uncertainties concern the quantities of material inputs and natural resources, such as land or water, that would need to be devoted to CDR activities. Even methods perceived as "green," such as afforestation or bioenergy, may not appear green once scaled up to have a substantial climate impact [e.g., Heck et al., 2016a, 2016b]. Uncertainties also exist with respect to potential unintended ecological side effects of globally unprecedented CDR activities. The combination of economic and ecological challenges for large-scale CDR activities is still not sufficiently investigated. Finally, societal and political support is currently not evident: using the oceans as carbon sinks is not legally allowed, underground storage of CO2 onshore or in submarine reservoirs is not supported politically, and it is not economically viable at current explicit or implicit carbon prices. A consensus about the necessity or desirability of a large-scale, possibly global, manipulation of terrestrial or marine ecosystems is not in sight. The planned IPCC special report on 1.5°C warming will likely discuss negative emission strategies in greater detail, and ongoing research on CDR, such as that performed as part of the Priority Program, is expected to be highly relevant in this context.

The second 3-year phase of the Priority Program, which is starting now, will pay specific attention to these new directions in CE research. Although the proposals for the approved 10 projects (six of which are extensions of first-phase projects) had to be submitted just before the Paris COP21 meeting, they placed a larger emphasis on negative emission technologies relative to SRM than in the first phase.

While the calls for proposals for the first and second phases of the Priority Program were virtually identical and equally open to proposals addressing CDR and SRM, the somewhat larger emphasis on CDR in the second phase reflects the scientific interest in the bottom–up process of individual scientists submitting proposals addressing research questions they find interesting and challenging. These individual decisions may also be influenced by the current policy dynamic with its focus on negative emissions and CDR, but they are not imposed by the Priority Program itself. The emphasis on CDR is, however, not exclusive. One new project on RM will, for example, investigate a relatively new proposal of altering polar-winter cirrus clouds to affect long-wave radiation, which could turn out to be an RM option yielding climatic effects more similar to those of CO2 reduction than previously investigated SRM approaches. No field experiments will be performed within the Priority Program, although we envisage that small-scale feasibility studies and full life-cycle assessments may be addressed by possible follow-on projects funded by different schemes.

The coordination project of the Priority Program has successfully applied for flexible funds to carry out a few thematic workshops, which can be used to react to new developments, to help bring together the participating projects and the international scientific community, and to bring together scientists and stakeholders. The first such workshop, entitled "On the 1.5°C target and climate engineering," has just taken place in Kiel with some 80 scientists and stakeholders (http://www.spp-climate-engineering.de/SPP1689WorkshopKielNov16.html). Through such activities, the Priority Program aims to move from scientifically driven fundamental research toward answering policy-relevant research questions. Special research foci currently discussed by several projects of the Program are the investigation of trade-offs between different CE schemes and mitigation, and the development of appropriate indicators, metrics, and decision analysis frameworks (see the article by Oschlies et al., 2016, in this special issue). Thanks to its interdisciplinary culture and its ambition for transparent research, the Priority Program is viewed as a promising tool to engage the scientific disciplines relevant for a comprehensive assessment of CE and to engage constructively in discussion with stakeholders and policymakers.

© 2016 The Authors.
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
Earth’s Future 10.1002/2016EF000446
Accessed at http://agupubs.onlinelibrary.wiley.com/hub/issue/10.1002/(ISSN)2328-4277.GEOENGIN1/ on February 7, 2017.

.

Current World Population 7.4 billion

[Chart: how world population has changed throughout history]

© Copyright Worldometers.info – All rights reserved

Current World Population 7,484,849,144
Births today 174,625
Deaths today 72,186
Population Growth today 102,439
Births this year 17,912,372
Deaths this year 7,404,557
Population Growth this year 10,507,816


Top 20 Largest Countries by Population (live)

1. China 1,386,053,011
2. India 1,336,703,891
3. U.S.A. 325,604,466
4. Indonesia 262,427,403
5. Brazil 210,624,560
6. Pakistan 195,292,027
7. Nigeria 190,035,555
8. Bangladesh 164,119,024
9. Russia 143,398,882
10. Mexico 129,634,554
11. Japan 126,147,728
12. Ethiopia 103,420,080
13. Philippines 103,224,393
14. Vietnam 95,056,018
15. Egypt 94,536,306
16. D.R. Congo 81,305,269
17. Germany 80,653,149
18. Iran 80,612,072
19. Turkey 80,123,592
20. Thailand 68,241,909


.

The Looming Coffee Shortage

Coffee prices surge as severe drought and floods hit world’s largest coffee producers

Due to the poor global harvest of robusta beans, coffee prices are rising. (Courtesy of Birch Coffee)

By Emel Akan, Epoch Times | January 26, 2017, 4:52 am | Last updated: January 30, 2017, 1:42 pm

This year will be the third consecutive year of supply shortage in the global coffee market, according to the International Coffee Organization (ICO).

Due to a poor global harvest of robusta beans, coffee prices are rising. Bad weather caused by El Niño hurt robusta bean production by the world’s top two producers, Brazil and Vietnam.

“Both countries had a very poor robusta coffee crop last year due to drought,” said Shawn Hackett, CEO of Hackett Financial Advisors, a commodity broker.

And recently, excessive rainfall in Vietnam disrupted the harvest of this year’s crop, he said.

The price for Robusta coffee in the London futures market surged nearly 60 percent in the last year and reached a five-year high in January. That move helped drive the price of Arabica coffee in the New York futures market up 30 percent.

But those price increases may not be entirely justified, says one of the world’s major coffee roasting companies.

“Markets do not always follow the fundamentals, because there is so much speculation, so much volatility on the exchange,” said Andrea Illy, president and CEO of Italian coffee company illycaffè.

According to Illy, the ICO market report is not as alarming as it may seem at first glance because it is based on potentially manipulated official data coming from the coffee producing countries.

“Unfortunately, these official statistics are no longer reliable. … The governments tend to be a little biased in the data they provide, because they want to influence the prices,” Illy said.

According to ICO, the total output for Arabica and Robusta beans will be 151.6 million bags for the current production year, which began in October 2016. The total global consumption, however, will be at 155.1 million bags, hence creating a supply shortage of 3.5 million bags of coffee.
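
The reported deficit is straightforward arithmetic on the ICO figures (a minimal sketch; bags here are assumed to be the ICO’s standard 60-kg bags, which the article does not spell out):

production_mbags = 151.6   # total Arabica + Robusta output, millions of bags
consumption_mbags = 155.1  # total global consumption, millions of bags
print(round(consumption_mbags - production_mbags, 1))  # 3.5 million-bag shortage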

Though production of Arabica beans will rise to record levels in 2016–2017, a 6 percent drop in Robusta bean production will offset the surge in Arabica output, stated the ICO report.

Illy said that if the report reflected accurate production data, which he believes is higher than the official figures, it would reveal a balanced market or even a supply surplus of beans.

Biggest Threat

Despite that short-term balance, Illy said there is a real threat to coffee production—climate change.

Conditions like droughts and excessive rains are already hurting the coffee crop, he said. “Climate change is accelerating dramatically. It is not only a threat but a reality.”

Meanwhile, coffee consumption is increasing thanks to the growing popularity of coffee in traditionally non-coffee drinking countries, like China and India.

In order to meet the rising demand, current production levels need to be doubled by 2050, according to Illy.

“That means we need to increase productivity enormously,” he said.

Robusta Versus Arabica

Robusta beans are lower-quality beans and are higher in caffeine and in acidity compared to Arabica beans. They are primarily used in instant coffees or blended with Arabica beans.

“Roasters will use a blend of Arabica and Robusta because the Robusta will have a heavier crema and more meaty flavors. … The other reason one would blend the two coffees is because of the drastic difference in price between the two types of coffee,” said Paul Schlader, co-owner and head roaster at Birch Coffee in New York.


If you are using Robusta along with Arabica coffees, you can reduce your costs significantly, he said.

According to Illy, the surge in coffee prices is affecting the “cheap coffees” that mainly rely on the Robusta beans and the “mass-market coffees” that blend both beans.

“It is odd because normally the market maker is Arabica. In this case, it is the opposite. … Arabica is not only the most expensive but is also most abundant. It represents 60 percent of the coffee production,” Illy said.

Higher Robusta prices led coffee roasters to switch to low-quality Arabica beans, driving up the demand and the price for Arabica beans.

Premium coffee brands like Illy use 100 percent high-quality Arabica beans and have not yet experienced a significant cost increase.

Andrea Illy, president and CEO of illycaffè S.p.A. (Courtesy of Illy)

“We already pay a premium, which is an average of 30 percent on top of the market prices,” Illy said.

Boutique roasters like Birch also do not use Robusta beans. Hence they have not been adversely affected by the sharp rise in coffee prices.

Many high-end roasters have long-term relationships with producers, or are insulated from short-term price fluctuations by previous purchase contracts.

Impact on the Retail Market

U.S. companies have started responding to the poor harvest and surges in coffee costs.

The J.M. Smucker Co. (ticker symbol: SJM) announced in January that it would raise the prices of its coffee products by about 6 percent in U.S. retail stores. The company owns coffee brands including Dunkin’ Donuts, Folgers, Millstone, and Cafe Bustelo.

Starbucks (SBUX) raised its prices twice last year.

Coffee is the second most traded commodity in the world, after crude oil.

Brazil is the biggest coffee producer, producing one-third of the world’s coffee. And Europe is the largest importer of coffee, accounting for one-third of the world’s coffee consumption.

The Epoch Times Copyright © 2000-2017
Accessed at http://www.theepochtimes.com/n3/2213928-the-looming-coffee-shortage-2/ on February 15, 2017.

.

North Korea test-fires missile, apparently challenging Trump

Wikipedia

Missile launch on 12 Feb 2017 North Korea


By ERIC TALMADGE, Associated Press, Sunday, February 12th 2017, PYONGYANG, North Korea (AP) —

In an implicit challenge to President Donald Trump, North Korea appeared to fire a ballistic missile early Sunday in what would be its first such test of the year.

After receiving word of the launch, Trump stood at his south Florida estate with Japanese Prime Minister Shinzo Abe, who called the move “intolerable.”

There was no immediate confirmation on the launch from the North, which had warned recently that it was ready to test its first intercontinental ballistic missile. The U.S. Strategic Command, however, said it detected and tracked what it assessed to be a medium- or intermediate-range missile.

North Korean media are often slow to announce such launches, if they announce them at all. As of Sunday evening, there had been no official announcement and most North Koreans went about their day with no inkling that the launch was major international news.

The reports of the launch came as Trump was hosting Abe and just days before the North is to mark the birthday of leader Kim Jong Un’s late father, Kim Jong Il.

Appearing with Trump at a news conference at Trump’s estate, Abe condemned the missile launch as “absolutely intolerable.”

Abe read a brief statement in which he called on the North to comply fully with relevant U.N. Security Council resolutions. He said Trump had assured him of U.S. support and that Trump’s presence showed the president’s determination and commitment.

Trump followed Abe with even fewer words, saying in part: “I just want everybody to understand and fully know that the United States of America stands behind Japan, its great ally, 100 percent.”

Stephen Miller, Trump’s chief policy adviser, said Trump and Abe had displayed “an important show of solidarity” between their nations.

“The message we’re sending to the world right now is a message of strength and solidarity; we stand with Japan and we stand with our allies in the region to address the North Korean menace,” Miller said during an interview Sunday with ABC’s “This Week.”

South Korea’s Joint Chiefs of Staff said in a statement that the missile was fired from around Banghyon, North Pyongan Province, which is where South Korean officials have said the North test-launched its powerful midrange Musudan missile on October 15 and 20, 2016.

The military in Seoul said that the missile flew about 500 kilometers (310 miles). South Korea’s Yonhap News Agency reported that while determinations were still being made, it was not believed to be an intercontinental ballistic missile.

The missile splashed down into the sea between the Korean Peninsula and Japan, according to the U.S. Strategic Command. Japanese Chief Cabinet Secretary Yoshihide Suga told reporters that the missile did not hit Japanese territorial seas.

The North conducted two nuclear tests and a slew of rocket launches last year in continued efforts to expand its nuclear weapons and missile programs. Kim Jong Un said in his New Year’s address that the country had reached the final stages of readiness to test an ICBM, which would be a major step forward in its efforts to build a credible nuclear threat to the United States.

Though Pyongyang has been relatively quiet about the transfer of power to the Trump administration, its state media has repeatedly called for Washington to abandon its “hostile policy” and vowed to continue its nuclear and missile development programs until the U.S. changes its diplomatic approach.

Just days ago, it also reaffirmed its plan to conduct more space launches, which it staunchly defends but which have been criticized because they involve dual-use technology that can be transferred to improve missiles.

“Our country has clearly expressed its standpoint, that we will continue to build up our capacity for self-defense, with nuclear forces and a pre-emptive strike capability as the main points, as long as our enemies continue sanctions to suppress us,” Pyongyang student Kim Guk Bom said Sunday. “We will defend the peace and security of our country at any cost, with our own effort, and we will contribute to global peace and stability.”

Kim Dong-yeop, an analyst at the Institute for Far Eastern Studies in Seoul, said the missile could be a Musudan or a similar rocket designed to test engines for an intercontinental ballistic missile that could hit the U.S. mainland. Analysts are divided, however, over how close the North is to having a reliable long-range rocket that could be coupled with a nuclear warhead capable of striking U.S. targets.

South Korean Prime Minister Hwang Kyo-ahn, who is also the acting president, said his country would punish North Korea for the missile launch. The Foreign Ministry said South Korea would continue to work with allies, including the United States, Japan and the European Union, to ensure a thorough implementation of sanctions against the North and make the country realize that it will “never be able to survive” without discarding all of its nuclear and missile programs.

___

Associated Press writers Kim Tong-Hyung in Seoul, South Korea, and Jill Colvin in Palm Beach, Florida, contributed to this report.
Accessed at http://wjla.com/news/nation-world/north-korea-test-fires-missile-apparently-challenging-trump-02-12-2017 on February 12, 2017.

.

Russian spy ship spotted patrolling just 70 miles off the coast of Delaware

The Vishnya-class spy boat was also spotted near the U.S. nuclear missile submarine base in Kings Bay, Georgia, in 2014. (Wikipedia)

  • Russian Vishnya-class Viktor Leonov has been operating beyond U.S. territorial waters, 70 miles off the coast of Delaware
  • Officials say they are keeping a close eye on the spy ship which is armed with surface-to-air missiles but say it’s ‘not a huge concern’
  • The SSV-175 boat, measuring 300 feet long and 47.5 feet wide, is a Russian intelligence-gathering ship
  • It was last spotted off the coast of Florida in 2015, and before then near the U.S. Navy ballistic missile submarine base in Kings Bay, Georgia
  • It is the first such move by Russian military under Trump’s presidency
By Hannah Parry For Dailymail.com, Published: 12:13 EST, 14 February 2017 | Updated: 16:13 EST, 14 February 2017

A Russian intelligence-gathering ship has been spotted roaming the waters off the East Coast.

The SSV-175 Viktor Leonov ship was 70 miles off the coast of Delaware – in international waters – heading north, officials told Fox News.

Armed with surface-to-air missiles, the ship is capable of intercepting communications and can measure U.S. Navy sonar capability, an official said.

‘It’s not a huge concern, but we are keeping our eyes on it,’ they added.

It was not the first such move by Russian military under Trump’s presidency.

Four Russian military aircraft conducted low passes against a U.S. destroyer in the Black Sea just days before the spy ship was seen.

The sighting came as news broke that White House national security adviser Michael Flynn had been forced to resign amid controversy over his Russia contacts.

The Viktor Leonov, which measures 300 feet long and 47.5 feet wide, has a crew of 200 sailors and carries high-tech electronic surveillance equipment and weaponry, including AK-630 rapid-fire cannons and surface-to-air missiles.

It has been spotted loitering off the East Coast on a number of occasions in recent years.

The Vishnya or Meridian-class intelligence ship patrolled near the U.S. nuclear missile submarine base in Kings Bay, Georgia, in 2014 in what the Department of Defense suspect may have been part of an intelligence-gathering operation.

THE RUSSIAN SPY VESSEL VIKTOR LEONOV

The Vishnya-class intelligence-gathering ship went into service in the Black Sea in 1988 before being transferred seven years later to the northern fleet.

It was named after Second World War Soviet sailor Viktor Leonov.

The ship measures 300 feet long and 47.5 feet wide.

It has a crew of 200 sailors and carries high-tech electronic surveillance equipment and weaponry, including AK-630 rapid-fire cannons and surface-to-air missiles.

In a throwback to the Cold War, the spy ship also caused a stir after unexpectedly docking in Havana on the eve of historic talks between the U.S. and Cuba the following year.

There was nothing stealthy about the arrival of the Leonov, which was moored to a pier in Old Havana where cruise ships often dock.

But the visit was not officially announced by Cuban authorities.

The timing also raised eyebrows as it came on the eve of historic U.S-Cuba talks aimed at normalizing diplomatic relations.

U.S. officials in Washington played down the presence of the Russian vessel, saying it was perfectly legal and not at all out of the ordinary.

‘It’s not unprecedented. It’s not unusual. It’s not alarming,’ a defense official told AFP news agency.

The ship went into service in the Black Sea in 1988 before it was transferred seven years later to the northern fleet, according to Russian media.

It was named after Viktor Leonov – a Soviet sailor in the Second World War who was awarded two Hero of the Soviet Union medals.

The vessel previously docked in Havana in February and March 2014, staying there for a few days.

At the time, neither Cuba nor Russia acknowledged or explained the presence of the spy ship in Havana.

The vessel has also been detected in the vicinity of the U.S. Naval Station in Mayport, Florida.

During the Cold War, Russian intelligence gathering ships routinely parked off U.S. submarine bases along the East Coast.


Published by Associated Newspapers Ltd
Part of the Daily Mail, The Mail on Sunday & Metro Media Group
© Associated Newspapers Ltd
Accessed at http://www.dailymail.co.uk/news/article-4224208/Russian-spy-ship-spotted-coast-Delaware.html on February 15, 2017.

.

Biosecurity: Superbug

Health workers trying to control rare strep outbreak in homeless population

Age of Brother Francis Shelter Guests

© 2017 Catholic Social Services – Alaska

By Anne Hillman, Alaska Public Media – February 13, 2017

There’s an outbreak of a newly identified, rare strain of Group A Streptococcus bacteria in Anchorage that’s mostly impacting people who are experiencing homelessness. Public health workers will be visiting Brother Francis Shelter, Beans’ Cafe, and other services this week to distribute antibiotics and antiseptics to try to stop the spread of the disease.

State epidemiologist Joe McLaughlin said the bacteria is fairly common. Most healthy people don’t show any symptoms or just have mild skin or throat infections. But if a person’s immune system is already compromised, then the bacteria can enter into a small cut and lead to major infections, toxic shock syndrome, or amputations.

“The homeless population is more at risk in general for these types of invasive diseases because they’re at higher likelihood to have other chronic conditions,” he explained during a phone interview. “They might also have decreased access to care and decreased opportunities for self-care and personal hygiene.”

The outbreak started in Fairbanks over the summer then died down. It spread to Anchorage in the fall. There have been seven to 10 severe cases per month since October. Ninety percent of the patients were homeless. Four people have died.

Epidemic Intelligence Service Officer Emily Mosites, who is with the Centers for Disease Control and Prevention in Anchorage, said this outbreak has been hard to stop. The bacteria spreads through human contact, like coughing near someone or touching, and it’s impacting a very mobile population. But she stressed that most people should not worry.

“The main thing to keep in mind here is the risk in the general population is very low,” she said. “Right now we’re still seeing it in the homeless population, and we’re taking as many steps as we can to try to control it there.”

The newest step is going to different service providers this week to hand out single-dose antibiotics and antiseptic washes, to kill the bacteria before it causes more severe infections.

Mosites said the epidemiologists have been working with the clinic at Brother Francis Shelter since November to identify and treat cases early on, and to contact people who could be at risk for infection.

© Alaska Public Media 2016. All rights reserved.
Accessed at http://www.alaskapublic.org/2017/02/13/health-workers-trying-to-control-rare-strep-outbreak-in-homeless-population/ on February 15, 2017.

.

‘Noxious Weed’ May Provide New Way to Fight Superbugs

Brazilian Peppertree Schinus terebinthifolius

By Maggie Fox | First published Feb 10 2017, 1:04 pm ET

A noxious weed that plagues homeowners across Florida may hold the secret to a new way to fight some antibiotic-resistant superbugs, researchers reported Friday.

A compound made by the red berries of the Brazilian peppertree disarms methicillin-resistant Staphylococcus aureus (MRSA), the team at Emory University reports.

Schinus terebinthifolia Raddi is classified as a Category I pest plant by the Florida Exotic Pest Plant Council. (Scientific Reports)

What’s unusual is the berries don’t kill the bacteria, but simply stop them from doing harm, says Cassandra Quave, who studies ethnobotany and dermatology at Emory University in Atlanta.

“Traditional healers in the Amazon have used the Brazilian peppertree for hundreds of years to treat infections of the skin and soft tissues,” said Quave, who reported her team’s findings in the journal Scientific Reports.

Her lab broke the berry extract down into its component compounds and tested them on bacteria, on human skin cells, and in animals.

It works in an unexpected way: as a so-called quorum quencher.

“It’s stopping communication among the bacteria,” Quave told NBC News.

“It kind of tricks them into believing they are alone. When they are alone, they behave differently than when they are in a group.”


MRSA bacteria can live harmlessly in the body, and the Centers for Disease Control and Prevention estimates that 2 percent of all healthy people just carry it in or on their bodies. But when they cause an infection, they do all sorts of nasty things — releasing toxins, blasting open red blood cells and tearing apart skin cells.

“They are used by staph to better invade the host and disseminate in the host,” Quave said.

The little red berries of the Brazilian peppertree produce a group of compounds that stops this.

“This remedy, which has been used for hundreds of years … works not by killing staph but by basically disarming it,” Quave said.

“This could potentially open up new doors in the way that we treat staph infections in the future.”

It’s the second natural quorum quencher that Quave’s team has found. In 2015 they reported that the leaves of the European chestnut tree produce a different group of compounds that similarly disarm staph bacteria.

The Brazilian peppertree is often also called the Florida holly or broad leaf peppertree. Its scientific name is Schinus terebinthifolia and it’s spread from its native South America to Florida, Alabama, Georgia, Texas and California.

“This is a noxious weed that is really hated in Florida,” Quave said. She finds it ironic that a weed that flourishes so well and is so hated that homeowners deploy Roundup and weed-whackers to kill it could have lifesaving properties.

Years of testing remain before the compounds in the berries could be developed into treatments.

“I don’t want people to go out and try putting random berries on their skin,” Quave said.

“Natural is not always safe. There are lots of things in the natural world that can harm you if you use them improperly.”

Nature is a rich source of antibiotics and other drugs. Penicillin, the original antibiotic, is derived from mold. Researchers are looking for new drugs in dirt, tropical plants and the sea.

President Barack Obama ordered his administration to hurry up and do something about drug-resistant bacteria. It’s not clear what the new Trump administration’s policies will be.

Infections caused by antibiotic-resistant bacteria kill 23,000 people every year, make 2 million more sick and cost $35 billion in productivity lost to sick days, the CDC says.

Accessed at http://www.nbcnews.com/health/health-news/noxious-weed-may-provide-new-way-fight-superbugs-n719421 on February 10, 2017.

 

.

Biosecurity: Poison

The cyanide gland of the greenhouse millipede, Oxidus gracilis (Polydesmida: Paradoxosomatidae)


Kirsten Pearsons, István Mikó, John F Tooker
Research Ideas and Outcomes 3: e12249 (14 Feb 2017)
https://doi.org/10.3897/rio.3.e12249

Cyanide gland pore

Keywords: Oxidus gracilis, cyanide gland, Diplopoda, Polydesmida, CLSM

Abstract

Although the greenhouse millipede, Oxidus gracilis, is distributed worldwide, there is little work using modern tools to explore its morphology. We used confocal laser scanning microscopy (CLSM) to image the cyanide glands of Oxidus gracilis. Glands from adult millipedes were dissected out before imaging, and we were able to image glands of juveniles through the cuticle due to the strong autofluorescence of the gland extract. We can report that CLSM is a promising technique to non-invasively investigate the development and mechanisms of polydesmid cyanide glands.

Background

The following work is the result of the “Know Your Insect” graduate course in the Department of Entomology at the Pennsylvania State University, taught during the fall semester of 2016. In short, the course provides the opportunity for a small group of students to learn more about the insects they are studying. Each student focuses on an anatomical feature of interest and leads the class in a lecture, discussion, and live dissection. The students also have the opportunity to image their anatomical feature of interest using confocal laser scanning microscopy (CLSM), relying on the autofluorescence of arthropod tissue (Haug et al. 2011).

Through work on decomposer communities in Pennsylvanian agricultural fields, KP came across the greenhouse millipede, Oxidus gracilis. O. gracilis was the only millipede species she collected in corn and soybean fields in the summer of 2016, and they dominated her samples (over 35% of macroinvertebrates collected in pitfall traps). Because O. gracilis is an invasive species, KP was curious to learn more about their invasion, the morphological and ecological mechanisms that allow them to reach such high densities, and implications of their presence for decomposition and nutrient cycling.

Overall KP’s case was unique for this course, as her “insect” is not an insect at all but rather a diplopod; while working with a diplopod provided some challenges, we encountered some exciting surprises.

New information

As with many other arthropod groups, interest in studying millipede morphology has varied over time. Overall, diplopods have received little attention compared to other arthropod groups (Sierwald and Bond 2007), and, somewhat surprisingly given its global distribution, O. gracilis is no exception. Research on the cyanide glands of O. gracilis has focused on the exudate chemistry of adults (Duffey et al. 1977, Taira et al. 2003). We could not find any morphological studies on the cyanide glands of O. gracilis, and while other polydesmid cyanide glands have been dissected out and studied (Eisner et al. 1963, Marek and Moore 2015, Duffey 1981), this appears to be the first use of confocal laser scanning microscopy (CLSM) to image polydesmid cyanide glands.

We are also excited to report that, at least up to the 5th larval stadium, the cuticle is transparent enough to image the highly auto-fluorescing storage chamber without dissection. We believe that non-invasive imaging of the glands has multiple potential applications; it could be used to do live imaging of when glands are activated, to observe the growth and development of the glands across stadia, and to determine the amount of defensive secretions released under different stressors.

During dissection of the adult specimens, we noticed heavy infestations of nematodes. It would be interesting to know whether these nematodes act as a significant top-down control on O. gracilis populations, although the high millipede densities at our sample site lead us to hypothesize that they exert little control.

Overview of the cyanide glands in millipedes

Cyanide glands (=repugnatorial glands, Minelli (2015)) are found in the order Polydesmida and are just one of three defensive gland types found in diplopods. Paul Marek and Tom McCoy (Marek and McCoy 2016) have produced some light microscopy images of a beautifully dissected cyanide gland from Pachydesmus crassicutis, which provides a good representation of polydesmid gland morphology.

Cyanide glands are usually present in just over half the body segments – 5, 7, 9, 10, 12, 13, 15-19 – and are located in the paired, flange-like pleural keels (Shear 2015). The glands consist of a large reservoir containing the precursor compound (e.g. mandelonitrile) connected to a smaller reaction chamber via a muscularized valve. It is in this smaller reaction chamber where the precursor compound is mixed with enzymes which break down the mandelonitrile into hydrogen cyanide and benzaldehyde (Duffey 1981). The reaction products are released through ozopores in the cuticle surface. In addition to hydrogen cyanide and benzaldehyde, O. gracilis produces various phenolic compounds which may serve defensive functions against specific predators. For a more thorough review of the biochemistry, physiology, and ecology of millipede chemical defenses, see Shear (2015).
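
In symbols, the reaction-chamber step described above is the enzyme-catalyzed dissociation of mandelonitrile into benzaldehyde and hydrogen cyanide (a standard textbook rendering; the paper itself gives the reaction only in prose):

\[
\underbrace{\mathrm{C_6H_5CH(OH)CN}}_{\text{mandelonitrile}}
\;\xrightarrow{\ \text{enzymes}\ }\;
\underbrace{\mathrm{C_6H_5CHO}}_{\text{benzaldehyde}}
\;+\;
\underbrace{\mathrm{HCN}}_{\text{hydrogen cyanide}}
\]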

Methodology

Field collection and rearing of Oxidus gracilis

We collected adult Oxidus gracilis millipedes from a corn field at the Russell E. Larson Agricultural Research Center at Rock Springs, in Centre County Pennsylvania, USA, during the summer and fall of 2016.

After summer storms, moist soil and high humidity made for easy surface collection of adult Oxidus gracilis. We collected approximately 100 adults on 28 July 2016; once in the lab, we kept the millipedes in a clear plastic container with a 1-2 cm layer of field soil and a handful of moistened straw. We maintained the colony of 100 adults for four weeks; after three weeks (18 August 2016) we observed clusters of eggs and second instar larvae. However, it was difficult to maintain the colony at the proper humidity – too dry and the colony desiccated, too wet and fungus overgrew the colony. The majority of the colony was lost by mid-September; interestingly, this coincided with the time when adult field populations began to decline. In their hypothesized native range of Japan, Oxidus gracilis has been observed to have a similar life history – millipedes mature in summer, lay eggs in fall, and overwinter as late larval stages (Murakami 1962, Murakami 1966). It is probable that their annual life cycle means field-collected adults will die in the fall, regardless of rearing conditions. In early October, although populations were much sparser, we collected additional adults and juveniles for microscopy and to re-start the lab colony.

Specimen preparation and dissection

Adult specimens collected in September were fixed in 80% ethanol. Upon dehydration in ethanol, the cyanide gland became impossible for us to remove, likely because the gland tissue collapsed. Because of this observation, adult specimens collected in October were kept alive until dissection. At the time of dissection, we placed the millipedes in a petri dish containing 0.1 M dibasic phosphate buffer (pH 7.4) and quickly severed the head from the body using dissecting scissors. The timing of this process seems to be critical; if the millipede is disturbed too much before the dissection, the glands’ storage chambers may be emptied, which makes it hard to see the clear gland tissue against the fat body.

After decapitation, the body rings were carefully separated from each other and from the digestive system. Only the segments containing cyanide glands (rings 5, 7, 9, 10, 12, 13, and 15-19) were dissected further under light microscopy; before each ring dissection we checked for the gland ozopore on each keel to ensure we were dissecting rings with cyanide glands (Suppl. material 1).

For each ring dissection, the dorsal tergite and ventral sternite were bisected so each keel could be dissected independently. Care was taken to remove fat body and cuticle to reveal the storage and reaction chambers of each gland.

We used two fifth-stadium juvenile millipedes for additional imaging. Noticing how transparent the cuticle was, and knowing from the adult dissections how fluorescent the storage chambers were, we did not attempt to fully dissect out the glands. From one juvenile we isolated body rings with ozopores; we left the other juvenile largely intact, only severing the head to reduce movement.

Confocal Laser Scanning Microscopy

Specimens were examined with an Olympus FV10i Confocal Laser Scanning Microscope using two excitation wavelengths: 473 nm, and 559 nm. We detected autofluorescence using two channels with emission ranges of 490–590 nm (green pseudocolor), and 570–670 nm (red pseudocolor), respectively. We generated volume rendered micrographs and media files with ImageJ (Schneider et al. 2012) using maximum intensity projection.
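
For readers unfamiliar with the projection step, the sketch below illustrates what a maximum intensity projection does to a confocal z-stack: for each pixel it keeps the brightest voxel along the z axis. This is a minimal NumPy illustration of the general technique, not the authors’ actual ImageJ pipeline:

import numpy as np

def max_intensity_projection(z_stack: np.ndarray) -> np.ndarray:
    """Collapse a (z, y, x) confocal stack to a (y, x) image by taking,
    for every pixel, the maximum intensity along the z axis."""
    return z_stack.max(axis=0)

# Synthetic two-channel example mirroring the acquisition described above
rng = np.random.default_rng(0)
green = rng.random((40, 256, 256))  # e.g., 490-590 nm emission channel
red = rng.random((40, 256, 256))    # e.g., 570-670 nm emission channel
print(max_intensity_projection(green).shape)  # (256, 256)
print(max_intensity_projection(red).shape)    # (256, 256)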

Observations on the cyanide glands using dissections and Confocal Laser Scanning Microscopy

Adults

Some glands were easily identified under a light microscope by a characteristic brownish-yellow droplet suspended within the storage chamber. Droplets like this have been observed across polydesmids and are presumed to be the precursor mandelonitrile (Duffey 1981); interestingly, it is still unknown exactly how these droplets are stabilized. The quick release required for successful defense limits the likelihood that the precursor is stored as either a glycoside or an emulsion (Duffey 1981); but without stabilization, the mandelonitrile could react, internally generating cyanide and damaging gland tissue. Possibly, the hydrophobicity of mandelonitrile is enough to stabilize the droplet in an aqueous environment, or other compounds (benzoic acid, fatty acids, and fatty acid esters) may play a role in stabilization.

Most storage chambers were difficult to see or not found at all, likely due to destruction of the delicate storage chambers during dissection, or the millipedes evacuating their storage chambers upon disturbance.

We successfully dissected out a storage chamber from one cyanide gland, which we imaged on the CLSM using autofluorescence (Figs 1, 2). As expected, the millipede exoskeleton fluoresced brightly, and we were able to observe the thin wall of the storage chamber. We were initially surprised to see how brightly the gland extract fluoresced, because it is thought to be a mixture of only small molecules, while arthropod autofluorescence is usually attributed to large, complex molecules (Haug et al. 2011). It is possible that the strong autofluorescence of the gland extract is based on the aromaticity and cyanogenic structure of mandelonitrile and its derivatives (Duffey 1981).


Figure 1. CLSM volume rendered micrograph showing the cyanide gland of Oxidus gracilis (arrows point to the wall of the cyanide gland; ex = strongly autofluorescing gland extract).

Figure 2. CLSM volume rendered media file showing the cyanide gland of Oxidus gracilis (gland extract is the overexposed droplet).


Juveniles

We were unable to observe the cyanide glands from the dissected juvenile millipede; it is possible the gland structures are even more delicate in juveniles, and therefore more prone to tearing during dissection. However, because the cuticle is transparent in the 5th larval stadium, we successfully imaged the strongly fluorescing storage chambers through the cuticle of the intact juvenile millipede (Figs 3, 4). In one case, we were also able to observe the reaction chamber and valve (Fig. 4).

Figure 3. Top view of the juvenile millipede; the bright field in the lower flange is a gland storage chamber.



Figure 4. Top/rotated view of the juvenile millipede. SC = storage chamber, RC = reaction chamber, MV = muscularized valve connecting the two chambers.

Nematode infestation in adults

Upon dissection, we observed that some of the adult specimens contained numerous nematodes (Suppl. material 1). O. gracilis can succumb to nematode infection, but has been shown to survive low-level infections of an unidentified rhabditid nematode (Poinar and Thomas 1985).

Relevance to ongoing research

One of us (KP) is studying the effects of insecticides on macrodecomposer activity in corn and soybean fields in Pennsylvania. Before this field season, her advisor mentioned the prevalence of millipedes in some fields, and her fieldwork this summer confirmed this – over 35% of specimens in pitfall traps (in a maize and soy field at our University’s research farm) were millipedes (KP, pers. obs.). Of these, virtually all were Oxidus gracilis. With such a dominant millipede species, we were interested, although not entirely surprised, to discover that it is an invasive species.

Invasive decomposers are prevalent – invasive earthworms often dominate agricultural and forest decomposer communities throughout the United States (Baker et al. 2006). Researchers have documented the ecological changes associated with earthworm invasion fronts in forest systems – loss of the O horizon (organic litter layer), reduced soil carbon, and altered nutrient distribution in the soil horizon, all of which can dramatically disrupt nutrient cycling (Resner et al. 2015, Jennings and Watmough 2016). As invasive earthworms may be able to outcompete native millipede species (Snyder et al. 2013), we hypothesize that an invading millipede species, especially one that reaches extraordinary population densities, may influence invertebrate communities and soil structure and function much like invading earthworms do.

Decomposer food web

Of course, agricultural systems aren’t pristine – they’ve been invaded by both destructive species (slugs and other crop pests) and beneficial species (carabid beetles, earthworms). We are interested in how O. gracilis fits into this novel food web – how do they interact with pests, predators, and other decomposers? Some farmers worry that the millipedes may be damaging their crops, although O. gracilis refused to eat live plants in laboratory experiments (Causey 1943).

We chose to image the cyanide glands of Oxidus gracilis because the defenses of this invasive species potentially tie into their interactions with generalist predators found in central Pennsylvania. Around 90% of plant biomass is recycled through decomposers, meaning the bulk of plant-based energy travels to predators not through herbivores, but through decomposers (Allison 2006, Polis and Strong 1996). More and more, ecologists recognize the importance of how ‘brown food webs’ link to the more frequently studied ‘green food webs’ (Wolkovich et al. 2014). In an agricultural context, it is important to understand how generalist predators interact with decomposers; if generalist predators can depend on decomposers as alternative prey, then the decomposer community can play an indirect role in controlling pest populations (Symondson et al. 2000).

If decomposers are acceptable, but less desirable prey, they can sustain predator populations when pest pressures are low, but not interfere with control when pest populations increase. Unfortunately, it is difficult to predict how valuable decomposers are for pest management because interactions between generalist predators and decomposers are poorly documented. The generalist predator community in Pennsylvania field crops is dominated by carabid beetles and spiders. Some carabid beetles reject earthworms and isopods (Best and Beegle 1977), while other species readily accept earthworms or juvenile millipedes as alternative prey (Brunke et al. 2009, Symondson et al. 2000). Certain spiders have been shown to heavily rely on decomposers, namely collembolans (Agusti et al. 2003). For millipedes especially, predation has rarely been studied, and most references to predators are based on casual observation and anecdotes (Sierwald and Bond 2007). More studies on millipede-predator interactions, and the defensive mechanisms involved, can aid our understanding of the role of decomposers in pest management.

Accessed at http://riojournal.com/articles.php?id=12249 on February 15, 2017.

.

Biosecurity: Chimeras CRISPR-Cas9

Interspecies Chimerism with Mammalian Pluripotent Stem Cells

Human stem cells into both cattle and pig blastocysts

*Correspondence: belmonte@salk.edu
http://dx.doi.org/10.1016/j.cell.2016.12.036

Graphical Abstract

Highlights

  • Naive rat PSCs robustly contribute to live rat-mouse chimeras
  • A versatile CRISPR-Cas9 mediated interspecies blastocyst complementation system
  • Naive rodent PSCs show no chimeric contribution to postimplantation pig embryos
  • Chimerism is observed with some human iPSCs in postimplantation pig embryos

In Brief

Human pluripotent stem cells robustly engraft into both cattle and pig preimplantation blastocysts, but show limited chimeric contribution to postimplantation pig embryos.

SUMMARY

Interspecies blastocyst complementation enables organ-specific enrichment of xenogenic pluripotent stem cell (PSC) derivatives. Here, we establish a versatile blastocyst complementation platform based on CRISPR-Cas9-mediated zygote genome editing and show enrichment of rat PSC-derivatives in several tissues of gene-edited organogenesis-disabled mice. Besides gaining insights into species evolution, embryogenesis, and human disease, interspecies blastocyst complementation might allow human organ generation in animals whose organ size, anatomy, and physiology are closer to humans. To date, however, whether human PSCs (hPSCs) can contribute to chimera formation in non-rodent species remains unknown. We systematically evaluate the chimeric competency of several types of hPSCs using a more diversified clade of mammals, the ungulates.

We find that naive hPSCs robustly engraft in both pig and cattle pre-implantation blastocysts but show limited contribution to post-implantation pig embryos. Instead, an intermediate hPSC type exhibits a higher degree of chimerism and is able to generate differentiated progenies in post-implantation pig embryos.

INTRODUCTION

Embryonic pluripotency has been captured in vitro at a spectrum of different states, ranging from the naive state, which reflects unbiased developmental potential, to the primed state, in which cells are poised for lineage differentiation (Weinberger et al., 2016; Wu and Izpisua Belmonte, 2016). When attempting to introduce cultured pluripotent stem cells (PSCs) into a developing embryo of the same species, recent studies demonstrated that matching developmental timing is critical for successful chimera formation. For example, naive mouse embryonic stem cells (mESCs) contribute to chimera formation when injected into a blastocyst, whereas primed mouse epiblast stem cells (mEpiSCs) efficiently engraft into mouse gastrula-stage embryos, but not vice versa (Huang et al., 2012; Wu et al., 2015).

Live rodent interspecies chimeras have also been generated using naive PSCs (Isotani et al., 2011; Kobayashi et al., 2010; Xiang et al., 2008). However, it remains unclear whether naive PSCs can be used to generate chimeras between more distantly related species.

The successful derivation of human PSCs (hPSCs), including ESCs from pre-implantation human embryos (Reubinoff et al., 2000; Thomson et al., 1998), as well as the generation of induced pluripotent stem cells (iPSCs) from somatic cells through cellular reprogramming (Takahashi et al., 2007; Park et al., 2008; Wernig et al., 2007; Yu et al., 2007; Aasen et al., 2008), has revolutionized the way we study human development and is heralding a new age of regenerative medicine. Several lines of evidence indicate that conventional hPSCs are in the primed pluripotent state, similar to mEpiSCs (Tesar et al., 2007; Wu et al., 2015). A number of recent studies have also reported the generation of putative naive hPSCs that molecularly resemble mESCs (Gafni et al., 2013; Takashima et al., 2014; Theunissen et al., 2014). These naive hPSCs have already provided practical and experimental advantages, including high single-cell cloning efficiency and facile genome editing (Gafni et al., 2013). Despite these advances, it remains unclear how the putative higher developmental potential of naive hPSCs can be used to better understand human embryogenesis and to develop regenerative therapies for treating patients.

Cell 168, 473–486, January 26, 2017. © 2017 Elsevier Inc.

.

Biosecurity: Zika

Zika Virus Persists in Infants’ Brain After Birth

Zika Entry Candidate AXL Enriched in Neural Stem Cells

http://www.cell.com/cms/attachment/2051994628/2059552055/fx1.jpg

Medscape Medical News
Troy Brown, RN December 13, 2016

Zika virus can persist for much longer than expected in the brains of infants and fetuses and in the placentas of pregnant women, a new study from the Centers for Disease Control and Prevention (CDC) has found. The findings could explain how the virus causes catastrophic birth defects and pregnancy losses even in women who have been only mildly ill.

Julu Bhatnagar, PhD, lead of the molecular pathology team at the CDC’s Infectious Diseases Pathology Branch, and colleagues report their findings in an article published online December 13 and in the January 2017 issue of Emerging Infectious Diseases.

“Our findings show that Zika virus can continue to replicate in infants’ brains even after birth, and that the virus can persist in placentas for months — much longer than we expected,” Dr Bhatnagar said in a CDC news release. “We don’t know how long the virus can persist, but its persistence could have implications for babies born with microcephaly and for apparently healthy infants whose mothers had Zika during their pregnancies. More studies are needed to fully understand how the virus can affect babies.”

The study is the first to demonstrate the replication of Zika virus RNA in brain tissues of infants born with microcephaly who subsequently died and in placentas of women who experienced pregnancy losses.

The researchers used reverse transcription polymerase chain reaction (RT-PCR) testing and in situ hybridization to test tissues from 52 case patients with clinically and epidemiologically suspected Zika virus infection. Testing included placental and umbilical cord tissue and fetal tissues (from pregnancy losses) from 44 women suspected of having prenatal Zika virus infection, and brain (including cerebral cortex, pons, and medulla), kidney, liver, spleen, lung, and heart tissues from eight infants with microcephaly who died after birth. Of the 44 women, 22 experienced adverse pregnancy or birth outcomes (miscarriage, elective termination, stillbirth, or babies born with microcephaly) and 22 delivered infants who appeared healthy.

RT-PCR testing revealed Zika virus infection in brain tissues of the eight infants with microcephaly and in placental/fetal tissues from 24 of the women.

Sixteen of those women were among the 22 women who experienced an adverse pregnancy or birth outcome. All 16, as well as the mothers of the eight infants who died from microcephaly, were infected with Zika virus during the first trimester of pregnancy.

In contrast, 8 women whose placental tissue showed evidence of Zika infection but delivered healthy-appearing infants were infected with Zika virus during the third trimester of pregnancy. All their infants tested negative for the virus after birth.

The results show that infection during the first trimester of pregnancy is more harmful for the fetus and infant than infection during the third trimester.
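
To see how sharply the reported counts separate by trimester, consider a hypothetical significance check (not performed in the article) on the 24 tissue-positive women: 16 first-trimester infections, all with adverse outcomes, versus 8 third-trimester infections, all with healthy-appearing infants:

from scipy.stats import fisher_exact

# Hypothetical 2x2 check, not part of the article. Rows: trimester of
# infection (first, third); columns: outcome (adverse, healthy-appearing)
# among the 24 women with Zika-positive placental/fetal tissue.
table = [[16, 0],
         [0, 8]]
odds_ratio, p_value = fisher_exact(table)
print(p_value)  # on the order of 1e-6: a split this clean is very unlikely by chance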

The investigators used in situ hybridization to test for both viral genomic RNA and replicative RNA intermediates. They localized the replicative RNA in Hofbauer cells of the placenta and in neural cells and neurons in the brains of infants who had died. The authors suggest that, together with other data, “these findings suggest that Hofbauer cells, which have access to fetal blood vessels, may facilitate transfer or dissemination of the virus to the fetal brain.”

“[O]ur findings further support the linkage of Zika virus with microcephaly, suggest its association with adverse pregnancy outcomes, and demonstrate evidence of Zika virus replication and persistence in fetal brain and placenta,” the researchers write.

“Our molecular tests for tissues extend the timeframe to detect Zika virus,” Dr Bhatnagar said in the news release. “For women who contracted Zika virus during early pregnancy but were never diagnosed, these tests could help determine whether Zika virus may have caused their miscarriage, pregnancy loss, or adverse birth outcome.”

The authors have disclosed no relevant financial relationships.
Emerg Infect Dis. Published online December 13, 2016. Full text
All material on this website is protected by copyright, Copyright © 1994-2017 by WebMD LLC. This website also contains material copyrighted by 3rd parties.
Accessed at http://www.medscape.com/viewarticle/873262 on February 16, 2017.

.

Zika Can Persist in Semen for 3 Months


http://www.medscape.com/resource/zika-virus

Heather Boerner February 15, 2017 SEATTLE —

Zika RNA can linger in semen for up to 3 months, but clears from vaginal fluid almost immediately, according to interim results from the first prospective study of patients infected with the virus.

“These findings support the CDC recommendations for men to abstain from sex or use condoms for 6 months,” said investigator Gabriela Paz-Bailey, MD, PhD, from the Centers for Disease Control and Prevention.

Doctors should also ask pregnant women “about travel to areas where there is active transmission and offer testing,” she told Medscape Medical News.

The CDC issued guidance for physicians on Zika testing and prevention in July 2016, but the basis for that instruction was best guesses and individual case reports. Officials were “really operating in the dark,” Dr Paz-Bailey explained.

To determine the persistence of the Zika virus in bodily fluids, Dr Paz-Bailey and her colleagues identified 150 residents of Puerto Rico — 66 women and 84 men — using a pre-existing arboviral reporting system. The team followed the participants from the first report of symptoms or diagnosis, and collected saliva, urine, blood serum, semen, and vaginal secretions at baseline and at 2, 4, and 6 months.

Dr Paz-Bailey presented the results here at the Conference on Retroviruses and Opportunistic Infections 2017. A preliminary report of the data was published online in the New England Journal of Medicine to coincide with the presentation.

Viral Particles in Bodily Fluid

How long viral particles persisted varied by bodily fluid. Only one vaginal swab came back positive for Zika RNA during the 6-month study, putting vaginal fluid on par with saliva for Zika persistence.

However, half the semen samples tested positive for Zika particles at 1 month, and 5% tested positive at 81 days. In previous case reports, Zika has been documented in the semen of two men at 6 months.

There were also surprises in blood samples. Half showed the presence of Zika RNA in blood serum 2 weeks after the start of symptoms, and 5% showed that presence at 54 days.

In urine, however, the virus was less persistent. Zika particles were detected in half the samples at 1 week, and in 5% at 39 days.
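
Read together, each fluid gives two detection quantiles: the time by which only half of samples were still positive, and the time by which only 5% were. A back-of-envelope way to interpolate between them is to assume log-linear clearance; that assumption belongs to this sketch, not to the study:

import math

def fraction_positive(t_days, t50, t05):
    """Assumed log-linear clearance fitted through the two reported quantiles."""
    rate = (math.log(0.05) - math.log(0.5)) / (t05 - t50)  # per day, negative
    return min(1.0, 0.5 * math.exp(rate * (t_days - t50)))

# Reported quantiles (taking 1 month = 30 days, 2 weeks = 14, 1 week = 7)
for fluid, t50, t05 in [("semen", 30, 81), ("serum", 14, 54), ("urine", 7, 39)]:
    print(fluid, round(fraction_positive(90, t50, t05), 3))  # est. share positive at day 90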

“These findings suggest that serum may be better for diagnosing Zika virus,” said Dr Paz-Bailey.

This differs from previous findings, which suggested that Zika lingers longer in urine than in blood serum. But that difference could be related to the prospective nature of the study by the CDC team. Cross-sectional and incident studies “are not designed to look at duration,” Dr Paz-Bailey pointed out.

Still, the findings were a surprise to David Thomas, MD, from the Johns Hopkins University School of Medicine in Baltimore. Usually, it’s the urine where viruses persist the longest, he explained.

“I think that’s something that will need to be verified,” he told Medscape Medical News. “I don’t think that’s a critical part of the abstract, though. The CDC recommendations are continuing to be verified. I think that’s the take-home message.”

Persistence and Infectiousness

These data are important because they suggest how long a person can remain infectious. However, it’s too soon to know. After all, RNA fragments can persist for a long time without the capacity to infect another person, said Dr Paz-Bailey.

She said she expects that the final readout of the data — which the researchers hope will include 300 people — will provide evidence to guide clinical practice. Once they have that, Dr Paz-Bailey said she and the team will re-do the analysis to see if anything has changed.

“It’s taken many years of HIV research to determine what amount of virus is needed to result in transmission,” she told Medscape Medical News. “In Zika, there are still so many unknowns.”

This study was funded by the Centers for Disease Control and Prevention. Staffing support was provided by the Center for AIDS Research at Emory University. Dr Paz-Bailey and Dr Thomas have disclosed no relevant financial relationships.
Conference on Retroviruses and Opportunistic Infections (CROI) 2017: Poster 1055LB. Presented February 14, 2017.
All material on this website is protected by copyright, Copyright © 1994-2017 by WebMD LLC. This website also contains material copyrighted by 3rd parties.
Accessed at http://www.medscape.com/viewarticle/875832 on February 16, 2017.

.

Disease ‘superspreaders’ were driving cause of 2014 Ebola epidemic


The Ebola virus, isolated in November 2014 from patient blood samples obtained in Mali. The virus was isolated on Vero cells in a BSL-4 suite at Rocky Mountain Laboratories. Credit: NIAID

February 13, 2017

A new study about the overwhelming importance of “superspreaders” in some infectious disease epidemics has shown that in the catastrophic 2014-15 Ebola epidemic in West Africa, about 3 percent of the people infected were ultimately responsible for infecting 61 percent of all cases.
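
The 3-percent/61-percent figure is a statement about overdispersed transmission. Superspreading is often illustrated with a negative-binomial offspring distribution, in which a small dispersion parameter k concentrates transmission in a few individuals. The sketch below uses assumed values of R0 and k for illustration only (the study’s own statistical framework is not reproduced here); lowering k shows how the share attributable to the top 3% grows as transmission becomes more heterogeneous:

import numpy as np

rng = np.random.default_rng(42)
R0, k, n_cases = 1.5, 0.2, 100_000  # assumed reproduction number, dispersion, sample size
# Negative-binomial offspring counts with mean R0 and dispersion k
offspring = np.sort(rng.negative_binomial(k, k / (k + R0), size=n_cases))
top_share = offspring[int(0.97 * n_cases):].sum() / offspring.sum()
print(f"top 3% of cases cause {top_share:.0%} of transmissions")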

The issue of superspreaders is so significant, scientists say, that it’s important to put a better face on just who these people are. It might then be possible to better reach them with public health measures designed to control the spread of infectious disease during epidemics.

Findings were reported this week in Proceedings of the National Academy of Sciences.

The researchers concluded that Ebola superspreaders often fit into certain age groups and were based more in the community than in health care facilities. They also continued to spread the disease after many of the people first infected had been placed in care facilities, where transmission was much better controlled.

If superspreading had been completely controlled, almost two-thirds of the infections might have been prevented, scientists said in the study. The researchers also noted that their findings were conservative, since they focused only on people who had been buried safely.

This suggests that the role of superspreaders may have been even more profound than this research indicates.

The research was led by Princeton University, in collaboration with scientists from Oregon State University, the London School of Hygiene and Tropical Medicine, the International Federation of Red Cross and Red Crescent Societies, the Imperial College London, and the National Institutes of Health.

The concept of superspreaders is not new, researchers say, and it has evolved during the 2000s as scientists increasingly appreciate that not all individuals play an equal role in spreading an infectious disease.

Superspreaders, for instance, have also been implicated in the spread of severe acute respiratory syndrome, or SARS, in 2003; and the more recent Middle East respiratory syndrome in 2012.

But there’s less understanding of who these superspreaders are and how important they are.

“In the recent Ebola outbreak it’s now clear that superspreaders were an important component in driving the epidemic,” said Benjamin Dalziel, an assistant professor of population biology in the College of Science at Oregon State University, and co-author of the study.

“We now see the role of superspreaders as larger than initially suspected. There wasn’t a lot of transmission once people reached hospitals and care centers. Because case counts during the epidemic relied heavily on hospital data, those hospitalized cases tended to be the cases we ‘saw.’

“However, it was the cases you didn’t see that really drove the epidemic, particularly people who died at home, without making it to a care center. In our analysis we were able to see a web of transmission that would often track back to a community-based superspreader.”

Superspreading has already been cited in many first-hand narratives of Ebola transmission. This study, however, created a new statistical framework that allowed scientists to measure how important the phenomenon was in driving the epidemic. It also allowed them to measure how superspreading changed over time, as the epidemic progressed, and as control measures were implemented.

The outbreak size of the 2014 Ebola epidemic in West Africa was unprecedented, and early control measures failed. Scientists believe that a better understanding of superspreading might allow more targeted and effective interventions, instead of ones aimed at whole populations.

“As we learn more about these infection pathways, we should be better able to focus on the types of individual behavior and demographics that are at highest risk for becoming infected, and transmitting infection,” Dalziel said.

Researchers pointed out, for instance, that millions of dollars were spent on messaging strategies about Ebola prevention and control across entire countries. They suggest that messages tailored to individuals at higher risk and with certain types of behavior might have been more successful, and might have kept the epidemic from being so persistent.

More information: “Spatial and temporal dynamics of superspreading events in the 2014–2015 West Africa Ebola epidemic,” PNAS, www.pnas.org/cgi/doi/10.1073/pnas.1614595114
Journal reference: Proceedings of the National Academy of Sciences
Provided by: Oregon State University
© Medical Xpress 2011 – 2017, Science X network
Accessed at https://medicalxpress.com/news/2017-02-disease-superspreaders-ebola-epidemic.html on February 15, 2017.

 

Biosecurity: Chimeras CRISPR-Cas9

Interspecies Chimerism with Mammalian Pluripotent Stem Cells

Human stem cells engraft into both cattle and pig blastocysts

*Correspondence: belmonte@salk.edu
http://dx.doi.org/10.1016/j.cell.2016.12.036

Graphical Abstract

Highlights

  • Naive rat PSCs robustly contribute to live rat-mouse chimeras
  • A versatile CRISPR-Cas9 mediated interspecies blastocyst complementation system
  • Naive rodent PSCs show no chimeric contribution to postimplantation pig embryos
  • Chimerism is observed with some human iPSCs in postimplantation pig embryos

In Brief

Human pluripotent stem cells robustly engraft into both cattle and pig preimplantation blastocysts, but show limited chimeric contribution to postimplantation pig embryos.

SUMMARY

Interspecies blastocyst complementation enables organ-specific enrichment of xenogenic pluripotent stem cell (PSC) derivatives. Here, we establish a versatile blastocyst complementation platform based on CRISPR-Cas9-mediated zygote genome editing and show enrichment of rat PSC-derivatives in several tissues of gene-edited organogenesis-disabled mice. Besides gaining insights into species evolution, embryogenesis, and human disease, interspecies blastocyst complementation might allow human organ generation in animals whose organ size, anatomy, and physiology are closer to humans. To date, however, whether human PSCs (hPSCs) can contribute to chimera formation in non-rodent species remains unknown. We systematically evaluate the chimeric competency of several types of hPSCs using a more diversified clade of mammals, the ungulates.

We find that naïve hPSCs robustly engraft in both pig and cattle pre-implantation blastocysts but show limited contribution to post-implantation pig embryos. Instead, an intermediate hPSC type exhibits a higher degree of chimerism and is able to generate differentiated progenies in post-implantation pig embryos.

INTRODUCTION

Embryonic pluripotency has been captured in vitro at a spectrum of different states, ranging from the naive state, which reflects unbiased developmental potential, to the primed state, in which cells are poised for lineage differentiation (Weinberger et al., 2016; Wu and Izpisua Belmonte, 2016). When attempting to introduce cultured pluripotent stem cells (PSCs) into a developing embryo of the same species, recent studies demonstrated that matching developmental timing is critical for successful chimera formation. For example, naive mouse embryonic stem cells (mESCs) contribute to chimera formation when injected into a blastocyst, whereas primed mouse epiblast stem cells (mEpiSCs) efficiently engraft into mouse gastrula-stage embryos, but not vice versa (Huang et al., 2012; Wu et al., 2015).

Live rodent interspecies chimeras have also been generated using naive PSCs (Isotani et al., 2011; Kobayashi et al., 2010; Xiang et al., 2008). However, it remains unclear whether naive PSCs can be used to generate chimeras between more distantly related species.

The successful derivation of human PSCs (hPSCs), including ESCs from pre-implantation human embryos (Reubinoff et al., 2000; Thomson et al., 1998), as well as the generation of induced pluripotent stem cells (iPSCs) from somatic cells through cellular reprogramming (Takahashi et al., 2007; Park et al., 2008; Wernig et al., 2007; Yu et al., 2007; Aasen et al., 2008), has revolutionized the way we study human development and is heralding a new age of regenerative medicine. Several lines of evidence indicate that conventional hPSCs are in the primed pluripotent state, similar to mEpiSCs (Tesar et al., 2007; Wu et al., 2015). A number of recent studies have also reported the generation of putative naive hPSCs that molecularly resemble mESCs (Gafni et al., 2013; Takashima et al., 2014; Theunissen et al., 2014). These naive hPSCs have already provided practical and experimental advantages, including high single-cell cloning efficiency and facile genome editing (Gafni et al., 2013). Despite these advances, it remains unclear how the putative higher developmental potential of naive hPSCs can be used to better understand human embryogenesis and to develop regenerative therapies for treating patients.

Cell 168, 473–486, January 26, 2017 © 2017 Elsevier Inc.

.

Scientists pollinate flowers with insect-sized drones coated in sticky gel

An illustration shows a drone depositing pollen in a lily. Photo by Eijiro Miyako/Cell Press

By Brooks Hays | Feb. 9, 2017 at 4:55 PM (UPI) —

Scientists in Japan found a way to pollinate flowers without the help of insects.

Engineers at the National Institute of Advanced Industrial Science and Technology picked up pollen from one flower and deposited it in another using insect-sized drones coated with a special ionic gel.

The gel was originally developed in 2007 as an electrical conductor. It was deemed a failure by its creator, Eijiro Miyako, a chemist at the AIST Nanomaterial Research Institute, but it may have found a new life as a pollen-collecting agent.

Miyako found the gel in a storage closet while cleaning out his lab.

“This project is the result of serendipity,” said Miyako. “We were surprised that after 8 years, the ionic gel didn’t degrade and was still so viscous. Conventional gels are mainly made of water and can’t be used for a long time, so we decided to use this material for research.”

Miyako and fellow researcher Svetlana Chechetka bought a drone to test the gel with, but soon realized they could simply smear the gel on the craft’s underside. AIST colleagues Masayoshi Tange and Yue Yu helped Miyako and Chechetka affix horse hair to the drone.

The gel-coated hairs created more surface area with which to pick up pollen. A slight electrical charge running through the bristles also helped hold the pollen grains during flight.

Scientists used their creation to successfully pollinate pink-leaved Japanese lilies, Lilium japonicum. They described their feat in the journal Chem.

“The findings, which will have applications for agriculture and robotics, among others, could lead to the development of artificial pollinators and help counter the problems caused by declining honeybee populations,” Miyako said. “We believe that robotic pollinators could be trained to learn pollination paths using global positioning systems and artificial intelligence.”

Copyright © 2017 United Press International, Inc. All Rights Reserved.
Accessed at http://www.upi.com/Science_News/2017/02/09/Scientists-pollinate-flowers-with-insect-sized-drones-coated-in-sticky-gel/8641486674376/?utm_source=fp&utm_campaign=ls&utm_medium=4 on February 10, 2017.

.

Genome Editing Criteria Outlined in New Report

Wed, 02/15/2017 – 10:12am by Kenny Walter – Digital Reporter – @RandDMagazine

A new report highlights the need to move forward with genome editing, while also ensuring there is strict oversight.

Genome editing to treat or prevent diseases or disabilities could become a reality, despite some ethical concerns.

A new report has highlighted a path toward clinical trials for genome editing of the human germline (adding, removing or replacing DNA base pairs in gametes or early embryos), but only for serious conditions and under strict oversight.

The joint report from the National Academy of Sciences and the National Academy of Medicine outlines several criteria that should be met before allowing genome editing clinical trials to move forward.

According to the report, these editing tools could one day be used on human embryos, eggs and sperm to remove the genes that can cause inherited diseases.

The report recommends a set of overarching principles to be used by any nation in governing human genome editing research or applications, including:

  • providing a benefit and preventing harm to those affected
  • transparency
  • due care
  • responsible science
  • respect for persons
  • fairness
  • transnational cooperation.

While genome editing is not new, more powerful, more precise, and less costly tools such as CRISPR/Cas9 have paved the way for a bevy of new research opportunities and potential clinical applications, both heritable and non-heritable, to address a wide range of human health issues.

While the report warns of the ethical issues surrounding this type of science, it also states that genome editing deserves serious consideration.

Human genome editing is already widely used in basic research, but it is in the early stages of development and trials for clinical applications that involve non-heritable (somatic) cells.

This type of therapy affects only the patient and not any offspring.

However, there is significant public concern over the prospect of using the same techniques for “enhancement” of human traits and capacities, such as physical strength, or even for uses that are not currently possible, such as improving intelligence.

The report recommends that genome editing for enhancement should not currently be allowed and that public input and discussion should be solicited before allowing clinical trials for somatic genome editing for any purpose other than treating or preventing disease or disability.

“Human genome editing holds tremendous promise for understanding, treating, or preventing many devastating genetic diseases, and for improving treatment of many other illnesses,” Alta Charo, co-chair of the study committee and Sheldon B. Lubar Distinguished Chair and Warren P. Knowles Professor of Law and Bioethics, University of Wisconsin-Madison, said in a statement.

“However, genome editing to enhance traits or abilities beyond ordinary health raises concerns about whether the benefits can outweigh the risks, and about fairness if available only to some people,” she added.

Germline genome editing has become contentious because genetic changes would be inherited by the next generation and many view it as crossing an “ethically inviolable” line, according to the report.

Concerns range from spiritual objections to interfering with human reproduction, to speculation about effects on social attitudes toward people with disabilities, to possible risks to the health and safety of future children.

However, germline genome editing could provide parents who are carriers of genetic diseases with their best or most acceptable option for having genetically related children who are born free of these diseases.

“Genome editing research is very much an international endeavor, and all nations should ensure that any potential clinical applications reflect societal values and be subject to appropriate oversight and regulation,” committee co-chair Richard Hynes, Howard Hughes Medical Institute Investigator and Daniel K. Ludwig Professor for Cancer Research, Massachusetts Institute of Technology, said in a statement. “These overarching principles and the responsibilities that flow from them should be reflected in each nation’s scientific community and regulatory processes.”

Advantage Business Media © Copyright 2017 Advantage Business Media
Accessed at http://www.rdmag.com/article/2017/02/genome-editing-criteria-outlined-new-report?et_cid=5831317&et_rid=598083810&type=headline&et_cid=5831317&et_rid=598083810&linkid=content on February 15, 2017.

 

 
