| text (string) | id (string) | dump (string) | url (string) | file_path (string) | language (string) | language_score (float64) | token_count (int64) | score (float64) | int_score (int64) | tags (list) | matched_keywords (dict) | match_summary (dict) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
The Solar and Heliospheric Observatory (SOHO) spacecraft is expected to discover its 1,000th comet this summer.
The SOHO spacecraft is a joint effort between NASA and the European Space Agency. It has accounted for approximately one-half of all comet discoveries with computed orbits in the history of astronomy.
"Before SOHO was launched, only 16 sun grazing comets had been discovered by space observatories. Based on that experience, who could have predicted SOHO would discover more than 60 times that number, and in only nine years," said Dr. Chris St. Cyr. He is senior project scientist for NASA's Living With a Star program at the agency's Goddard Space Flight Center, Greenbelt, Md. "This is truly a remarkable achievement!"
About 85 percent of the comets SOHO has discovered belong to the Kreutz group of sun grazing comets, so named because their orbits take them very close to Earth's star. The Kreutz sun grazers pass within 500,000 miles of the star's visible surface. Mercury, the planet closest to the sun, is about 36 million miles from the solar surface.
SOHO has also been used to discover three other well-populated comet groups: the Meyer, with at least 55 members; Marsden, with at least 21 members; and the Kracht, with 24 members. These groups are named after the astronomers who suggested the comets are related, because they have similar orbits.
Many comet discoveries were made by amateurs using SOHO images on the Internet. SOHO comet hunters come from all over the world. The United States, United Kingdom, China, Japan, Taiwan, Russia, Ukraine, France, Germany, and Lithuania are among the many countries whose citizens have used SOHO to chase comets.
Almost all of SOHO's comets are discovered using images from its Large Angle and Spectrometric Coronagraph (LASCO) instrument. LASCO is used to observe the faint, multimillion-degree outer atmosphere of the sun, called the corona. A disk in the instrument is used to make an artificial eclipse, blocking direct light from the sun, so the much fainter corona can be seen. Sun grazing comets are discovered when they enter LASCO's field of view as they pass close by the star.
"Building coronagraphs like LASCO is still more art than science, because the light we are trying to detect is very faint," said Dr. Joe Gurman, U.S. project scientist for SOHO at Goddard. "Any imperfections in the optics or dust in the instrument will scatter the light, making the images too noisy to be useful. Discovering almost 1,000 comets since SOHO's launch on December 2, 1995 is a testament to the skill of the LASCO team."
SOHO successfully completed its primary mission in April 1998. It has enough fuel to remain on station and keep hunting comets for decades, provided LASCO continues to function.
For information about SOHO on the Internet, visit:
Explore further: Long-term warming, short-term variability: Why climate change is still an issue
|
<urn:uuid:78cbe1bd-1849-4138-b59a-5521e93122a3>
|
CC-MAIN-2013-20
|
http://phys.org/news4969.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.943417
| 663
| 4
| 4
|
[
"climate"
] |
{
"climate": [
"climate change"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
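The matched_keywords and match_summary cells above follow a consistent pattern throughout this dump: total always equals strong plus weak, and every row shown here has at least one strong match and the decision accepted_strong. The exact keyword weighting behind those counts is not recorded in the table, so the following Python sketch is only a hypothetical reconstruction; the strong-keyword set and the acceptance rule are assumptions, not the pipeline's actual configuration.

```python
# Hypothetical reconstruction of the match_summary cells in this dump.
# Which keywords count as "strong" is not recorded here, so the caller
# supplies that set; the acceptance rule below is likewise an assumption.

def summarize_matches(matched_keywords: dict, strong_keywords: set) -> dict:
    """Collapse per-category keyword lists into a match_summary-style dict."""
    all_matches = [kw for kws in matched_keywords.values() for kw in kws]
    strong = sum(1 for kw in all_matches if kw in strong_keywords)
    weak = len(all_matches) - strong
    decision = "accepted_strong" if strong >= 1 else "rejected"  # assumed rule
    return {"strong": strong, "weak": weak,
            "total": strong + weak, "decision": decision}

# Applied to the first row's matched_keywords cell, with a placeholder
# strong-keyword set chosen only to reproduce that row's summary:
row_keywords = {"climate": ["climate change"], "nature": []}
print(summarize_matches(row_keywords, strong_keywords={"climate change"}))
# -> {'strong': 1, 'weak': 0, 'total': 1, 'decision': 'accepted_strong'}
```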
- Yes, this is a good time to plant native grass seed in the ground. You may have to supplement with irrigation if the rains stop before the seeds have germinated and made good root growth.
- Which grasses should I plant? The wonderful thing about California is that we have so many different ecosystems; the challenging thing about California is that we have so many different ecosystems. It’s impossible for us to know definitively which particular bunchgrasses used to grow or may still grow at your particular site, but to make the best guesses possible, we recommend the following:
- Best-case scenario is to have bunchgrasses already on the site that you can augment through proper mowing or grazing techniques.
- Next best is to have a nearby site with native bunchgrasses and similar elevation, aspect, and soils that you can use as a model.
- After that, go to sources such as our pamphlet Distribution of Native Grasses of California, by Alan Beetle, $7.50.
- Also reference local floras of your area, available through the California Native Plant Society.
Container growing: We grow seedlings in pots throughout the season, but ideal planning for growing your own plants in pots is to sow six months before you want to put them in the ground. Though restorationists frequently use plugs and liners (long narrow containers), and they may be required for large areas, we prefer growing them the horticultural way: first in flats, then transplanting into 4" pots, and when they are sturdy little plants, into the ground. Our thinking is that since they are not tap-rooted but fibrous-rooted (one of their main advantages as far as deep erosion control is concerned) square 4" pots suit them, and so far our experiences have borne this out.
In future newsletters, we will be reporting on the experiences and opinions of Marin ranchers Peggy Rathmann and John Wick, who are working with UC Berkeley researcher Wendy Silver on a study of carbon sequestration and bunchgrasses. So far, it’s very promising. But more on that later. For now, I’ll end with a quote from Peggy, who grows, eats, nurtures, lives, and sleeps bunchgrasses, for the health of their land and the benefit of their cows.
“It takes a while. But it’s so worth it.”
|
<urn:uuid:c183066d-32a9-42eb-91b6-191fdb0980c2>
|
CC-MAIN-2013-20
|
http://judithlarnerlowry.blogspot.com/2009/02/simplifying-california-native.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.956731
| 495
| 2.515625
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon sequestration"
],
"nature": [
"ecosystems"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
by Piter Kehoma Boll
Let’s expand the universe of Friday Fellow by presenting a plant for the first time! And what could be a better choice to start than the famous Grandidier’s Baobab? Belonging to the species Adansonia grandidieri, this tree is one of the trademarks of Madagascar, being the biggest species of this genus found on the island.
Reaching up to 30 m in height, with a massive trunk branched only at the very top, it has a unique look and is found only in southwestern Madagascar. However, despite being so attractive and famous, it is classified as an endangered species on the IUCN Red List, with a declining population threatened by agricultural expansion.
This tree is also heavily exploited: its vitamin C-rich fruits can be eaten fresh, and its seeds are used to extract oil. The bark can be used to make ropes, and many trees bear scars from the removal of parts of their bark.
With its fibrous trunk, the baobab appears to cope with drought by storing water inside it. The species currently has no seed dispersers, which may be due to the extinction of its original disperser through human activities.
Originally occurring close to temporary water bodies in dry deciduous forest, many large trees today stand in terrain that is dry year-round. This is probably due to human impact that changed the local ecosystem, allowing it to become drier than it was. These areas have little or no ability to regenerate and will probably never return to what they were; once the old trees die, there will be no more baobabs there.
- – -
Baum, D. A. (1995). A Systematic Revision of Adansonia (Bombacaceae). Annals of the Missouri Botanical Garden, 82, 440-470. DOI: 10.2307/2399893
Wikipedia. Adansonia grandidieri. Available online at <http://en.wikipedia.org/wiki/Adansonia_grandidieri>. Access on October 02, 2012.
World Conservation Monitoring Centre 1998. Adansonia grandidieri. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.1. <www.iucnredlist.org>. Access on October 02, 2012.
|
<urn:uuid:10459212-d96b-47fa-9ead-4447c5ba731f>
|
CC-MAIN-2013-20
|
http://earthlingnature.wordpress.com/2012/10/05/friday-fellow-grandidiers-baobab/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.923648
| 488
| 3.703125
| 4
|
[
"climate",
"nature"
] |
{
"climate": [
"drought"
],
"nature": [
"conservation",
"ecosystem",
"endangered species"
]
}
|
{
"strong": 4,
"weak": 0,
"total": 4,
"decision": "accepted_strong"
}
|
Ki Tisa (Mitzvot)
For more teachings on this portion, see the archives to this blog, below at March 2006.
This week’s parasha is best known for the dramatic and richly meaningful story of the Golden Calf and the Divine anger, of Moses’ pleading on behalf of Israel, and the eventual reconciliation in the mysterious meeting of Moses with God in the Cleft of the Rock—subjects about which I’ve written at length, from various aspects, in previous years. Yet the first third of the reading (Exod 30:11-31:17) is concerned with various practical mitzvot, mostly focused on the ritual worship conducted in the Temple, which tend to be skimmed over in light of the intense interest of the Calf story. As this year we are concerned specifically with the mitzvot in each parasha, I shall focus on this section.
These include: the giving by each Israelite [male] of a half-shekel to the Temple; the making of the laver, from which the priests wash their hands and feet before engaging in Divine service; the compounding of the incense and of the anointing oil; and the Shabbat. I shall focus here upon the washing of the hands.
Hand-washing is a familiar Jewish ritual: it is, in fact, the first act performed by pious Jews upon awakening in the morning (some people even keep a cup of water next to their beds, so that they may wash their hands before taking even a single step); one performs a ritual washing of the hands before eating bread; before each of the daily prayers; etc. The section here dealing with the laver in the Temple (Exod 30:17-21) is also one of the four portions from the Torah recited by many each morning, as part of the section of the liturgy known as korbanot, chapters of Written and Oral Torah reminiscent of the ancient sacrificial system, that precede Pesukei de-Zimra.
Sefer ha-Hinukh, at §106, explains the washing of hands as an offshoot of the honor due to the Temple and its service—one of many laws intended to honor, magnify, and glorify the Temple. Even if the priest was pure and clean, he must wash (literally, “sanctify”) his hands before engaging in avodah. This simple gesture of purification served as a kind of separation between the Divine service and everyday life. It added a feeling of solemnity, of seriousness, a sense that one was engaged in something higher, in some way separate from the mundane activities of regular life. (One hand-washing by kohanim, in the morning, was sufficient, unless they left the Temple grounds or otherwise lost the continuity of their sacred activity.) Our own netilat yadaim, whether before prayer or breaking bread, may be seen as a kind of halakhic carryover from the Temple service, albeit on the level of Rabbinic injunction.
What is the symbolism of purifying one’s hands? Water, as a flowing element, as a solvent that washes away many of the things with which it comes in contact, is at once a natural symbol of both purity, and of the renewal of life. Mayim Hayyim—living waters—is an age old association. Torah is compared to water; water, constantly flowing, is constantly returning to its source. At the End of Days, “the land will be filled with knowledge of the Lord, like waters going down to the sea.” A small part of this is hinted in this simple, everyday gesture.
“See that this nation is Your people”
But I cannot pass over Ki Tisa without some comment on the incident of the Golden Calf and its ramifications. This week, reading through the words of the parasha in preparation for a shiur (what Ruth Calderon, founder of Alma, a secularist-oriented center for the study of Judaism in Tel Aviv, called “barefoot reading”—that is, naïve, without preconceptions), I discovered something utterly simple that I had never noticed before in quite the same way.
At the beginning of the Calf incident, God tells Moses, who has been up on the mountain with Him, “Go down, for your people have spoiled” (32:7). A few verses later, when God asks leave of Moses (!) to destroy them, Moses begs for mercy on behalf of the people with the words “Why should Your anger burn so fiercely against Your people…” (v. 11). That is, God calls them Moses’ people, while Moses refers to them as God’s people. Subsequent to this exchange, each of them refers to them repeatedly in the third person, as “the people” or “this people” (העם; העם הזה). Neither of them refers to them, as God did in the initial revelation to Moses at the burning bush (Exodus 3:7 and passim), as “my people,” or with the dignified title, “the children of Israel”—as if both felt a certain alienation, a certain distance, from this tumultuous, capricious bunch. Only towards the end, after God agrees not to destroy them, but still states “I will not go up with them,” promising instead to send an angel, does Moses say “See that this nation is Your people” (וראה כי עמך הגוי הזה; 33:13).
What does all this signify? Reading the peshat carefully, there is one inevitable conclusion: that God wished to nullify His covenant with the people Israel. It is in this that there lies the true gravity, and uniqueness, of the Golden Calf incident. We are not speaking here, as we read elsewhere in the Bible—for example, in the two great Imprecations (tokhahot) in Lev 26 and Deut 28, or in the words of the prophets during the First Temple—merely of threats of punishment, however harsh, such as drought, famine, pestilence, enemy attacks, or even exile and slavery. There, the implicit message is that, after a period of punishment, a kind of moral purgation through suffering, things will be restored as they were. Here, the very covenant itself, the very existence of an intimate connection with God, hangs in the balance. God tells Moses, “I shall make of you a people,” i.e., instead of them.
This, it seems to me, is the point of the second phase of this story. Moses breaks the tablets; he and his fellow Levites go through the camp killing all those most directly implicated in worshipping the Calf; God recants and agrees not to destroy the people. However, “My angel will go before them” but “I will not go up in your midst” (33:2, 3). This should have been of some comfort; yet this tiding is called “this bad thing,” the people mourn, and remove the ornaments they had been wearing until then. Evidently, they understood the absence of God’s presence or “face” as a grave step; His being with them was everything. That is the true importance of the Sanctuary in the desert and the Tent of Meeting, where Moses speaks with God in the pillar of cloud (33:10). God was present with them there in a tangible way, in a certain way continuing the epiphany at Sinai. All that was threatened by this new declaration.
Moses’ second round of appeals to God, in Exod 33:12-23, focuses on bringing God, as it were, to a full reconciliation with the people. This is the significance of the Thirteen Qualities of Mercy, of what I have called the Covenant in the Cleft of the Rock, the “faith of Yom Kippur” as opposed to that of Shavuot (see HY I: Ki Tisa; and note Prof. Jacob Milgrom’s observation that this chapter stands in the exact center, in a literary sense, of the unit known as the Hexateuch—Torah plus the Book of Joshua).
But I would add two important points. One, that this is the first place in the Torah where we read about sin followed by reconciliation. After Adam and Eve ate of the fruit of the Garden, they were punished without hope of reprieve; indeed, their “punishment” reads very much like a description of some basic aspects of the human condition itself. Cain, after murdering Abel, was banished, made to wander the face of the earth. The sin of the brothers in selling Joseph, and their own sense of guilt, is a central factor in their family dynamic from then on, but there is nary a word of God’s response or intervention. It would appear that God’s initial expectation in the covenant at Sinai was one of total loyalty and fidelity. The act of idolatry was an unforgivable breach of the covenant—much as adultery is generally perceived as a fundamental violation of the marital bond.
Moses, in persuading God to recant of His jealousy and anger, to give the faithless people another chance, is thus introducing a new concept: of a covenant that includes the possibility of even the most serious transgressions being forgiven; of the knowledge that human beings are fallible, and that teshuvah and forgiveness are essential components of any economy of men living before a demanding God.
The second, truly astonishing point is the role played by Moses in all this. Moshe Rabbenu, “the man of God,” is not only the great teacher of Israel, the channel through which they learn the Divine Torah, but also, as it were, one who teaches God Himself. It is God who “reveals His Qualities of Mercy” at the Cleft of the Rock; but without Moses’ cajoling, arguing, and persuading (and note the numerous midrashim around this theme), “were it not for my servant Moses who stood in the breach,” all this would not have happened. It was Moses who elicited this response and who, so to speak, pushed God Himself to this new stage in His relation with Israel—to give up His expectations of perfection from His covenanted people, and to understand that living within a covenant means, not rigid adherence to a set of laws, but a living relationship with real people, taking the bad with the good. (Again, the parallel to human relationships is obvious.)
|
<urn:uuid:c4c19472-691a-44c6-a55b-21fbb183475b>
|
CC-MAIN-2013-20
|
http://hitzeiyehonatan.blogspot.com/2008_02_01_archive.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.966594
| 2,269
| 2.671875
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
“A remote Indian village is responding to global warming-induced water shortages by creating large masses of ice, or “artificial glaciers,” to get through the dry spring months. (See a map of the region.)
Located on the western edge of the Tibetan plateau, the village of Skara in the Ladakh region of India is not a common tourist destination.
“It’s beautiful, but really remote and difficult to get to,” said Amy Higgins, a graduate student at the Yale School of Forestry & Environmental Studies who worked on the artificial glacier project.
“A lot of people, when I met them in Delhi and I said I was going to Ladakh, they looked at me like I was going to the moon,” said Higgins, who is also a National Geographic grantee.
People in Skara and surrounding villages survive by growing crops such as barley for their own consumption and for sale in neighboring towns. In the past, water for the crops came from meltwater originating in glaciers high in the Himalaya.”
Read more: National Geographic
|
<urn:uuid:5050ac83-4770-4e9c-9b44-38ba46d2466e>
|
CC-MAIN-2013-20
|
http://peakwater.org/2012/02/artificial-glaciers-water-crops-in-indian-highlands/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.973301
| 226
| 3.78125
| 4
|
[
"climate"
] |
{
"climate": [
"global warming"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Since 1993, RAN’s Protect-an-Acre program (PAA) has distributed more than one million dollars in grants to more than 150 frontline communities, Indigenous-led organizations, and allies, helping their efforts to secure protection for millions of acres of traditional territory in forests around the world.
Rainforest Action Network believes that Indigenous peoples are the best stewards of the world’s rainforests and that frontline communities organizing against the extraction and burning of dirty fossil fuels deserve the strongest support we can offer. RAN established the Protect-an-Acre program to protect the world’s forests and the rights of their inhabitants by providing financial aid to traditionally under-funded organizations and communities in forest regions.
Indigenous and frontline communities suffer disproportionate impacts to their health, livelihood and culture from extractive industry mega-projects and the effects of global climate change. That’s why Protect-an-Acre provides small grants to community-based organizations, Indigenous federations and small NGOs that are fighting to protect millions of acres of forest and keep millions of tons of CO2 in the ground.
Our grants support organizations and communities that are working to regain control of and sustainably manage their traditional territories through land title initiatives, community education, development of sustainable economic alternatives, and grassroots resistance to destructive industrial activities.
PAA is an alternative to “buy-an-acre” programs that seek to provide rainforest protection by buying tracts of land, but which often fail to address the needs or rights of local Indigenous peoples. Uninhabited forest areas often go unprotected, even if purchased through a buy-an-acre program. It is not uncommon for loggers, oil and gas companies, cattle ranchers, and miners to illegally extract resources from so-called “protected” areas.
Traditional forest communities are often the best stewards of the land because their way of life depends upon the health of their environment. A number of recent studies add to the growing body of evidence that Indigenous peoples are better protectors of their forests than governments or industry.
Based on the success of Protect-an-Acre, RAN launched The Climate Action Fund (CAF) in 2009 as a way to direct further resources and support to frontline communities and Indigenous peoples challenging the fossil fuel industry.
Additionally, RAN has been a Global Advisor to Global Greengrants Fund (GGF) since 1995, identifying recipients for small grants to mobilize resources for global environmental sustainability and social justice, using the same priorities and criteria as we use for PAA and CAF.
Through these three programs each year we support grassroots projects that result in at least:
|
<urn:uuid:995ec683-d967-4f36-82d9-547c9ea3d646>
|
CC-MAIN-2013-20
|
http://ran.org/protect-an-acre
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.938919
| 540
| 2.671875
| 3
|
[
"climate"
] |
{
"climate": [
"climate change",
"co2"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Karuk Tribe: Learning from the First Californians for the Next California
Editor's Note: This is part of series, Facing the Climate Gap, which looks at grassroots efforts in California low-income communities of color to address climate change and promote climate justice.
This article was published in collaboration with GlobalPossibilities.org.
The three sovereign entities in the United States are the federal government, the states and indigenous tribes, but according to Bill Tripp, a member of the Karuk Tribe in Northern California, many people are unaware of both the sovereign nature of tribes and the wisdom they possess when it comes to issues of climate change and natural resource management.
“A lot of people don’t realize that tribes even exist in California, but we are stakeholders too, with the rights of indigenous peoples,” says Tripp.
Tripp is an Eco-Cultural Restoration specialist at the Karuk Tribe Department of Natural Resources. In 2010, the tribe drafted an Eco-Cultural Resources Management Plan, which aims to manage and restore “balanced ecological processes utilizing Traditional Ecological Knowledge supported by Western Science.” The plan addresses environmental issues that affect the health and culture of the Karuk tribe and outlines ways in which tribal practices can contribute to mitigating the effects of climate change.
Before climate change became a hot topic in the media, many indigenous and agrarian communities, because of their dependence upon and close relationship to the land, began to notice troubling shifts in the environment such as intense drought, frequent wildfires, scarcer fish flows and erratic rainfall.
There are over 100 government-recognized tribes in California, which represent more than 700,000 people. The Karuk is the second-largest Native American tribe in California and has over 3,200 members. Their tribal lands include over 1.48 million acres within and around the Klamath and Six Rivers National Forests in Northwest California.
Tribes like the Karuk are among the hardest hit by the effects of climate change, despite their traditionally low-carbon lifestyles. The Karuk, in particular have experienced dramatic environmental changes in their forestlands and fisheries as a result of both climate change and misguided Federal and regional policies.
The Karuk have long depended upon the forest to support their livelihood, cultural practices and nourishment. While wildfires have always been a natural aspect of the landscape, recent studies have shown that fires in northwestern California forests have risen dramatically in frequency and size due to climate-related and human influences. According to the California Natural Resources Agency, fires in California are expected to increase 100 percent due to increased temperatures and longer dry seasons associated with climate change.
Some of the other most damaging human influences to the Karuk include logging activities, which have depleted old growth forests, and fire suppression policies created by the U.S. Forest Service in the 1930s that have limited cultural burning practices. Tripp says these policies have been detrimental to tribal traditions and the forest environment.
“It has been huge to just try to adapt to the past 100 years of policies that have led us to where we are today. We have already been forced to modify our traditional practices to fit the contemporary political context,” says Tripp.
Further, the construction of dams along the Klamath River by PacifiCorp (a utility company) has impeded access to salmon and other fish that are central to the Karuk diet. Fishing regulations have also had a negative impact.
Though the Karuk’s dependence on the land has left them vulnerable to the projected effects of climate change, it has also given them and other indigenous groups incredible knowledge to impart to western climate science. Historically, though, tribes have been largely left out of policy processes and decisions. The Karuk decided to challenge this historical pattern of marginalization by formulating their own Eco-Cultural Resources Management Plan.
The Plan provides over twenty “Cultural Environmental Management Practices” that are based on traditional ecological knowledge and the “World Renewal” philosophy, which emphasizes the interconnectedness of humans and the environment. Tripp says the Plan was created in the hopes that knowledge passed down from previous generations will help strengthen Karuk culture and teach the broader community to live in a more ecologically sound way.
“It is designed to be a living document…We are building a process of comparative learning, based on the principles and practices of traditional ecological knowledge to revitalize culturally relevant information as passed through oral transmission and intergenerational observations,” says Tripp.
One of the highlights of the plan is to re-establish traditional burning practices in order to decrease fuel loads and the risk for more severe wildfires when they do happen. Traditional burning was used by the Karuk to burn off specific types of vegetation and promote continued diversity in the landscape. Tripp notes that these practices are an example of how humans can play a positive role in maintaining a sound ecological cycle in the forests.
“The practice of utilizing fire to manage resources in a traditional way not only improves the use quality of forest resources, it also builds and maintains resiliency in the ecological process of entire landscapes,” explains Tripp.
Another crucial aspect of the Plan is the life cycle of fish, like salmon, that are central to Karuk food traditions and ecosystem health. Traditionally, the Karuk regulated fishing schedules to allow the first salmon to pass, ensuring that those most likely to survive made it to prime spawning grounds. There were also designated fishing periods and locations to promote successful reproduction. Tripp says regulatory agencies have established practices that are harmful to this cycle.
“Today, regulatory agencies permit the harvest of fish that would otherwise be protected under traditional harvest management principles and close the harvest season when the fish least likely to reach the very upper river reaches are passing through,” says Tripp.
The Karuk tribe is now working closely with researchers from universities such as University of California, Berkeley and the University of California, Davis as well as public agencies so that this traditional knowledge can one day be accepted by mainstream and academic circles dealing with climate change mitigation and adaptation practices.
According to the Plan, these land management practices are more cost-effective than those currently practiced by public agencies; if implemented, they will greatly reduce taxpayer cost burdens and create employment. The Karuk hope to create a workforce development program that will hire tribal members to implement the plan’s goals, such as multi-site cultural burning practices.
The Plan has a long way to full realization and Federal recognition. According to the National Indian Forest Resources Management Act and the National Environmental Protection Act, it must go through a formal review process. Besides that, the Karuk Tribe is still solidifying funding to pursue its goals.
The work of California’s environmental stewards will always be in demand, and the Karuk are taking the lead in showing how community wisdom can be used to generate an integrated approach to climate change. Such integrated and community engaged policy approaches are rare throughout the state but are emerging in other areas. In Oakland, for example, the Oakland Climate Action Coalition engaged community members and a diverse group of social justice, labor, environmental, and business organizations to develop an Energy and Climate Action Plan that outlines specific ways for the City to reduce greenhouse gas emissions and create a sustainable economy.
In the end, Tripp hopes the Karuk Plan will not only inspire others and address the global environmental plight, but also help to maintain the very core of his people. In his words: “Being adaptable to climate change is part of that, but primarily it is about enabling us to maintain our identity and the people in this place in perpetuity.”
Dr. Manuel Pastor is Professor of Sociology and American Studies & Ethnicity at the University of Southern California where he also directs the Program for Environmental and Regional Equity and co-directs USC’s Center for the Study of Immigrant Integration. His most recent books include Just Growth: Inclusion and Prosperity in America’s Metropolitan Regions (Routledge 2012; co-authored with Chris Benner) Uncommon Common Ground: Race and America’s Future (W.W. Norton 2010; co-authored with Angela Glover Blackwell and Stewart Kwoh), and This Could Be the Start of Something Big: How Social Movements for Regional Equity are Transforming Metropolitan America (Cornell 2009; co-authored with Chris Benner and Martha Matsuoka).
|
<urn:uuid:003baaf4-69c7-4ee7-b37f-468bf9b55842>
|
CC-MAIN-2013-20
|
http://www.resilience.org/stories/2012-10-19/karuk-tribe-learning-from-the-first-californians-for-the-next-california
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.945849
| 1,714
| 3.296875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"adaptation",
"climate change",
"climate justice",
"drought",
"greenhouse gas"
],
"nature": [
"ecological",
"ecosystem",
"ecosystem health",
"restoration"
]
}
|
{
"strong": 6,
"weak": 3,
"total": 9,
"decision": "accepted_strong"
}
|
What Is Air Pollution?
Air pollution on a large scale has existed since the start of the 20th century, from the coal-burning industries of the early century to the fossil-fuel technology of the new century. Air pollution is a major problem for highly developed nations, whose large industrial bases and highly developed infrastructures generate much of it.

Every year, billions of tonnes of pollutants are released into the atmosphere; the sources range from power plants burning fossil fuels to the effects of sunlight on certain natural materials. But the air pollutants released from natural materials pose very little health threat; only the natural radioactive gas radon poses any danger to health. Most of the air pollutants released into the atmosphere are therefore the result of human activity.

In the United Kingdom, traffic is the major cause of air pollution in British cities. Eighty-six percent of families own one or two vehicles, and because of the high population density of cities and towns, the number of people exposed to air pollutants is great. This has led to an increase in chronic disease in recent years as car ownership in the UK has nearly trebled. These diseases include asthma and respiratory complaints, affecting the whole population, from children to elderly people, who are most at risk. Those suffering from asthma will notice the effects most if they live in inner-city or industrial areas, or even near major roads. Asthma is already the fourth biggest killer in the UK, after heart disease and cancer, and it currently affects more than three point four million people.

In the past, severe pollution in London in 1952, combined with low winds and high-pressure air, took more than four thousand lives, and another seven hundred died in 1962, in what were called the ‘Dark Years’ because of the dense, dark polluted air.

Air pollution is also causing devastation for the environment. Many of the causes are man-made gases such as sulphur dioxide from electric plants burning fossil fuels. In the UK, industries and utilities that use tall smokestacks as a means of removing air pollutants only boost them higher into the atmosphere, reducing the concentration only at their own site. These pollutants are often transported over the North Sea and produce adverse effects in western Scandinavia, where sulphur dioxide and nitrogen oxide from the UK and central Europe generate acid rain, especially in Norway and Sweden. The pH level, or relative acidity, of many Scandinavian freshwater lakes has been altered dramatically by acid rain, destroying entire fish populations. In the UK, acid rain formed by sulphur dioxide emissions has led to acidic erosion of limestone in north-western Scotland and of marble in northern England.

In 1998, the London Metropolitan Police launched the ‘Emissions Controlled Reduction’ scheme, whereby traffic police would monitor the amount of pollutants released into the air by vehicle exhausts. The plan was for traffic police to stop vehicles at random on roads leading into the City of London; the officer would then measure the amount of air pollutants being released, using a CO2 reader fitted to the vehicle's exhaust. If the exhaust exceeded the legal amount (based on micrograms of pollutants), the driver would be fined around twenty-five pounds. The scheme proved unpopular with drivers, especially those driving to work, and did little to improve the city’s air quality.

In Edinburgh, the main cause of bad air quality was the vast number of vehicles passing through the city centre from west to east. In 1990, the Edinburgh council built the city bypass at a cost of nearly seventy-five million pounds. The bypass rings the outskirts of the city, and its main aim was to limit the number of vehicles going through the city centre by diverting them onto the bypass so they could reach their destinations without crossing the centre. This relieved much of the congestion within the city but did very little to improve the city’s overall air quality.

To further decrease the number of vehicles on the roads, the government promoted public transport. Over two hundred million pounds was devoted to developing the country's public transport network, much of which went to additional bus lanes in the city of London, which increased the pace of bus services. Gas- and electric-powered buses were introduced in Birmingham to decrease emissions of air pollutants around the city centre. Because children and the elderly are most at risk of chronic diseases such as asthma, major diversion roads were built to route vehicles away from residential areas, schools, and institutions for the elderly. In some councils, trees were planted along the sides of roads to reduce carbon monoxide levels. Other ways of improving air quality included restrictions on the amounts of air pollutants that industries may release into the atmosphere; tough regulations were put in place whereby, if air quality dropped below a certain level around an industrial area, a heavy penalty would be levied.
© Copyright 2000, Andrew Wan.
|
<urn:uuid:ea6c54fe-1f6e-4a4c-bcb5-4f4c9e0fb6de>
|
CC-MAIN-2013-20
|
http://everything2.com/user/KS/writeups/air+pollution
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.948933
| 1,097
| 3.25
| 3
|
[
"climate"
] |
{
"climate": [
"co2"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
The Economics of Ecosystems and Biodiversity: Ecological and Economic Foundations
Human well-being relies critically on ecosystem services provided by nature. Examples include water and air quality regulation, nutrient cycling and decomposition, plant pollination and flood control, all of which are dependent on biodiversity. They are predominantly public goods with limited or no markets and do not command any price in the conventional economic system, so their loss is often not detected and continues unaddressed and unabated. This in turn not only impacts human well-being, but also seriously undermines the sustainability of the economic system.
It is against this background that TEEB: The Economics of Ecosystems and Biodiversity project was set up in 2007 and led by the United Nations Environment Programme to provide a comprehensive global assessment of economic aspects of these issues. The Economics of Ecosystems and Biodiversity, written by a team of international experts, represents the scientific state of the art, providing a comprehensive assessment of the fundamental ecological and economic principles of measuring and valuing ecosystem services and biodiversity, and showing how these can be mainstreamed into public policies. The Economics of Ecosystems and Biodiversity and subsequent TEEB outputs will provide the authoritative knowledge and guidance to drive forward the biodiversity conservation agenda for the next decade.
1. Integrating the Ecological and Economic Dimensions in Biodiversity and Ecosystem Service Valuation
2. Biodiversity, Ecosystems and Ecosystem Services
3. Measuring Biophysical Quantities and the Use of Indicators
4. The Socio-cultural Context of Ecosystem and Biodiversity Valuation
5. The Economics of Valuing Ecosystem Services and Biodiversity
6. Discounting, Ethics, and Options for Maintaining Biodiversity and Ecosystem Integrity
7. Lessons Learned and Linkages with National Policies
Appendix 1: How the TEEB Framework Can be Applied: The Amazon Case
Appendix 2: Matrix Tables for Wetland and Forest Ecosystems
Appendix 3: Estimates of Monetary Values of Ecosystem Services
"A landmark study on one of the most pressing problems facing society, balancing economic growth and ecological protection to achieve a sustainable future."
- Simon Levin, Moffett Professor of Biology, Department of Ecology and Evolution Behaviour, Princeton University, USA
"TEEB brings a rigorous economic focus to bear on the problems of ecosystem degradation and biodiversity loss, and on their impacts on human welfare. TEEB is a very timely and useful study not only of the economic and social dimensions of the problem, but also of a set of practical solutions which deserve the attention of policy-makers around the world."
- Nicholas Stern, I.G. Patel Professor of Economics and Government at the London School of Economics and Chairman of the Grantham Research Institute on Climate Change and the Environment
"The [TEEB] project should show us all how expensive the global destruction of the natural world has become and – it is hoped – persuade us to slow down.' The Guardian 'Biodiversity is the living fabric of this planet – the quantum and the variability of all its ecosystems, species, and genes. And yet, modern economies remain largely blind to the huge value of the abundance and diversity of this web of life, and the crucial and valuable roles it plays in human health, nutrition, habitation and indeed in the health and functioning of our economies. Humanity has instead fabricated the illusion that somehow we can get by without biodiversity, or that it is somehow peripheral to our contemporary world. The truth is we need it more than ever on a planet of six billion heading to over nine billion people by 2050. This volume of 'TEEB' explores the challenges involved in addressing the economic invisibility of biodiversity, and organises the science and economics in a way decision makers would find it hard to ignore."
- Achim Steiner, Executive Director, United Nations Environment Programme
This volume is an output of TEEB: The Economics of Ecosystems and Biodiversity study and has been edited by Pushpam Kumar, Reader in Environmental Economics, University of Liverpool, UK. TEEB is hosted by the United Nations Environment Programme (UNEP) and supported by the European Commission, the German Federal Ministry for the Environment (BMU) and the UK Department for Environment, Food and Rural Affairs (DEFRA), recently joined by Norway's Ministry for Foreign Affairs, The Netherlands' Ministry of Housing (VROM), the UK Department for International Development (DFID) and also the Swedish International Development Cooperation Agency (SIDA). The study leader is Pavan Sukhdev, who is also Special Adviser – Green Economy Initiative, UNEP.
|
<urn:uuid:906f7240-4b78-478d-9b89-4b845237d4f3>
|
CC-MAIN-2013-20
|
http://www.nhbs.com/the_economics_of_ecosystems_and_biodiversity_tefno_176729.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.898484
| 966
| 3.21875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change"
],
"nature": [
"biodiversity",
"biodiversity loss",
"conservation",
"ecological",
"ecosystem",
"ecosystem integrity",
"ecosystem services",
"ecosystems",
"wetland"
]
}
|
{
"strong": 8,
"weak": 2,
"total": 10,
"decision": "accepted_strong"
}
|
DENVER – Put on your poodle skirts and tune in Elvis on the transistor radio, because it’s starting to look a lot like the 1950s.
Unfortunately, this won’t be the nostalgic ’50s of big cars and pop music.
The 1950s that could be on the way to Colorado is the decade of drought.
So says Brian Bledsoe, a Colorado Springs meteorologist who studies the history of ocean currents and uses what he learns to make long-term weather forecasts.
“I think we’re reliving the ’50s, bottom line,” Bledsoe said Friday morning at the annual meeting of the Colorado Water Congress.
Bledsoe studies the famous El Niño and La Niña ocean currents. But he also looks at other, less well-known cycles, including long-term temperature cycles in the oceans.
In the 1950s, water in the Pacific Ocean was colder than normal, but it was warmer than usual in the Atlantic. That combination caused a drought in Colorado that was just as bad as the Dust Bowl of the 1930s.
The ocean currents slipped back into their 1950s pattern in the last five years, Bledsoe said. The cycles can last a decade or more, meaning bad news for farmers, ranchers, skiers and forest residents.
“Drought feeds on drought. The longer it goes, the harder it is to break,” Bledsoe said.
The outlook is worst for Eastern Colorado, where Bledsoe grew up and his parents still own a ranch. They recently had to sell half their herd when their pasture couldn’t provide enough feed.
“They’ve spent the last 15 years grooming that herd for organic beef stock,” he said.
Bledsoe looks for monsoon rains to return to the Four Corners and Western Slope in July. But there’s still a danger in the mountains in the summer.
“Initially, dry lightning could be a concern, so obviously, the fire season is looking not so great right now,” he said.
Weather data showed that the past year’s conditions were extreme.
Nolan Doesken, Colorado’s state climatologist, said the summer of 2012 was the hottest on record in Colorado. And it was the fifth-driest winter since record-keeping began more than 100 years ago.
Despite recent storms in the San Juan Mountains, this winter hasn’t been much better.
“We’ve had a wimpy winter so far,” Doesken said. “The past week has been a good week for Colorado precipitation.”
However, the next week’s forecast shows dryness returning to much of the state.
Reservoir levels are higher than they were in 2002 – the driest year since Coloradans started keeping track of moisture – but the state is entering 2013 with reservoirs that were depleted last year.
“You don’t want to start a year at this level if you’re about to head into another drought,” Doesken said.
It was hard to find good news in Friday morning’s presentations, but Bledsoe is happy that technology helps forecasters understand the weather better than they did during past droughts. That allows people to plan for what’s on the way.
“I’m a glass-half-full kind of guy,” he said.
|
<urn:uuid:6b5ff0a8-5351-4289-bb86-d7195a7837dc>
|
CC-MAIN-2013-20
|
http://durangoherald.com/article/20130201/NEWS01/130209956/0/20120510/Drought-is-making-itself-at-home
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.964422
| 739
| 2.640625
| 3
|
[
"climate"
] |
{
"climate": [
"drought",
"el niño",
"monsoon"
],
"nature": []
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
The “presidi” translates as “garrisons” (from the French word, “to equip”), as protectors of traditional food production practices
Monday, March 23, 2009
This past year, I have had rewarding opportunities to observe traditional food cultures in varied regions of the world. These are:
Athabascan Indian in the interior of Alaska (the traditional Tanana Chiefs Conference tribal lands) in July, 2008 (for more, read below);
Swahili coastal tribes in the area of Munje village (population about 300), near Msambweni, close to the Tanzania border, in December 2008-January 2009 (for more, read below); and the Laikipia region of Kenya (January 2009), a German canton of Switzerland (March 2009), and the Piemonte-Toscana region of northern/central Italy (images only, February-March 2009).
In Fort Yukon, Alaska, salmon is a mainstay of the diet. Yet, among the Athabascan Indians, threats to subsistence foods and stresses on household economics abound. In particular: high prices for external energy sources (as of July 2008, almost $8 for a gallon of gasoline and $6.50 for a gallon of diesel, which is essential for home heating), as well as low Chinook salmon runs and low moose numbers.
Additional resource management issues pose threats to sustaining village life – for example, stream bank erosion along the Yukon River, as well as uneven management in the Yukon Flats National Wildlife Refuge. People are worried about ever-rising prices for fuels and store-bought staples, and fewer and fewer sources of wage income. The result? Villagers are moving out from outlying areas into “hub” communities like Fort Yukon -- or another example, Bethel in Southwest Alaska – even when offered additional subsidies, such as for home heating. But, in reality, “hubs” often offer neither much employment nor relief from high prices.
In Munje village in Kenya, the Digo, a Bantu-speaking, mostly Islamic tribe in the southern coastal area of Kenya, enjoy the possibilities of a wide variety of fruits, vegetables, and fish/oils.
Breakfast in the village typically consists of mandazi (a fried bread similar to a doughnut), and tea with sugar. Lunch and dinner is typically ugali and samaki (fish), maybe with some dried cassava or chickpeas.
On individual shambas (small farms), tomatoes, cassava, maize, cowpeas, bananas, mangos, and coconut are typically grown. Ugali is consumed every day, as are cassava, beans, oil, fish -- and rice, coconut, and chicken, depending on availability.
Even with their own crops, villagers today want very much to enter the market economy and will sell products from their shambas to buy staples and the flour needed to make mandazis, which they in turn sell. Sales of mandazis (and mango and coconut, to a lesser extent) bring in some cash for villagers.
A treasured food is, in fact, the coconut. This set of pictures shows how coconut is used in the village. True, coconut oil now is reserved only for frying mandazi. But it also is used as a hair conditioner, and the coconut meat is eaten between meals. I noted also that dental hygiene and health were good in the village. Perhaps the coconut and fish oils influence this (as per the work of Dr. Weston A. Price).
Photos L-R: Using a traditional conical basket (kikatu), coconut milk is pressed from the grated meat; Straining coconut milk from the grated meat, which is then heated to make oil; Common breakfast food (and the main source of cash income), the mandazi, is still cooked in coconut oil
Note: All photos were taken by G. Berardi
Thursday, February 19, 2009
Despite maize standing in the fields, it is widely known that farmers are hoarding stocks in many districts. Farmers are refusing the NCPB/government price of Sh1,950 per 90-kg bag. They are waiting to be offered at least the same amount of money as was being assigned to imports (Bii, 2009b). “The country will continue to experience food shortages unless the Government addresses the high cost of farm inputs to motivate farmers to increase production,” said Mr. Jonathan Bii of Uasin Gishu (Bartoo & Lucheli, 2009; Bii, 2009a, 2009b; Bungee, 2009).
Pride and politics, racism and corruption are to blame for food deficits (Kihara & Marete, 2009; KNA, 2009; Muluka, 2009; Siele, 2009). Clearly, what are needed in Kenya are food system planning, disaster management planning, and protection and development of agricultural and rural economies.
Click here for the full text.
Photos taken by G. Berardi
Cabbage, an imported food (originally), and susceptible to much pest damage.
Camps still remain for Kenya’s Internally Displaced Persons, the result of forced migrations following post-election violence. Food security is poor.
The lack of sustained short rains recently has resulted in failed maize harvests.
Friday, January 16, 2009
Today I went to a lunchtime discussion of sustainability. This concept promotes development with an equitable eye to the triple bottom line - financial, social, and ecological costs. We discussed how it seemed relatively easier to talk about the connections between financial and ecological costs than between social costs and the other costs. Sustainable development often comes down to "green" designs that consider environmental impacts, or to critiques of the capitalist model of financing.
As I thought about sustainable development, or sustainable community management if you are a bit queasy with the feasibility of continuous expansion, I considered its corollaries in the field of disaster risk reduction. It struck me again that it is somewhat easier to focus on some components of the triple bottom line in relation to disasters.
The vulnerability approach to disasters has rightly brought into focus the fact that not all people are equally exposed to or impacted by disasters. Rather, it is often the poor or socially marginalized most at risk and least able to recover. This approach certainly brings into focus the social aspects of disasters.
The disaster trap theory, likewise, brings into focus the financial bottom line. This perspective is most often discussed in international development and disaster reduction circles. It argues that disasters destroy development gains and cause communities to de-develop unless both disaster reduction and development occur in tandem. Building a cheaper, non-earthquake-resistant school in an earthquake zone may make short-term financial sense. However, over the long term, this approach is likely to result in loss of physical infrastructure, human life, and learning opportunities when an earthquake does occur.
What seems least developed to me, though I would enjoy being rebutted, is the ecological bottom line of disasters. Perhaps it is an oxymoron to discuss the ecological costs of disasters, given that many disasters are triggered by natural ecological processes like cyclones, forest fires, and floods. It might also be an oxymoron simply because a natural hazard disaster is really looking at an ecological event from an almost exclusively human perspective. It's not a disaster if it doesn't destroy human lives and human infrastructure. But the lunchtime discussion made me wonder if there wasn't something of an ecological bottom line to disasters in there somewhere. Perhaps it lies in the difference between an ecological process heavily or lightly impacted by human ecological modification. Is a forest fire in a heavily managed forest different from one in an unmanaged forest? Certainly logging can heighten the impacts of heavy rains by inducing landslides, resulting in a landscape heavily rather than lightly impacted by the rains. Similar processes might also be true in the case of heavily managed floodplains. Flooding is concentrated and increased in areas outside of levee systems. What does that mean for the ecology of these locations? Does a marsh manage just as well in low flooding as in high? My guess would be no.
And of course, there is the big, looming disaster of climate change. This is a human-induced change that may prove quite disastrous to many an ecological system, everything from our pine forests here to arctic wildlife and tropical coral reefs.
Perhaps, we disaster researchers, need to also consider a triple bottom line when making arguments for the benefits of disaster risk reduction.
Tuesday, January 13, 2009
This past week the Northwest experienced a severe barrage of weather systems back to back. Everyone seemed to be affected. Folks were re-routed on detours, got soaked, slipped on ice, or had to spend money to stay a little warmer. In Whatcom and Skagit Counties, there are hundreds to thousands of people currently in the process of recovering and cleaning-up after the floods. These people live in the rural areas throughout the county, with fewer people knowing about their devastation and having greater vulnerability to flood hazards.
Luckily, there are local agencies and non-profits who are ready at a moment’s call to help anyone in need. The primary organization that came to the aid of the flood victims was the American Red Cross.
Last week I began interning and volunteering with one of these non-profits, the Mt. Baker American Red Cross (ARC) Chapter. While I am still in the process of getting screened and officially trained, I received first-hand experience and saw how important this organization is to the community.
With the flood waters rising throughout the week, people were flooded out of their homes and rescued from the overflowing rivers and creeks. As the need for help increased, hundreds of ARC volunteers were called to service. Throughout the floods, several shelters have been opened to accommodate the needs of these flood victims. On Saturday I was asked to help staff one of these shelters overnight in Ferndale.
While I talked with parents and children, I became more aware of the stark reality these people face: recovering after all their possessions have been covered in sewage and mud and damaged by flood waters. In the meantime, these flood victims have their privacy exposed to others in a public shelter while they work to find stability amid the trauma of the events. As I sat talking and playing with the children, another thought struck me. Children are young and resilient, but it must be very difficult when they connect with a volunteer and then lose that connection soon after. Sharing a shelter with these folks over the weekend revealed a degree of reality and humanity in the situation that the news coverage never could.
I posted this bit about my volunteer experience because it made me realize something about my education and degree track in disaster reduction and emergency planning. We look at ways to create a more sustainable community, and we need to remember that community service is an important part of creating this ideal. Underlying sustainable development is the triple bottom line (social, economic, and environmental). Volunteers and non-profits are a major part of the social line of sustainability. Organizations like the American Red Cross only exist because of volunteers. So embrace President-elect Obama's call for a culture of civil service this coming week and make a commitment to the organization of your choice with your actions or even your pocketbook. Know that sustainable development cannot exist without social responsibility.
Thursday, January 8, 2009
It's been two days now that schools have been closed in Whatcom County, not for snow, but for rain and flooding. This unusual event coincides with record flooding throughout Western Washington, just a year after record flooding closed I-5 for three days and Lewis County businesses experienced what they then called an unprecedented 500-year flood. I guess not.
There are many strange things about flood risk notation, and this idea of a 500-year flood often trips people up. People often believe a flood of that size will happen only once in 500 years. On a probabilistic level, this is inaccurate. A 500-year flood simply has a 0.2% probability of happening each year. A more useful analogy might be to tell people they are rolling a 500-sided die every year and hoping that it doesn't come up with a 1. Next year they'll be forced to roll again.
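For readers who want to see the arithmetic behind that die-rolling analogy, here is a minimal sketch in Python; the 30- and 100-year horizons are illustrative choices of mine (a 30-year mortgage, a long lifetime), not figures from this post, and the calculation assumes each year is independent.

```python
# Chance of at least one "T-year" flood over a horizon of n years,
# assuming independent years with annual exceedance probability p = 1/T.
def prob_at_least_one_flood(return_period_years: float, horizon_years: int) -> float:
    p_annual = 1.0 / return_period_years            # e.g. 1/500 = 0.2% per year
    p_no_flood = (1.0 - p_annual) ** horizon_years  # no flood in any of the years
    return 1.0 - p_no_flood

# Illustrative horizons only; prints roughly 0.2%, 5.8%, 18%, and 63%.
for years in (1, 30, 100, 500):
    print(f"{years:>3} years: {prob_at_least_one_flood(500, years):.1%}")
```

Even over the full 500 years the flood is only about 63 percent likely to have shown up, and over a 30-year mortgage the odds are already around 6 percent, which is exactly why "once in 500 years" misleads.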
But this focus on misunderstandings of probability often hides an even larger societal misunderstanding: flood risk changes when we change the environment in which it occurs. If a flood map tells you that you are not in the flood plain, better check the date of the map. Most maps are utterly out of date and many vastly underestimate present flood risk. There are several reasons this happens. Urban development, especially development with a lot of parking lots and buildings that don't let water seep into the ground, causes rainwater to move quickly into rivers rather than seep into the ground and release slowly. Developers might counter that they are required to create runoff catchment wetlands when they do build. They do, but these requirements may very well be based upon outdated data on flood risk. Thus, each new development never fully compensates for its runoff, a small problem for each site but a mammoth problem when compounded downstream.
Deforestation can have the same effect, with the added potential for house-crushing and river-clogging mudslides. Timber harvesting is certainly an important industry in our neck of the woods. Not only is commercial logging an important source of jobs for many rural and small towns, logging on state Department of Natural Resources land is the major source of funding for K-12 education. Yet commercial logging, like other industries, suffers from a problem of cost externalization. When massive mudslides occurred during last year's storm, Weyerhaeuser complained that it wasn't its logging practices but an unprecedented, out-of-the-blue, 500-year storm that caused them. While it is doubtful the slides would have occurred on uncut land, that isn't the only fallacy. When the slides did occur, the costs of repairing roads, treatment plants, and bridges went to the county and were often passed on to the nation's taxpayers through state and federal recovery grants. Thus, what should have been paid by Weyerhaeuser, 500-year probability or not, was paid by someone else.
Finally, there is local government. Various folks within local governments set regulations for zoning, deciding what will be built and where. Here is the real crux of the problem. Local government also gets an increase in revenue in the form of property, sales, and business income taxes. Suppress the updating of flood plain maps, and you get a short-term profit and, often, a steady supply of happy voters. You might think these local governments will have to pay when the next big flood comes, but often that can be avoided. Certainly, they must comply with federal regulations on flood plain management to be part of the National Flood Insurance Program, but that program has significant leeway and little monitoring. Like commercial logging, disaster-stricken local governments can often push the recovery costs off to individual homeowners through the FEMA homeowner's assistance program, and off to state and federal agencies by receiving disaster recovery and community development grants and loans. Certainly, some communities are so regularly devastated, and have so few resources, that disasters simply knock them down again before they can stand up. But others have found loopholes and can profit by continuing to use old flood maps and failing to aggressively control flood plain development.
What is it going to take to really change this system and make it unprofitable to profit from bad land use management?
Here’s a good in-depth article on last year’s landslides in Lewis County. http://seattletimes.nwsource.com/html/localnews/2008048848_logging13m.html
An interesting article on the failure of best management practices in development catchment basins can be found here: Hur, J. et al. (2008) Does current management of storm water runoff adequately protect water resources in developing catchments? Journal of Soil and Water Conservation, 63(2), pp. 77-90.
Monday, December 29, 2008
It’s difficult to imagine a more colorful book, celebrating locally grown and locally marketed foods, than David Westerlund’s Simone Goes to the Market: A Children’s Book of Colors Connecting Face and Food. This book is aimed at families and the foods they eat. Who doesn’t want to know where their food is coming from – the terroir, the kind of microclimate it’s produced in, as well as who’s selling it? Gretchen sells her pole beans (purple), Maria her Serrano peppers (green), Dana and Matt sell their freshly-roasted coffee (black), Katie her carrots (orange), a blue poem from Matthew, brown potatoes from Roslyn, yellow patty pan squash from Jed, red tomatoes (soft and ripe) from Diana, and golden honey from Bill (and his bees). This is a book perfect for children of any age who want to connect to and with the food systems that sustain community. Order from [email protected].
|
<urn:uuid:e139d24e-7144-4cf8-866c-6066d64a435f>
|
CC-MAIN-2013-20
|
http://igcr.blogspot.com/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.962803
| 3,622
| 2.875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"disaster risk reduction",
"flood risk",
"food security"
],
"nature": [
"conservation",
"ecological",
"wetlands"
]
}
|
{
"strong": 4,
"weak": 3,
"total": 7,
"decision": "accepted_strong"
}
|
Buried inside Robert Bryce’s relatively new book entitled Power Hungry is a call to “aggressively pursue taxes or caps on the emissions of neurotoxins, particularly those that come from burning coal” to generate electricity, such as mercury and lead. This is notable not only because Bryce agrees with many environmental and human health experts, but also because the book credibly debunks the move to tax or cap carbon dioxide emissions from both technical and political perspectives.
The word “neurotoxic” literally translates as “nerve poison”. Broadly described, a neurotoxicant is any chemical substance which adversely acts on the structure or function of the human nervous system.
As its subtitle signals, Power Hungry also declares policies subsidizing renewable sources of electricity, biofuels and electric vehicles as too costly and impractical to make a significant difference in making the U.S. power and transportation systems more sustainable.
So why take aim at mercury and lead, when doing so is certain to drive up the cost of coal-fired electricity just as a carbon cap or tax would? Because, Bryce asserts, “arguing against heavy metal contaminants with known neurotoxicity will be far easier than arguing against carbon dioxide emissions. Cutting the output of mercury and the other heavy metals may, in the long run, turn out to have far greater benefits for the environmental and human health.” Bryce draws a parallel to the U.S. government ordering oil refiners to remove lead from gasoline starting in the 1970s.
In the book, which has received predominantly good reviews on Amazon.com, Bryce makes some valid points about the carbon density of our energy sources. Among his overarching messages is that the carbon density of the world’s major economies is actually declining (see graph below). Not to be missed: his attack on carbon sequestration, pp. 160-165. His case about the threat of neurotoxins begins on p. 167.
There’s a lot more to this challenge of reducing America’s reliance on coal-fired power plants than this. But considering the failure by the U.S. Congress to agree on a carbon tax or cap, his idea has serious merit and deserves a broad discussion, especially as Congress reassesses its budget priorities. This includes billions of dollars of tax breaks and incentives for oil and other fossil fuels.
|
<urn:uuid:ed7842f6-485f-401b-96c7-6ca3e6045411>
|
CC-MAIN-2013-20
|
http://www.theenergyfix.com/2011/05/07/tax-toxins-not-carbon-dioxide-from-coal-fired-power-plants/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955607
| 481
| 2.671875
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide",
"carbon sequestration"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Deaths in Moscow have doubled to an average of 700 people a day as the Russian capital is engulfed by poisonous smog from wildfires and a sweltering heat wave, a top health official said today, according to the Associated Press.
The Russian newspaper Pravda reported: “Moscow is suffocating. Thick toxic smog has been covering the sky above the city for days. The sun in Moscow looks like the moon during the day: it’s not that bright and yellow, but pale and orange with misty outlines against the smoky sky. Muscovites have to experience both the smog and sweltering heat at once.”
“Russia has recently seen the longest unprecedented heat wave for at least one thousand years,” said the head of the Russian Meteorological Center, the news site RIA Novosti reported.
Various news sites report that foreign embassies have reduced activities or shut down, with many staff leaving Moscow to escape the toxic atmosphere.
Russian heatwave: This NASA map released today shows areas of Russia experiencing above-average temperatures this summer (orange and red). The map was released on NASA’s Earth Observatory website.
NASA Earth Observatory image by Jesse Allen, based on MODIS land surface temperature data available through the NASA Earth Observations (NEO) Website. Caption by Michon Scott.
According to NASA:
In the summer of 2010, the Russian Federation had to contend with multiple natural hazards: drought in the southern part of the country, and raging fires in western Russia and eastern Siberia. The events all occurred against the backdrop of unusual warmth. Bloomberg reported that temperatures in parts of the country soared to 42 degrees Celsius (108 degrees Fahrenheit), and the Wall Street Journal reported that fire- and drought-inducing heat was expected to continue until at least August 12.
This map shows temperature anomalies for the Russian Federation from July 20-27, 2010, compared to temperatures for the same dates from 2000 to 2008. The anomalies are based on land surface temperatures observed by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. Areas with above-average temperatures appear in red and orange, and areas with below-average temperatures appear in shades of blue. Oceans and lakes appear in gray.
Not all parts of the Russian Federation experienced unusual warmth on July 20-27, 2010. A large expanse of northern central Russia, for instance, exhibits below-average temperatures. Areas of atypical warmth, however, predominate in the east and west. Orange- and red-tinged areas extend from eastern Siberia toward the southwest, but the most obvious area of unusual warmth occurs north and northwest of the Caspian Sea. These warm areas in eastern and western Russia continue a pattern noticeable earlier in July, and correspond to areas of intense drought and wildfire activity.
Bloomberg reported that 558 active fires covering 179,596 hectares (693 square miles) were burning across the Russian Federation as of August 6, 2010. Voice of America reported that smoke from forest fires around the Russian capital forced flight restrictions at Moscow airports on August 6, just as health officials warned Moscow residents to take precautions against the smoke inhalation.
Posted by David Braun
Earlier related post: Russia burns in hottest summer on record (July 28, 2010)
Talk about tough: These guys throw themselves out of 50-year-old aircraft into burning Siberian forests. (National Geographic Magazine feature, February 2008)
Photo by Mark Thiessen
Join Nat Geo News Watch community
Readers are encouraged to comment on this and other posts–and to share similar stories, photos and links–on the Nat Geo News Watch Facebook page. You must sign up to be a member of Facebook and a fan of the blog page to do this.
Leave a comment on this page
You may also email David Braun ([email protected]) if you have a comment that you would like to be considered for adding to this page. You are welcome to comment anonymously under a pseudonym.
|
<urn:uuid:10b103c1-284b-41c5-8dc9-bc9d1b7577ea>
|
CC-MAIN-2013-20
|
http://newswatch.nationalgeographic.com/2010/08/09/russia_chokes_as_fires_rage_worst_summer_ever/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.933289
| 827
| 2.84375
| 3
|
[
"climate"
] |
{
"climate": [
"drought",
"heatwave"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Time to think big
Did the designation of 2010 as the first-ever International Year of Biodiversity mean anything at all? Is it just a publicity stunt, with no engagement on the real, practical issues of conservation, asks Simon Stuart, Chair of IUCN’s Species Survival Commission.
Eight years ago 183 of the world’s governments committed themselves “to achieve by 2010 a significant reduction of the current rate of biodiversity loss at the global, regional and national level as a contribution to poverty alleviation and to the benefit of all life on Earth”. This was hardly visionary—the focus was not on stopping extinctions or loss of key habitats, but simply on slowing their rate of loss—but it was, at least, the first time the nations of the world had pledged themselves to any form of concerted attempt to face up to the ongoing degradation of nature.
Now the results of all the analyses of conservation progress since 2002 are coming in, and there is a unanimous finding: the world has spectacularly failed to meet the 2010 Biodiversity Target, as it is called. Instead species extinctions, habitat loss and the degradation of ecosystems are all accelerating. To give a few examples: declines and extinctions of amphibians due to disease and habitat loss are getting worse; bleaching of coral reefs is growing; and large animals in South-East Asia are moving rapidly towards extinction, especially from over-hunting and degradation of habitats.
This month the world’s governments will convene in Nagoya, Japan, for the Convention on Biological Diversity’s Conference of the Parties. Many of us hope for agreement there on new, much more ambitious biodiversity targets for the future. The first test of whether or not the 2010 International Year of Biodiversity means anything will be whether or not the international community can commit itself to a truly ambitious conservation agenda.
The early signs are promising. Negotiating sessions around the world have produced 20 new draft targets for 2020. Collectively these are nearly as strong as many of us hoped, and certainly much stronger than the 2010 Biodiversity Target. They include: halving the loss and degradation of forests and other natural habitats; eliminating overfishing and destructive fishing practices; sustainably managing all areas under agriculture, aquaculture and forestry; bringing pollution from excess nutrients and other sources below critical ecosystem loads; controlling pathways introducing and establishing invasive alien species; managing multiple pressures on coral reefs and other vulnerable ecosystems affected by climate change and ocean acidification; effectively protecting at least 15 per cent of land and sea, including the areas of particular importance for biodiversity; and preventing the extinction of known threatened species. We now have to keep up the pressure to prevent these from becoming diluted.
We at IUCN are pushing for urgent action to stop biodiversity loss once and for all. The well-being of the entire planet—and of people—depends on our committing to maintain healthy ecosystems and strong wildlife populations. We are therefore proposing, as a mission for 2020, “to have put in place by 2020 all the necessary policies and actions to prevent further biodiversity loss”. Examples include removing government subsidies which damage biodiversity (as many agricultural ones do), establishing new nature reserves in important areas for threatened species, requiring fisheries authorities to follow the advice of their scientists to ensure the sustainability of catches, and dramatically cutting carbon dioxide emissions worldwide to reduce the impacts of climate change and ocean acidification.
If the world makes a commitment along these lines, then the 2010 International Year of Biodiversity will have been about more than platitudes. But it will still only be a start: the commitment needs to be implemented. We need to look for signs this year of a real change from governments and society over the priority accorded to biodiversity.
One important sign will be the amount of funding that governments pledge this year for replenishing the Global Environment Facility (GEF), the world’s largest donor for biodiversity conservation in developing countries. Between 1991 and 2006, it provided approximately $2.2 billion in grants to support more than 750 biodiversity projects in 155 countries. If the GEF is replenished at much the same level as over the last decade we shall know that the governments are still in “business as usual” mode. But if it is doubled or tripled in size, then we shall know that they are starting to get serious.
IUCN estimates that even a tripling of funding would still fall far short of what is needed to halt biodiversity loss. Some conservationists have suggested that developed countries need to contribute 0.2 per cent of gross national income in overseas biodiversity assistance to achieve this. That would work out at roughly $120 billion a year—though of course this would need to come through a number of sources, not just the GEF. It is tempting to think that this figure is unrealistically high, but it is small change compared to the expenditures governments have committed to defence and bank bail outs.
It is time for the conservation movement to think big. We are addressing problems that are hugely important for the future of this planet and its people, and they will not be solved without a huge increase in funds.
|
<urn:uuid:2d3e80a0-ca7b-4358-80a9-0f5129e87a3e>
|
CC-MAIN-2013-20
|
http://cms.iucn.org/es/recursos/focus/enfoques_anteriores/cbd_2010/noticias/opinion/?6131/time-to-think-big
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.940236
| 1,055
| 3.296875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"climate change"
],
"nature": [
"biodiversity",
"biodiversity loss",
"conservation",
"ecosystem",
"ecosystems",
"habitat"
]
}
|
{
"strong": 7,
"weak": 1,
"total": 8,
"decision": "accepted_strong"
}
|
Opportunities and Challenges in High Pressure Processing of Foods
By Rastogi, N K; Raghavarao, K S M S; Balasubramaniam, V M; Niranjan, K; Knorr, D
Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, and free from additives and preservatives. This demand has triggered the need for the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and even spore-forming bacteria can be inactivated by the application of pressure-thermal combinations. This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with gamma irradiation, alternating current, ultrasound, and carbon dioxide or anti-microbial treatment. Further, the applications of this technology in various sectors (fruits and vegetables, dairy, and meat processing) have been dealt with extensively. The integration of high pressure with other mature processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing/thawing, and solid-liquid extraction has been shown to open up new processing options. The key challenges identified include heat transfer problems and resulting non-uniformity in processing, obtaining reliable and reproducible data for process validation, a lack of detailed knowledge about the interactions between high pressure and a number of food constituents, and packaging and statutory issues.
Keywords: high pressure, food processing, non-thermal processing
Consumers demand high quality and convenient products with natural flavor and taste, and greatly appreciate the fresh appearance of minimally processed food. Besides, they look for safe and natural products without additives such as preservatives and humectants. In order to harmonize or blend all these demands without compromising the safety of the products, it is necessary to implement newer preservation technologies in the food industry. Although the fact that "high pressure kills microorganisms and preserves food" was discovered way back in 1899 and has been used with success in the chemical, ceramic, carbon allotropy, steel/alloy, composite materials, and plastic industries for decades, it was only in the late 1980s that its commercial benefits became available to the food processing industries. High pressure processing (HPP) is similar in concept to cold isostatic pressing of metals and ceramics, except that it demands much higher pressures, faster cycling, high capacity, and sanitation (Zimmerman and Bergman, 1993; Mertens and Deplace, 1993). Hite (1899) investigated the application of high pressure as a means of preserving milk, and later extended the study to preserve fruits and vegetables (Hite, Giddings, and Weakly, 1914). It then took almost eighty years for Japan to re-discover the application of high pressure in food processing. The use of this technology has come about so quickly that it took only three years for two Japanese companies to launch products processed using this technology. The ability of high pressure to inactivate microorganisms and spoilage-catalyzing enzymes, whilst retaining other quality attributes, has encouraged Japanese and American food companies to introduce high pressure processed foods in the market (Mermelstein, 1997; Hendrickx, Ludikhuyze, Broeck, and Weemaes, 1998). The first high pressure processed foods were introduced to the Japanese market in 1990 by Meidi-ya, which has been marketing a line of jams, jellies, and sauces packaged and processed without application of heat (Thakur and Nelson, 1998). Other products include fruit preparations, fruit juices, rice cakes, and raw squid in Japan; fruit juices, especially apple and orange juice, in France and Portugal; and guacamole and oysters in the USA (Hugas, Garriga, and Monfort, 2002). In addition to food preservation, high-pressure treatment can result in food products acquiring novel structure and texture, and hence can be used to develop new products (Hayashi, 1990) or increase the functionality of certain ingredients. Depending on the operating parameters and the scale of operation, the cost of high-pressure treatment is typically around US$ 0.05-0.5 per liter or kilogram, the lower value being comparable to the cost of thermal processing (Thakur and Nelson, 1998; Balasubramaniam, 2003).
The non-availability of suitable equipment encumbered early applications of high pressure. However, recent progress in equipment design has ensured worldwide recognition of the potential for such a technology in food processing (Gould, 1995; Galazka and Ledward, 1995; Balci and Wilbey, 1999). Today, high-pressure technology is acknowledged to have the promise of producing a very wide range of products, whilst simultaneously showing potential for creating a new generation of value-added foods. In general, high-pressure technology can supplement conventional thermal processing for reducing microbial load, or substitute the use of chemical preservatives (Rastogi, Subramanian, and Raghavarao, 1994).
Over the past two decades, this technology has attracted considerable research attention, mainly relating to: i) the extension of keeping quality (Cheftel, 1995; Farkas and Hoover, 2001), ii) changing the physical and functional properties of food systems (Cheftel, 1992), and iii) exploiting the anomalous phase transitions of water under extreme pressures, e.g. lowering of freezing point with increasing pressures (Kalichevsky, Knorr, and Lillford, 1995; Knorr, Schlueter, and Heinz, 1998). The key advantages of this technology can be summarized as follows:
1. it enables food processing at ambient temperature or even lower temperatures;
2. it enables instant transmittance of pressure throughout the system, irrespective of size and geometry, thereby making size reduction optional, which can be a great advantage;
3. it causes microbial death whilst virtually eliminating heat damage and the use of chemical preservatives/additives, thereby leading to improvements in the overall quality of foods; and
4. it can be used to create ingredients with novel functional properties.
The effect of high pressure on microorganisms and proteins/ enzymes was observed to be similar to that of high temperature. As mentioned above, high pressure processing enables transmittance of pressure rapidly and uniformly throughout the food. Consequently, the problems of spatial variations in preservation treatments associated with heat, microwave, or radiation penetration are not evident in pressure-processed products. The application of high pressure increases the temperature of the liquid component of the food by approximately 3C per 100 MPa. If the food contains a significant amount of fat, such as butter or cream, the temperature rise is greater (8-9C/100 MPa) (Rasanayagam, Balasubramaniam, Ting, Sizer, Bush, and Anderson, 2003). Foods cool down to their original temperature on decompression if no heat is lost to (or gained from) the walls of the pressure vessel during the holding stage. The temperature distribution during the pressure-holding period can change depending on heat transfer across the walls of the pressure vessel, which must be held at the desired temperature for achieving truly isothermal conditions. In the case of some proteins, a gel is formed when the rate of compression is slow, whereas a precipitate is formed when the rate is fast. High pressure can cause structural changes in structurally fragile foods containing entrapped air such as strawberries or lettuce. Cell deformation and cell damage can result in softening and cell serum loss. Compression may also shift the pH depending on the imposed pressure. Heremans (1995) indicated a lowering of pH in apple juice by 0.2 units per 100 MPa increase in pressure. In combined thermal and pressure treatment processes, Meyer (2000) proposed that the heat of compression could be used effectively, since the temperature of the product can be raised from 70-90C to 105-120C by a compression to 700 MPa, and brought back to the initial temperature by decompression.
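As a rough illustration of the heat-of-compression figures quoted above, the short Python sketch below estimates product temperature after compression; the specific initial temperatures and the fat-rich rate of 8.5C per 100 MPa are assumptions chosen within the ranges given in the text, not values from any cited study.

```python
# Approximate adiabatic temperature rise on compression, using the linear
# heat-of-compression rates quoted in the text (about 3 C/100 MPa for
# water-like foods, 8-9 C/100 MPa for fat-rich foods). Real rates vary with
# composition, temperature, and pressure, so this is only a rough estimate.
def temp_after_compression(initial_temp_c: float, pressure_mpa: float,
                           rate_c_per_100mpa: float = 3.0) -> float:
    return initial_temp_c + (pressure_mpa / 100.0) * rate_c_per_100mpa

# A water-like food pre-heated to 90 C and compressed to 700 MPa reaches about
# 111 C, consistent with the pressure-assisted heating idea attributed to Meyer (2000).
print(temp_after_compression(90.0, 700.0, rate_c_per_100mpa=3.0))
# A fat-rich food at 20 C compressed to 600 MPa would reach roughly 71 C.
print(temp_after_compression(20.0, 600.0, rate_c_per_100mpa=8.5))
```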
As a thermodynamic parameter, pressure has far-reaching effects on the conformation of macromolecules, the transition temperature of lipids and water, and a number of chemical reactions (Cheftel, 1992; Tauscher, 1995). Phenomena that are accompanied by a decrease in volume are enhanced by pressure, and vice-versa (principle of Le Chatelier). Thus, under pressure, reaction equilibria are shifted towards the most compact state, and the reaction rate constant is increased or decreased, depending on whether the "activation volume" of the reaction (i.e. volume of the activation complex less volume of reactants) is negative or positive. It is likely that pressure also inhibits the availability of the activation energy required for some reactions, by affecting some other energy-releasing enzymatic reactions (Farr, 1990). The compression energy of 1 litre of water at 400 MPa is 19.2 kJ, as compared to 20.9 kJ for heating 1 litre of water from 20 to 25C. The low energy levels involved in pressure processing may explain why covalent bonds of food constituents are usually less affected than weak interactions. Pressure can influence most biochemical reactions, since they often involve a change in volume. High pressure controls certain enzymatic reactions. Unlike that of temperature, the effect of high pressure on proteins/enzymes is reversible in the range 100-400 MPa and is probably due to conformational changes and sub-unit dissociation and association processes (Morild, 1981).
For both the pasteurization and sterilization processes, a combined treatment of high pressure and temperature is frequently considered to be most appropriate (Farr, 1990; Patterson, Quinn, Simpson, and Gilmour, 1995). Vegetative cells, including yeasts and moulds, are pressure sensitive, i.e. they can be inactivated by pressures of ~300-600 MPa (Knorr, 1995; Patterson, Quinn, Simpson, and Gilmour, 1995). At high pressures, microbial death is considered to be due to permeabilization of the cell membrane. For instance, it was observed that in the case of Saccharomyces cerevisiae, at pressures of about 400 MPa, the structure and cytoplasmic organelles were grossly deformed and large quantities of intracellular material leaked out, while at 500 MPa, the nucleus could no longer be recognized, and the loss of intracellular material was almost complete (Farr, 1990). Changes that are induced in the cell morphology of the microorganisms are reversible at low pressures, but irreversible at higher pressures where microbial death occurs due to permeabilization of the cell membrane. An increase in process temperature above ambient temperature, and to a lesser extent, a decrease below ambient temperature, increases the inactivation rates of microorganisms during high pressure processing. Temperatures in the range 45 to 50C appear to increase the rate of inactivation of pathogens and spoilage microorganisms. Preservation of acid foods (pH ≤ 4.6) is, therefore, the most obvious application of HPP as such. Moreover, pasteurization can be performed even under chilled conditions for heat-sensitive products. Low temperature processing can help to retain the nutritional quality and functionality of the raw materials treated, and could allow maintenance of low temperature during the post-harvest treatment, processing, storage, transportation, and distribution periods of the life cycle of the food system (Knorr, 1995).
Bacterial spores are highly pressure resistant, since pressures exceeding 1200 MPa may be needed for their inactivation (Knorr, 1995). Initiation of germination, inhibition of germinated bacterial spores, and inactivation of pressure-resistant microorganisms can be achieved in combination with moderate heating or other pretreatments such as ultrasound. Process temperatures in the range 90-121C in conjunction with pressures of 500-800 MPa have been used to inactivate spore-forming bacteria such as Clostridium botulinum. Thus, sterilization of low-acid foods (pH > 4.6) will most probably rely on a combination of high pressure and other forms of relatively mild treatment.
High-pressure application leads to the effective reduction of the activity of food quality related enzymes (oxidases), which ensures high quality and shelf-stable products. Sometimes, food constituents offer piezo-resistance to enzymes. Further, high pressure affects only non-covalent bonds (hydrogen, ionic, and hydrophobic bonds), causes unfolding of protein chains, and has little effect on chemical constituents associated with desirable food qualities such as flavor, color, or nutritional content. Thus, in contrast to thermal processing, the application of high pressure causes negligible impairment of nutritional value, taste, color, flavor, or vitamin content (Hayashi, 1990). Small molecules such as amino acids, vitamins, and flavor compounds remain unaffected by high pressure, while the structure of large molecules such as proteins, enzymes, polysaccharides, and nucleic acids may be altered (Balci and Wilbey, 1999).
High pressure reduces the rate of the browning (Maillard) reaction. It consists of two stages: the condensation reaction of amino compounds with carbonyl compounds, and successive browning reactions including melanoidin formation and polymerization processes. The condensation reaction shows no acceleration under high pressure (5-50 MPa at 50C); rather, pressure suppresses the generation of stable free radicals derived from melanoidin, which are responsible for the browning reaction (Tamaoka, Itoh, and Hayashi, 1991). Gels induced by high pressure are found to be more glossy and transparent because of the rearrangement of water molecules surrounding amino acid residues in a denatured state (Okamoto, Kawamura, and Hayashi, 1990).
The capability and limitations of HPP have been extensively reviewed (Thakur and Nelson, 1998; Smelt, 1998; Cheftel, 1995; Knorr, 1995; Farr, 1990; Tiwari, Jayas, and Holley, 1999; Cheftel, Levy, and Dumay, 2000; Messens, Van Camp, and Huyghebaert, 1997; Otero and Sanz, 2000; Hugas, Garriga, and Monfort, 2002; Lakshmanan, Piggott, and Paterson, 2003; Balasubramaniam, 2003; Matser, Krebbers, Berg, and Bartels, 2004; Hogan, Kelly, and Sun, 2005; Mor-Mur and Yuste, 2005). Many of the early reviews primarily focused on the microbial efficacy of high-pressure processing. This review comprehensively covers the different types of products processed by high-pressure technology alone or in combination with other processes. It also discusses the effect of high pressure on food constituents such as enzymes and proteins. The applications of this technology in the fruit and vegetable, dairy, and animal product processing industries are covered. The effects of combining high-pressure treatment with other processing methods such as gamma irradiation, alternating current, ultrasound, carbon dioxide, and antimicrobial peptides have also been described. Special emphasis has been given to opportunities and challenges in high pressure processing of foods, which can potentially be explored and exploited.
EFFECT OF HIGH PRESSURE ON ENZYMES AND PROTEINS
Enzymes are a special class of proteins in which biological activity arises from active sites, brought together by the three-dimensional configuration of the molecule. Changes in the active site or protein denaturation can lead to loss of activity, or change the functionality of the enzymes (Tsou, 1986). In addition to conformational changes, enzyme activity can be influenced by pressure-induced decompartmentalization (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996). Pressure-induced damage of membranes facilitates enzyme-substrate contact. The resulting reaction can either be accelerated or retarded by pressure (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996; Morild, 1981). Hendrickx, Ludikhuyze, Broeck, and Weemaes (1998) and Ludikhuyze, Van Loey, and Indrawati et al. (2003) reviewed the combined effect of pressure and temperature on enzymes related to the quality of fruits and vegetables, which comprises kinetic information as well as process engineering aspects.
Pectin methylesterase (PME) is an enzyme which normally tends to lower the viscosity of fruit products and adversely affect their texture. Hence, its inactivation is a prerequisite for the preservation of such products. Commercially, fruit products containing PME (e.g. orange juice and tomato products) are heat pasteurized to inactivate PME and prolong shelf life. However, heating can deteriorate the sensory and nutritional quality of the products. Basak and Ramaswamy (1996) showed that the inactivation of PME in orange juice was dependent on pressure level, pressure-hold time, pH, and total soluble solids. An instantaneous pressure kill was dependent only on pressure level, with a secondary inactivation effect dependent on holding time at each pressure level. Nienaber and Shellhammer (2001) studied the kinetics of PME inactivation in orange juice over a range of pressures (400-600 MPa) and temperatures (25-50C) for various process holding times. PME inactivation followed a first-order kinetic model, with a residual activity of pressure-resistant enzyme. Calculated D-values ranged from 4.6 to 117.5 min at 600 MPa/50C and 400 MPa/25C, respectively. Pressures in excess of 500 MPa resulted in sufficiently faster inactivation rates for economic viability of the process. Binh, Van Loey, Fachin, Verlent, Indrawati, and Hendrickx (2002a, 2002b) studied the kinetics of inactivation of strawberry PME. The combined effect of pressure and temperature on inactivation kinetics followed a fractional-conversion model. Purified strawberry PME was more stable toward high-pressure treatments than PME from oranges and bananas. Ly-Nguyen, Van Loey, Fachin, Verlent, and Hendrickx (2002) showed that the inactivation of the banana PME enzyme during heating at temperatures between 65 and 72.5C followed first-order kinetics, and the effect of pressure treatment at 600-700 MPa and 10C could be described using a fractional-conversion model. Stoforos, Crelier, Robert, and Taoukis (2002) demonstrated that under ambient pressure, tomato PME inactivation rates increased with temperature, and the highest rate was obtained at 75C. The inactivation rates were dramatically reduced as soon as the processing pressure was raised, and high inactivation rates were obtained again only at pressures higher than 700 MPa. Riahi and Ramaswamy (2003) studied high-pressure inactivation kinetics of PME isolated from a variety of sources and showed that PME from a microbial source was more resistant to pressure inactivation than that from orange peel. Almost a full decimal reduction in activity of commercial PME was achieved at 400 MPa within 20 min.
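To make the first-order kinetics concrete, the following Python sketch converts a D-value into residual enzyme activity; the D-value of 4.6 min is the one reported above for orange-juice PME at 600 MPa and 50C, while the 5% pressure-resistant fraction is purely an illustrative assumption standing in for the fractional-conversion behaviour described for several enzymes.

```python
# First-order inactivation written with a decimal reduction time (D-value):
# log10(A/A0) = -t/D, so A/A0 = 10**(-t/D). An optional resistant fraction
# approximates fractional-conversion behaviour (a pressure-stable enzyme fraction).
def residual_activity(t_min: float, d_value_min: float,
                      resistant_fraction: float = 0.0) -> float:
    labile = (1.0 - resistant_fraction) * 10.0 ** (-t_min / d_value_min)
    return labile + resistant_fraction

# D = 4.6 min (reported for orange-juice PME at 600 MPa/50 C);
# the 5% pressure-resistant fraction below is illustrative only.
for t in (0, 5, 10, 20):
    print(f"t = {t:>2} min -> residual activity {residual_activity(t, 4.6, 0.05):.3f}")
```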
Verlent, Van Loey, Smout, Duvetter, Nguyen, and Hendrickx (2004) indicated that the optimal temperature for tomato pectin methylesterase was shifted to higher values at elevated pressure compared to atmospheric pressure, creating possibilities for rheology improvements by the application of high pressure.
Castro, Van Loey, Saraiva, Smout, and Hendrickx (2006) accurately described the inactivation of the labile fraction under mild-heat and high-pressure conditions by a fractional conversion model, while a biphasic model was used to estimate the inactivation rate constant of both the fractions at more drastic conditions of temperature/ pressure (10-64C, 0.1-800 MPa). At pressures lower than 300 MPa and temperatures higher than 54C, an antagonistic effect of pressure and temperature was observed.
Balogh, Smout, Binh, Van Loey, and Hendrickx (2004) observed the inactivation kinetics of carrot PME to follow first-order kinetics over a range of pressure and temperature (650-800 MPa, 10-40C). Enzyme stability under heat and pressure was reported to be lower in carrot juice and purified PME preparations than in carrots.
The presence of pectinesterase (PE) reduces the quality of citrus juices by destabilization of clouds. Generally, the inactivation of the enzyme is accomplished by heat, resulting in a loss of fresh fruit flavor in the juice. High pressure processing can be used to bypass the use of extreme heat for the processing of fruit juices. Goodner, Braddock, and Parish (1998) showed that higher pressures (>600 MPa) caused instantaneous inactivation of the heat-labile form of the enzyme but did not inactivate the heat-stable form of PE in orange and grapefruit juices. PE activity was totally lost in orange juice, whereas complete inactivation was not possible in grapefruit juice. Orange juice pressurized at 700 MPa for 1 min had no cloud loss for more than 50 days. Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000) studied the combined pressure-temperature inactivation of the labile fraction of orange PE over a range of pressures (0.1 to 900 MPa) and temperatures (15 to 65C). The pressure and temperature dependence of the inactivation rate constants of the labile fraction was quantified using the well-known Eyring and Arrhenius relations. The stable fraction was inactivated at temperatures higher than 75C. Acidification (pH 3.7) enhanced the thermal inactivation of the stable fraction, whereas the addition of Ca2+ ions (1 M) suppressed inactivation. At elevated pressure (up to 900 MPa), an antagonistic effect of pressure and temperature on inactivation of the stable fraction was observed. Ly-Nguyen, Van Loey, Smout, Ozcan, Fachin, Verlent, Vu-Truong, Duvetter, and Hendrickx (2003) investigated the combined heat and pressure treatments on the inactivation of purified carrot PE, which followed a fractional-conversion model. The thermally stable fraction of the enzyme could not be inactivated. At lower pressures (<300 MPa) and higher temperatures (>50C), an antagonistic effect of pressure and heat was observed.
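The Eyring and Arrhenius relations mentioned above can be combined into a single expression for the inactivation rate constant as a function of pressure and temperature. The Python sketch below shows the general form only; the reference rate constant, activation energy, and activation volume are placeholder values for illustration, not the parameters fitted by Broeck et al. (2000).

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

# k(T, P) combining the Arrhenius (activation energy Ea) and Eyring
# (activation volume Va) dependencies around a reference condition.
def inactivation_rate_constant(temp_k: float, pressure_mpa: float,
                               k_ref: float, temp_ref_k: float, pressure_ref_mpa: float,
                               ea_j_per_mol: float, va_cm3_per_mol: float) -> float:
    arrhenius_term = -(ea_j_per_mol / R) * (1.0 / temp_k - 1.0 / temp_ref_k)
    delta_p_pa = (pressure_mpa - pressure_ref_mpa) * 1.0e6   # MPa -> Pa
    va_m3 = va_cm3_per_mol * 1.0e-6                          # cm^3/mol -> m^3/mol
    eyring_term = -va_m3 * delta_p_pa / (R * temp_k)
    return k_ref * math.exp(arrhenius_term + eyring_term)

# Placeholder parameters: a negative activation volume means that raising the
# pressure speeds up inactivation of the labile enzyme fraction.
k = inactivation_rate_constant(temp_k=318.15, pressure_mpa=600.0,
                               k_ref=0.01, temp_ref_k=308.15, pressure_ref_mpa=400.0,
                               ea_j_per_mol=1.0e5, va_cm3_per_mol=-30.0)
print(f"k at 45 C and 600 MPa: {k:.3f} min^-1")
```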
High pressure induces conformational changes in polygalacturonase (PG), causing reduced substrate binding affinity and enzyme inactivation. Eun, Seok, and Wan (1999) studied the effect of high-pressure treatment on PG from Chinese cabbage to prevent the softening and spoilage of plant-based foods such as kimchi without compromising quality. PG was inactivated by the application of pressures higher than 200 MPa for 1 min. Fachin, Van Loey, Indrawati, Ludikhuyze, and Hendrickx (2002) investigated the stability of tomato PG at different temperatures and pressures. The combined pressure-temperature inactivation (300-600 MPa, 5-50C) of tomato PG was described by a fractional-conversion model, which points to first-order inactivation kinetics of a pressure-sensitive enzyme fraction and to the occurrence of a pressure-stable PG fraction. Fachin, Smout, Verlent, Binh, Van Loey, and Hendrickx (2004) indicated that over the combined pressure-temperature range (5-55C/100-600 MPa), the inactivation of the heat-labile portion of purified tomato PG followed first-order kinetics. The heat-stable fraction of the enzyme showed pressure stability very similar to that of the heat-labile portion.
Peeters, Fachin, Smout, Van Loey, and Hendrickx (2004) demonstrated that the effect of high pressure was identical on the heat-stable and heat-labile fractions of tomato PG. The isoenzyme of PG was detected in thermally treated (140C for 5 min) tomato pieces and tomato juice, whereas no PG was found in pressure-treated tomato juice or pieces.
Verlent, Van Loey, Smout, Duvetter, and Hendrickx (2004) investigated the effect of high pressure (0.1 and 500 MPa) and temperature (25-80C) on purified tomato PG. At atmospheric pressure, the optimum temperature for the enzyme was found to be 55-60C, and it decreased with an increase in pressure. The enzyme activity was reported to decrease with an increase in pressure at a constant temperature.
Shook, Shellhammer, and Schwartz (2001) studied the ability of high pressure to inactivate lipoxygenase, PE, and PG in diced tomatoes. Processing conditions used were 400, 600, and 800 MPa for 1, 3, and 5 min at 25 and 45C. The magnitude of the applied pressure had a significant effect in inactivating lipoxygenase and PG, with complete loss of activity occurring at 800 MPa. PE was very resistant to the pressure treatment.
Polyphenoloxidase and Peroxidase
Polyphenoloxidase (PPO) and peroxidase (POD), the enzymes responsible for color and flavor loss, can be selectively inactivated by a combined treatment of pressure and temperature. Gomes and Ledward (1996) studied the effects of pressure treatment (100-800 MPa for 1-20 min) on commercial PPO enzyme available from mushrooms, potatoes, and apples. Castellari, Matricardi, Arfelli, Rovere, and Amati (1997) demonstrated that there was a limited inactivation of grape PPO using pressures between 300 and 600 MPa. At 900 MPa, a low level of PPO activity was apparent. In order to reach complete inactivation, it may be necessary to use high-pressure processing treatments in conjunction with a mild thermal treatment (40-50C). Weemaes, Ludikhuyze, Broeck, and Hendrickx (1998) studied the pressure stabilities of PPO from apples, avocados, grapes, pears, and plums at pH 6-7. These PPOs differed in pressure stability. Inactivation of PPO from apple, grape, avocado, and pear at room temperature (25C) became noticeable at approximately 600, 700, 800, and 900 MPa, respectively, and followed first-order kinetics. Plum PPO was not inactivated at room temperature by pressures up to 900 MPa. Rastogi, Eshtiaghi, and Knorr (1999) studied the inactivation effects of high hydrostatic pressure treatment (100-600 MPa) combined with heat treatment (0-60C) on POD and PPO enzymes, in order to develop high pressure-processed red grape juice with a stable shelf-life. The studies showed that the lowest POD (55.75%) and PPO (41.86%) activities were found at 60C, with pressure at 600 and 100 MPa, respectively. MacDonald and Schaschke (2000) showed that for PPO, both temperature and pressure individually appeared to have similar effects, whereas the holding time was not significant. On the other hand, in the case of POD, temperature as well as the interaction between temperature and holding time had the greatest effect on activity. Namkyu, Seunghwan, and Kyung (2002) showed that mushroom PPO was highly pressure stable. Exposure to 600 MPa for 10 min reduced PPO activity by 7%; further exposure had no denaturing effect. Compression for 10 and 20 min up to 800 MPa reduced activity by 28 and 43%, respectively.
Rapeanu, Van Loey, Smout, and Hendrickx (2005) indicated that the thermal and/or high-pressure inactivation of grape PPO followed first-order kinetics. A third-degree polynomial described the temperature/pressure dependence of the inactivation rate constants. Pressure and temperature were reported to act synergistically, except in the high temperature (≥45C)-low pressure (≤300 MPa) region, where an antagonistic effect was observed.
Gomes, Sumner, and Ledward (1997) showed that the application of increasing pressures led to a gradual reduction in papain enzyme activity. A decrease in activity of 39% was observed when the enzyme solution was initially activated with phosphate buffer (pH 6.8) and subjected to 800 MPa at ambient temperature for 10 min, while 13% of the original activity remained when the enzyme solution was treated at 800 MPa at 60C for 10 min. In Tris buffer at pH 6.8 after treatment at 800 MPa and 20C, papain activity loss was approximately 24%. The inactivation of the enzyme is due to an induced change at the active site, causing loss of activity without major conformational changes; this loss of activity was attributed to oxidation of the thiolate ion present at the active site.
Weemaes, Cordt, Goossens, Ludikhuyze, Hendrickx, Heremans, and Tobback (1996) studied the effects of pressure and temperature on activity of 3 different alpha-amylases from Bacillus subtilis, Bacillus amyloliquefaciens, and Bacillus licheniformis. The changes in conformation of Bacillus licheniformis, Bacillus subtilis, and Bacillus amyloliquefaciens amylases occurred at pressures of 110, 75, and 65 MPa, respectively. Bacillus licheniformis amylase was more stable than amylases from Bacillus subtilis and Bacillus amyloliquefaciens to the combined heat/pressure treatment.
Riahi and Ramaswamy (2004) demonstrated that pressure inactivation of amylase in apple juice was significantly (P < 0.01) influenced by pH, pressure, holding time, and temperature. The inactivation was described using a bi-phasic model. The application of high pressure was shown to completely inactivate amylase. The importance of the pressure pulse and pressure hold approaches for inactivation of amylase was also demonstrated.
High pressure denatures protein depending on the protein type, processing conditions, and the applied pressure. During the process of denaturation, the proteins may dissolve or precipitate on the application of high pressure. These changes are generally reversible in the pressure range 100-300 MPa and irreversible for pressures higher than 300 MPa. Denaturation may be due to the destruction of hydrophobic and ion-pair bonds, and unfolding of molecules. At higher pressure, oligomeric proteins tend to dissociate into subunits, becoming vulnerable to proteolysis. Monomeric proteins do not show any changes in proteolysis with increase in pressure (Thakur and Nelson, 1998).
High-pressure effects on proteins are related to the rupture of non-covalent interactions within protein molecules, and to the subsequent reformation of intra- and inter-molecular bonds within or between the molecules. Different types of interactions contribute to the secondary, tertiary, and quaternary structure of proteins. The quaternary structure is mainly held by hydrophobic interactions that are very sensitive to pressure. Significant changes in the tertiary structure are observed beyond 200 MPa. However, a reversible unfolding of small proteins such as ribonuclease A occurs at higher pressures (400 to 800 MPa), showing that the volume and compressibility changes during denaturation are not completely dominated by the hydrophobic effect. Denaturation is a complex process involving intermediate forms leading to multiple denatured products. Secondary structure changes take place at very high pressures above 700 MPa, leading to irreversible denaturation (Balny and Masson, 1993).
Figure 1. General scheme for the pressure-temperature phase diagram of proteins (from Messens, Van Camp, and Huyghebaert, 1997).
When the pressure increases to about 100 MPa, the denaturation temperature of the protein increases, whereas at higher pressures, the temperature of denaturation usually decreases. This results in the elliptical phase diagram of native versus denatured protein shown in Fig. 1. A practical consequence is that under elevated pressures, proteins can denature at room temperature rather than only at higher temperatures. The phase diagram also specifies the pressure-temperature range in which the protein maintains its native structure. Zone I specifies that at high temperatures, a rise in denaturation temperature is found with increasing pressure. Zone II indicates that below the maximum transition temperature, protein denaturation occurs at lower temperatures under higher pressures. Zone III shows that below the temperature corresponding to the maximum transition pressure, protein denaturation occurs at lower pressures when lower temperatures are used (Messens, Van Camp, and Huyghebaert, 1997).
The application of high pressure has been shown to destabilize casein micelles in reconstituted skim milk, and the size distribution of spherical casein micelles decreases from 200 to 120 nm; maximum changes have been reported to occur between 150-400 MPa at 20C. The pressure treatment results in reduced turbidity and increased lightness, which leads to the formation of a virtually transparent skim milk (Shibauchi, Yamamoto, and Sagara, 1992; Derobry, Richard, and Hardy, 1994). The gels produced from high-pressure treated skim milk showed improved rigidity and gel breaking strength (Johnston, Austin, and Murphy, 1992). Garcia, Olano, Ramos, and Lopez (2000) showed that pressure treatment at 25C considerably reduced the micelle size, while pressurization at higher temperature progressively increased the micelle dimensions. Anema, Lowe, and Stockmann (2005) indicated that a small decrease in the size of casein micelles was observed at 100 MPa, with slightly greater effects at higher temperatures or longer pressure treatments. At pressures >400 MPa, the casein micelles disintegrated. The effect was more rapid at higher temperatures, although the final size was similar in all samples regardless of the pressure or temperature. At 200 MPa and 10C, the casein micelle size decreased slightly on heating, whereas at higher temperatures the size increased as a result of aggregation. Huppertz, Fox, and Kelly (2004a) showed that the size of casein micelles increased by 30% upon high-pressure treatment of milk at 250 MPa, and micelle size dropped by 50% at 400 or 600 MPa.
Huppertz, Fox, and Kelly (2004b) demonstrated that the high-pressure treatment of milk at 100-600 MPa resulted in considerable solubilization of alpha-s1- and beta-casein, which may be due to the solubilization of colloidal calcium phosphate and disruption of hydrophobic interactions. On storage of pressure-treated milk at 5C, dissociation of casein was largely irreversible, but at 20C, considerable re-association of casein was observed. The hydration of the casein micelles increased on pressure treatment (100-600 MPa) due to induced interactions between caseins and whey proteins. Pressure treatment increased levels of alpha-s1- and beta-casein in the soluble phase of milk and produced casein micelles with properties different to those in untreated milk. Huppertz, Fox, and Kelly (2004c) demonstrated that the casein micelle size was not influenced by pressures less than 200 MPa, but a pressure of 250 MPa increased the micelle size by 25%, while pressures of 300 MPa or greater irreversibly reduced the size to 50% of that in untreated milk. Denaturation of alpha-lactalbumin did not occur at pressures less than or equal to 400 MPa, whereas beta-lactoglobulin was denatured at pressures greater than 100 MPa.
Galazka, Ledward, Sumner, and Dickinson (1997) reported loss of surface hydrophobicity due to application of 300 MPa in dilute solution. Pressurizing beta-lactoglobulin at 450 MPa for 15 minutes resulted in reduced solubility in water. High-pressure treatment induced extensive protein unfolding and aggregation when BSA was pressurized at 400 MPa. Beta-lactoglobulin appears to be more sensitive to pressure than alpha-lactalbumin. Olsen, Ipsen, Otte, and Skibsted (1999) monitored the state of aggregation and thermal gelation properties of pressure-treated beta-lactoglobulin immediately after depressurization and after storage for 24 h at 50C. A pressure of 150 MPa applied for 30 min, or pressures higher than 300 MPa applied for 0 or 30 min, led to formation of soluble aggregates. When continued for 30 min, a pressure of 450 MPa caused gelation of the 5% beta-lactoglobulin solution. Iametti, Tansidico, Bonomi, Vecchio, Pittia, Rovere, and Dall'Aglio (1997) studied irreversible modifications in the tertiary structure, surface hydrophobicity, and association state of beta-lactoglobulin when solutions of the protein at neutral pH and at different concentrations were exposed to pressure. Only minor irreversible structural modifications were evident even for treatments as intense as 15 min at 900 MPa. The occurrence of irreversible modifications was time-dependent at 600 MPa but was complete within 2 min at 900 MPa. The irreversibly modified protein was soluble, but some covalent aggregates were formed. Subirade, Loupil, Allain, and Paquin (1998) showed the effect of dynamic high pressure on the secondary structure of beta-lactoglobulin. Thermal and pH sensitivity of pressure-treated beta-lactoglobulin was different, suggesting that the two forms were stabilized by different electrostatic interactions. Walker, Farkas, Anderson, and Goddik (2004) used high-pressure processing (510 MPa for 10 min at 8 or 24C) to induce unfolding of beta-lactoglobulin and characterized the protein structure and surface-active properties. The secondary structure of the protein processed at 8C appeared to be unchanged, whereas at 24C alpha-helix structure was lost. Tertiary structures changed due to processing at either temperature. Model solutions containing the pressure-treated beta-lactoglobulin showed a significant decrease in surface tension. Izquierdo, Alli, Gomez, Ramaswamy, and Yaylayan (2005) demonstrated that under high-pressure treatments (100-300 MPa), beta-lactoglobulin AB was completely hydrolyzed by pronase and alpha-chymotrypsin. Hinrichs and Rademacher (2005) showed that the denaturation kinetics of beta-lactoglobulin followed second-order kinetics, while alpha-lactalbumin followed a reaction order of 2.5. Alpha-lactalbumin was more resistant to denaturation than beta-lactoglobulin. The activation volume for denaturation of beta-lactoglobulin was reported to decrease with increasing temperature, and the activation energy increased with pressure up to 200 MPa, beyond which it decreased. This demonstrated the unfolding of the protein molecules.
Drake, Harrison, Asplund, Barbosa-Canovas, and Swanson (1997) demonstrated that the percentage moisture and wet-weight yield of cheese from pressure-treated milk were higher than those of pasteurized- or raw-milk cheese. The microbial quality was comparable, and some textural defects were reported due to the excess moisture content. Arias, Lopez, and Olano (2000) showed that high-pressure treatment at 200 MPa significantly reduced rennet coagulation times relative to control samples. Pressurization at 400 MPa led to coagulation times similar to those of the control, except for milk treated at pH 7.0, with or without readjustment of the pH to 6.7, which presented significantly longer coagulation times than its non-pressure-treated counterpart.
Hinrichs and Rademacher (2004) demonstrated that the isobaric (200-800 MPa) and isothermal (-2 to 70C) denaturation of beta-lactoglobulin and alpha-lactalbumin in whey protein followed 3rd- and 2nd-order kinetics, respectively. Isothermal pressure denaturation of beta-lactoglobulin A and B did not differ significantly, and an increase in temperature resulted in an increase in the denaturation rate. At pressures higher than 200 MPa, the denaturation rate was limited by the aggregation rate, while the pressure caused the unfolding of the molecules. The kinetic parameters of denaturation were estimated using a single-step non-linear regression method, which allowed a global fit of the entire data set. Huppertz, Fox, and Kelly (2004d) examined the high-pressure-induced denaturation of alpha-lactalbumin and beta-lactoglobulin in dairy systems. The higher level of pressure-induced denaturation of both proteins in milk compared to whey was attributed to the absence of casein micelles and colloidal calcium phosphate from the whey.
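As a rough illustration of how such reaction-order kinetics can be fitted, the sketch below regresses an integrated nth-order rate law on a synthetic residual-native-fraction series. The time points, fractions, and starting guesses are illustrative assumptions, not data from Hinrichs and Rademacher; the global-fit procedure used in the original work is more elaborate than this single-curve example.

```python
# Minimal sketch: fitting an nth-order denaturation model to one curve.
# Integrated rate law (n != 1): C(t) = [C0^(1-n) + (n-1)*k*t]^(1/(1-n))
# All data below are illustrative placeholders, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def nth_order(t, k, n, c0=1.0):
    """Residual native fraction after time t for apparent reaction order n."""
    return (c0**(1.0 - n) + (n - 1.0) * k * t)**(1.0 / (1.0 - n))

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])    # treatment time, min (assumed)
c = np.array([1.00, 0.78, 0.64, 0.47, 0.31, 0.24])  # native fraction (assumed)

# Fit k and n only; C0 is held at 1.0 via the default argument.
(k_fit, n_fit), _ = curve_fit(nth_order, t, c, p0=[0.05, 2.0],
                              bounds=([0.0, 1.01], [10.0, 5.0]))
print(f"rate constant k = {k_fit:.4f}, apparent order n = {n_fit:.2f}")
```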
The conformation of BSA was reported to remain fairly stable at 400 MPa due to the high number of disulfide bonds, which are known to stabilize its three-dimensional structure (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Kieffer and Wieser (2004) indicated that the extension resistance and extensibility of wet gluten were markedly influenced by high pressure (up to 800 MPa), while the temperature and duration of the pressure treatment (30-80C for 2-20 min) had a relatively smaller effect. The application of high pressure resulted in a marked decrease in protein extractability due to the restructuring of disulfide bonds under high pressure, leading to the incorporation of alpha- and gamma-gliadins into the glutenin aggregate. A change in secondary structure following high-pressure treatment was also reported.
Pressure treatment of myosin led to head-to-head interaction to form oligomers (clumps), which became more compact and larger in size during storage at constant pressure. Even after pressure treatment at 210 MPa for 5 minutes, monomeric myosin molecules increased, and no gelation was observed for pressure treatments of up to 210 MPa for 30 minutes. Pressure treatment also did not affect the original helical structure of the tail in the myosin monomers. Angsupanich, Edde, and Ledward (1999) showed that high-pressure-induced denaturation of myosin led to the formation of structures that contained hydrogen bonds and were additionally stabilized by disulphide bonds.
Application of 750 MPa for 20 minutes resulted in dimerization of metmyoglobin in the pH range of 6-10, although maximum dimerization did not occur at the isoelectric pH (6.9). Under acidic pH conditions, no dimers were formed (Defaye and Ledward, 1995). Zipp and Kauzmann (1973) showed the formation of a precipitate when metmyoglobin was pressurized (750 MPa for 20 minutes) near the isoelectric point; the precipitate redissolved slowly during storage. Pressure treatment had no effect on lipid oxidation in minced meat packed in air at pressures below 300 MPa, while oxidation increased proportionally at higher pressures. Thus, on exposure to higher pressures, minced meat in contact with air oxidized rapidly. Pressures above 300-400 MPa caused marked denaturation of both myofibrillar and sarcoplasmic proteins in washed pork muscle and pork mince (Ananth, Murano, and Dickson, 1995). Chapleau and Lamballerie (2003) showed that high-pressure treatment induced a threefold increase in the surface hydrophobicity of myofibrillar proteins between 0 and 450 MPa. Chapleau, Mangavel, Compoint, and Lamballerie (2004) reported that high pressure modified the secondary structure of myofibrillar proteins extracted from cattle carcasses. Irreversible changes and aggregation were reported at pressures higher than 300 MPa, which can potentially affect the functional properties of meat products. Lamballerie, Perron, Jung, and Cheret (2003) indicated that high-pressure treatment increases cathepsin D activity, and that pressurized myofibrils are more susceptible to cathepsin D action than non-pressurized myofibrils. The highest cathepsin D activity was observed at 300 MPa. Carlez, Veciana, and Cheftel (1995) demonstrated that L (lightness) values increased significantly in meat treated at 200-350 MPa, the meat becoming pink, while the a-value decreased in meat treated at 400-500 MPa to give a grey-brown color. The total extractable myoglobin decreased in meat treated at 200-500 MPa, while the metmyoglobin content of the meat increased and the oxymyoglobin content decreased at 400-500 MPa. Meat discoloration from pressure processing resulted in a whitening effect at 200-300 MPa due to globin denaturation and/or haem displacement/release, and in oxidation of ferrous myoglobin to ferric myoglobin at pressures higher than 400 MPa.
The conformation of the main protein component of egg white, ovalbumin, remains fairly stable when pressurized at 400 MPa, possibly due to the four disulfide bonds and non-covalent interactions stabilizing the three-dimensional structure of ovalbumin (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Hayashi, Kawamura, Nakasa, and Okinada (1989) reported irreversible denaturation of egg albumin at 500-900 MPa with a concomitant increase in susceptibility to subtilisin. Zhang, Li, and Tatsumi (2005) demonstrated that pressure treatment (200-500 MPa) resulted in denaturation of ovalbumin. The surface hydrophobicity of ovalbumin was found to increase with increasing treatment pressure, and the presence of polysaccharide protected the protein against denaturation. Iametti, Donnizzelli, Pittia, Rovere, Squarcina, and Bonomi (1999) showed that the addition of NaCl or sucrose to egg albumin prior to high-pressure treatment (up to 10 min at 800 MPa) prevented insolubilization or gel formation after pressure treatment. As a consequence of protein unfolding, the treated albumin had increased viscosity but retained its foaming and heat-gelling properties. Farr (1990) reported the modification of the functionality of egg proteins. Egg yolk formed a gel when subjected to a pressure of 400 MPa for 30 minutes at 25C; the gel kept its original color and was soft and adhesive. The hardness of the pressure-treated gel increased, and its adhesiveness decreased, with an increase in pressure. Plancken, Van Loey, and Hendrickx (2005) showed that the application of high pressure (400-700 MPa) to egg white solution resulted in an increase in turbidity, surface hydrophobicity, exposed sulfhydryl content, and susceptibility to enzymatic hydrolysis, and in a decrease in protein solubility, total sulfhydryl content, denaturation enthalpy, and trypsin-inhibitory activity. The pressure-induced changes in these properties were shown to depend on the pressure-temperature combination and the pH of the solution. Speroni, Puppo, Chapleau, Lamballerie, Castellani, Anon, and Anton (2005) indicated that the application of high pressure (200-600 MPa) at 20C to low-density lipoproteins did not change their solubility, regardless of pH, whereas aggregation and protein denaturation were drastically enhanced at pH 8. Further, the application of high pressure under alkaline pH conditions resulted in decreased droplet flocculation of low-density lipoprotein dispersions.
The minimum pressure required for inducing gelation of soya proteins was reported to be 300 MPa for 10-30 minutes, and the gels formed were softer, with a lower elastic modulus, than heat-treated gels (Okamoto, Kawamura, and Hayashi, 1990). Treatment of soya milk at 500 MPa for 30 min changed it from a liquid to a solid state, whereas at lower pressures, and at 500 MPa for 10 minutes, the milk remained liquid but showed improved emulsifying activity and stability (Kajiyama, Isobe, Uemura, and Noguchi, 1995). The hardness of tofu gels produced by high-pressure treatment at 300 MPa for 10 minutes was comparable to that of heat-induced gels. Puppo, Chapleau, Speroni, Lamballerie, Michel, Anon, and Anton (2004) demonstrated that the application of high pressure (200-600 MPa) to soya protein isolate at pH 8.0 resulted in an increase in protein hydrophobicity and aggregation, a reduction in free sulfhydryl content, and a partial unfolding of the 7S and 11S fractions. A change in secondary structure leading to a more disordered structure was also reported. At pH 3.0, in contrast, the protein was partially denatured and insoluble aggregates were formed; the major molecular unfolding resulted in decreased thermal stability and increased protein solubility and hydrophobicity. Puppo, Speroni, Chapleau, Lamballerie, Anon, and Anton (2005) studied the effect of high pressure (200, 400, and 600 MPa for 10 min at 10C) on the emulsifying properties of soybean protein isolates at pH 3 and 8 (e.g. oil droplet size, flocculation, interfacial protein concentration, and composition). The application of pressures higher than 200 MPa at pH 8 resulted in a smaller droplet size and an increase in the level of depletion flocculation. A similar effect was not observed at pH 3. Due to the application of high pressure, bridging flocculation decreased and the percentage of adsorbed proteins increased, irrespective of the pH conditions. Moreover, the ability of the protein to adsorb at the oil-water interface increased. Zhang, Li, Tatsumi, and Isobe (2005) showed that high-pressure treatment resulted in the formation of more hydrophobic regions in soy protein, which dissociated into subunits that in some cases formed insoluble aggregates. High-pressure denaturation of beta-conglycinin (7S) and glycinin (11S) occurred at 300 and 400 MPa, respectively. The gels formed had the desirable strength and a cross-linked network microstructure.
Soybean whey is a by-product of tofu manufacture. It is a good source of peptides, proteins, oligosaccharides, and isoflavones, and can be used in special foods for elderly persons, athletes, etc. Prestamo and Penas (2004) studied the antioxidative activity of soybean whey proteins and their pepsin and chymotrypsin hydrolysates. The chymotrypsin hydrolysate showed a higher antioxidative activity than the non-hydrolyzed protein, but the pepsin hydrolysate showed the opposite trend. High-pressure processing at 100 MPa increased the antioxidative activity of soy whey protein but decreased the antioxidative activity of the hydrolysates. High-pressure processing also increased the pH of the protein hydrolysates. Penas, Prestamo, and Gomez (2004) demonstrated that the application of high pressure (100 and 200 MPa, 15 min, 37C) facilitated the hydrolysis of soya whey protein by pepsin, trypsin, and chymotrypsin. The highest level of hydrolysis occurred at a treatment pressure of 100 MPa. After hydrolysis, 5 peptides below 14 kDa were reported with trypsin and chymotrypsin, and 11 peptides with pepsin.
COMBINATION OF HIGH-PRESSURE TREATMENT WITH OTHER NON-THERMAL PROCESSING METHODS
Many researchers have combined the use of high pressure with other non-thermal operations in order to explore the possibility of synergy between processes. Such attempts are reviewed in this section.
Crawford, Murano, Olson, and Shenoy (1996) studied the combined effect of high pressure and gamma-irradiation for inactivating Clostridium sporogenes spores in chicken breast. Application of high pressure reduced the radiation dose required to produce chicken meat with an extended shelf life. High-pressure treatment (600 MPa for 20 min at 80C) reduced the irradiation dose required for a one-log reduction of Clostridium sporogenes from 4.2 kGy to 2.0 kGy. Mainville, Montpetit, Durand, and Farnworth (2001) studied the combined effect of irradiation and high pressure on the microflora and microorganisms of kefir. Irradiation of kefir at 5 kGy and high-pressure treatment (400 MPa for 5 or 30 min) inactivated the bacteria and yeast in kefir while leaving the proteins and lipids unchanged.
Exposure of microbial cells and spores to an alternating current (50 Hz) resulted in the release of intracellular materials, causing loss or denaturation of cellular components responsible for the normal functioning of the cell. The lethal damage to the microorganisms was enhanced when the organisms were exposed to an alternating current before and after the pressure treatment. High-pressure treatment at 300 MPa for 10 min for Escherichia coli cells, and at 400 MPa for 30 min for Bacillus subtilis spores, applied after the alternating-current treatment, resulted in reduced surviving fractions of both organisms. The combined treatment was also shown to reduce the tolerance of the microorganisms to other challenges (Shimada and Shimahara, 1985, 1987; Shimada, 1992).
Pretreatment with ultrasonic waves (100 W/cm2 for 25 min at 25C) followed by high pressure (400 MPa for 25 min at 15C) was shown to result in complete inactivation of Rhodotorula rubra. Neither ultrasonic nor high-pressure treatment alone was found to be effective (Knorr, 1995).
Carbon Dioxide and Argon
Heinz and Knorr (1995) reported a 3-log reduction in counts of cultures pretreated with supercritical CO2. The effect of the pretreatment on the germination of Bacillus subtilis endospores was also monitored. The combination of high pressure and mild heat treatment was the most effective in reducing germination (95% reduction), but no spore inactivation was observed.
Park, Lee, and Park (2002) studied the combination of high-pressure carbon dioxide and high hydrostatic pressure as a non-thermal processing technique to enhance the safety and shelf life of carrot juice. The combined treatment of carbon dioxide (4.90 MPa) and high pressure (300 MPa) resulted in complete destruction of aerobes. Increasing the pressure to 600 MPa in the presence of carbon dioxide resulted in reduced activities of polyphenoloxidase (11.3%), lipoxygenase (8.8%), and pectin methylesterase (35.1%). Corwin and Shellhammer (2002) studied the combined effect of high-pressure treatment and CO2 on the inactivation of pectinmethylesterase, polyphenoloxidase, Lactobacillus plantarum, and Escherichia coli. An interaction was found between CO2 and pressure at 25 and 50C for pectinmethylesterase and polyphenoloxidase, respectively. The activity of polyphenoloxidase was decreased by CO2 at all pressure treatments. The interaction between CO2 and pressure was significant for Lactobacillus plantarum, with a significant decrease in survivors due to the addition of CO2 at all pressures studied. No significant effect of CO2 addition on E. coli survivors was seen. Truong, Boff, Min, and Shellhammer (2002) demonstrated that the addition of CO2 (0.18 MPa) during high-pressure processing (600 MPa, 25C) of fresh Valencia orange juice increases the rate of pectinmethylesterase (PME) inactivation. The addition of CO2 reduced the treatment time required to achieve an equivalent reduction in PME activity from 346 s to 111 s, but the overall degree of PME inactivation remained unaltered.
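The quoted times imply roughly a three-fold faster inactivation when CO2 is added, which the sketch below makes explicit under an assumed first-order inactivation model; neither the first-order assumption nor the residual-activity target comes from the study itself.

```python
# Rough comparison of PME inactivation rate constants with and without added CO2,
# assuming first-order inactivation (an assumption made here for illustration).
import math

residual_activity = 0.2    # assumed common end-point for both treatments
t_without_co2 = 346.0      # s, treatment time reported without CO2
t_with_co2 = 111.0         # s, treatment time reported with CO2

k_without = -math.log(residual_activity) / t_without_co2
k_with = -math.log(residual_activity) / t_with_co2

# The ratio is independent of the chosen end-point: it is simply 346/111.
print(f"k without CO2 ~ {k_without:.2e} s^-1")
print(f"k with CO2    ~ {k_with:.2e} s^-1")
print(f"acceleration factor ~ {k_with / k_without:.1f}x")
```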
Fujii, Ohtani, Watanabe, Ohgoshi, Fujii, and Honma (2002) studied the high-pressure inactivation of Bacillus cereus spores in water containing argon. At a pressure of 600 MPa, the addition of argon reportedly accelerated the inactivation of spores at 20C but had no effect on inactivation at 40C.
The complex physicochemical environment of milk exerted a strong protective effect on Escherichia coli against high hydrostatic pressure inactivation, reducing inactivation from 7 logs at 400 MPa to only 3 logs at 700 MPa for 15 min at 20C. A substantial improvement in inactivation efficiency at ambient temperature was achieved by applying consecutive, short pressure treatments interrupted by brief decompressions. The combined effect of high pressure (500 MPa) and natural antimicrobial peptides (lysozyme, 400 µg/ml, and nisin, 400 µg/ml) resulted in increased lethality toward Escherichia coli in milk (Garcia, Masschalck, and Michiels, 1999).
OPPORTUNITIES FOR HIGH-PRESSURE-ASSISTED PROCESSING
The inclusion of high-pressure treatment as a processing step within certain manufacturing flow sheets can lead to novel products as well as new process development opportunities. For instance, high pressure can precede a number of process operations such as blanching, dehydration, rehydration, frying, and solid-liquid extraction. Alternatively, processes such as gelation, freezing, and thawing, can be carried out under high pressure. This section reports on the use of high pressures in the context of selected processing operations.
Eshtiaghi and Knorr (1993) employed high pressure at around ambient temperature to develop a blanching process similar to hot-water or steam blanching, but without thermal degradation; this also minimized problems associated with water disposal. The application of pressure (400 MPa, 15 min, 20C) to potato samples not only caused blanching but also resulted in a four-log-cycle reduction in microbial count while retaining 85% of the ascorbic acid. Complete inactivation of polyphenoloxidase was achieved under the above conditions when 0.5% citric acid solution was used as the blanching medium. The addition of 1% CaCl2 solution to the medium also improved the texture and the density. The leaching of potassium from the high-pressure-treated sample was comparable with that from a 3-min hot-water blanching treatment (Eshtiaghi and Knorr, 1993). Thus, high pressure can be used as a non-thermal blanching method.
Dehydration and Osmotic Dehydration
The application of high hydrostatic pressure affects cell wall structure, leaving the cell more permeable, which leads to significant changes in the tissue architecture (Farr, 1990; Dornenburg and Knorr, 1994; Rastogi, Subramanian, and Raghavarao, 1994; Rastogi and Niranjan, 1998; Rastogi, Raghavarao, and Niranjan, 2005). Eshtiaghi, Stute, and Knorr (1994) reported that the application of pressure (600 MPa, 15 min at 70C) resulted in no significant increase in the drying rate during fluidized-bed drying of green beans and carrot. However, the drying rate increased significantly in the case of potato. This may be due to the relatively limited permeabilization of carrot and bean cells compared to potato. The effects of chemical pre-treatments (NaOH and HCl) on the rate of dehydration of paprika were compared with those of high-pressure and high-intensity electric field pulse pre-treatments (Fig. 2). High pressure (400 MPa for 10 min at 25C) and high-intensity electric field pulses (2.4 kV/cm, pulse width 300 µs, 10 pulses, pulse frequency 1 Hz) were found to give drying rates comparable with those of the chemical pre-treatments, while eliminating the use of chemicals (Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 2 (a) Effects of various pre-treatments such as hot water blanching, high pressure and high intensity electric field pulse treatment on dehydration characteristics of red paprika (b) comparison of drying time (from Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 3 (a) Variation of moisture and (b) solid content (based on initial dry matter content) with time during osmotic dehydration (from Rastogi and Niranjan, 1998).
Generally, osmotic dehydration is a slow process. Application of high pressure causes permeabilization of the cell structure (Dornenburg and Knorr, 1993; Eshtiaghi, Stute, and Knorr, 1994; Farr, 1990; Rastogi, Subramanian, and Raghavarao, 1994). This phenomenon has been exploited by Rastogi and Niranjan (1998) to enhance mass transfer rates during the osmotic dehydration of pineapple (Ananas comosus). High-pressure pre-treatments (100-800 MPa) were found to enhance both water removal and solid gain (Fig. 3). Measured diffusivity values for water were found to be four-fold greater, whilst solute (sugar) diffusivity values were found to be two-fold greater. Compression and decompression during the high-pressure pre-treatment itself caused the removal of a significant amount of water, which was attributed to cell wall rupture (Rastogi and Niranjan, 1998). Differential interference contrast microscopy showed the extent of cell wall break-up with applied pressure (Fig. 4). Sopanangkul, Ledward, and Niranjan (2002) demonstrated that the application of high pressure (100 to 400 MPa) could be used to accelerate mass transfer during ingredient infusion into foods. Application of pressure opened up the tissue structure and facilitated diffusion. However, pressures above 400 MPa also induced starch gelatinization, which hindered diffusion. The values of the diffusion coefficient were dependent on cell permeabilization and starch gelatinization. The maximum diffusion coefficient observed represented an eight-fold increase over the value at ambient pressure.
The synergistic effect of cell permeabilization due to high pressure and osmotic stress as dehydration proceeds was demonstrated more clearly in the case of potato (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). The moisture content was reduced and the solid content increased in samples treated at 400 MPa. The distributions of relative moisture (M/Mo) and solid (S/So) content, as well as the cell permeabilization index (Zp) (shown in Fig. 5), indicate that the rate of change of moisture and solid content was very high at the interface and decreased towards the center (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003).
Most dehydrated foods are rehydrated before consumption. Loss of solids during rehydration is a major problem associated with the use of dehydrated foods. Rastogi, Angersbach, Niranjan, and Knorr (2000c) studied the transient variation of moisture and solid content during rehydration of dried pineapple, which had been subjected to high-pressure treatment prior to a two-stage drying process consisting of osmotic dehydration and finish-drying at 25C (Fig. 6). The diffusion coefficients for water infusion as well as for solute diffusion were found to be significantly lower in high-pressure pre-treated samples. The observed decrease in the water diffusion coefficient was attributed to the permeabilization of cell membranes, which reduces the rehydration capacity (Rastogi and Niranjan, 1998). The solid infusion coefficient was also lower, and so was the release of cellular components, which form a gel network with divalent ions binding to de-esterified pectin (Basak and Ramaswamy, 1998; Eshtiaghi, Stute, and Knorr, 1994; Rastogi, Angersbach, Niranjan, and Knorr, 2000c). Eshtiaghi, Stute, and Knorr (1994) reported that high-pressure treatment in conjunction with subsequent freezing could improve mass transfer during rehydration of dried plant products and enhance product quality.
Figure 4 Microstructures of control and pressure-treated pineapple: (a) control; (b) 300 MPa; (c) 700 MPa (1 cm = 41.83 µm) (from Rastogi and Niranjan, 1998).
Ahromrit, Ledward, and Niranjan (2006) explored the use of high pressures (up to 600 MPa) to accelerate water uptake during soaking of glutinous rice. The results showed that the length and the diameter of the rice grains were positively correlated with soaking time, pressure, and temperature. The water uptake kinetics were shown to follow the well-known Fickian model. The overall rate of water uptake and the equilibrium moisture content were found to increase with pressure and temperature.
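A minimal sketch of how an effective diffusivity can be extracted from such soaking data is shown below, using the first term of Fick's series solution for a sphere. The spherical geometry, the equivalent grain radius, and the moisture-ratio values are assumptions for illustration only, not data from Ahromrit, Ledward, and Niranjan (2006).

```python
# Minimal sketch: effective water diffusivity from soaking data via the
# first term of Fick's series solution for a sphere (long-time approximation):
#   MR(t) = (M_eq - M) / (M_eq - M0) ~ (6/pi^2) * exp(-pi^2 * D_eff * t / r^2)
# Geometry, radius, and data are illustrative assumptions.
import numpy as np

radius = 1.0e-3                                  # m, assumed equivalent grain radius
t = np.array([600.0, 1200.0, 2400.0, 3600.0])    # s, soaking times (assumed)
mr = np.array([0.75, 0.58, 0.35, 0.22])          # moisture ratio (assumed)

# ln(MR) is linear in t with slope -(pi^2 * D_eff / r^2), so fit a straight line.
slope, intercept = np.polyfit(t, np.log(mr), 1)
d_eff = -slope * radius**2 / np.pi**2
print(f"effective diffusivity ~ {d_eff:.2e} m^2/s")
```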
Zhang, Ishida, and Isobe (2004) studied the effect of high-pressure treatment (300-500 MPa for 0-380 min at 20C) on the water uptake of soybeans and the resulting changes in their microstructure. NMR analysis indicated that water mobility in high-pressure-soaked soybeans was more restricted and its distribution much more uniform than in controls. SEM analysis revealed that high pressure changed the microstructures of the seed coat and hilum, which improved water absorption, and disrupted the individual spherical protein body structures. Additionally, DSC and SDS-PAGE analyses revealed that proteins were partially denatured during high-pressure soaking. Ibarz, Gonzalez, and Barbosa-Canovas (2004) developed kinetic models for the water absorption and cooking time of chickpeas with and without prior high-pressure treatment (275-690 MPa). Soaking was carried out at 25C for up to 23 h, and cooking was achieved by immersion in boiling water until the chickpeas became tender. As the soaking time increased, the cooking time decreased. High-pressure treatment for 5 min led to reductions in cooking time equivalent to those achieved by soaking for 60-90 min.
Ramaswamy, Balasubramaniam, and Sastry (2005) studied the effects of high-pressure (33, 400, and 700 MPa for 3 min at 24 and 55C) and irradiation (2 and 5 kGy) pre-treatments on the hydration behavior of navy beans by soaking the treated beans in water at 24 and 55C. Treating beans at moderate pressure (33 MPa) resulted in a high initial moisture uptake (0.59 to 1.02 kg/kg dry mass) and a reduced loss of soluble materials. The final moisture content after three hours of soaking was highest in irradiated beans (5 kGy), followed by high-pressure-treated beans (33 MPa, 3 min at 55C). Within the experimental range of the study, Peleg's model was found to satisfactorily describe the rate of water absorption of navy beans.
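Peleg's model describes sorption with just two constants, M(t) = M0 + t/(k1 + k2*t), where 1/k1 sets the initial uptake rate and M0 + 1/k2 gives the equilibrium moisture. The sketch below fits these constants to a synthetic hydration curve; the initial moisture and data points are assumed for illustration and are not values from Ramaswamy, Balasubramaniam, and Sastry (2005).

```python
# Minimal sketch: fitting Peleg's two-parameter model to a hydration curve.
# M(t) = M0 + t / (k1 + k2*t); all data below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

m0 = 0.12                                  # kg water/kg dry mass at t = 0 (assumed)

def peleg(t, k1, k2):
    return m0 + t / (k1 + k2 * t)

t = np.array([0.5, 1.0, 2.0, 3.0])         # soaking time, h (assumed)
m = np.array([0.55, 0.80, 1.05, 1.15])     # moisture, kg/kg dry mass (assumed)

(k1, k2), _ = curve_fit(peleg, t, m, p0=[1.0, 1.0])
print(f"Peleg rate constant k1 ~ {k1:.2f}, capacity constant k2 ~ {k2:.2f}")
print(f"predicted equilibrium moisture ~ {m0 + 1.0 / k2:.2f} kg/kg dry mass")
```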
A 40% reduction in oil uptake during frying was observed when thermally blanched frozen potatoes were replaced by high-pressure-blanched frozen potatoes. This may be due to a reduction in moisture content caused by compression and decompression (Rastogi and Niranjan, 1998), as well as to the prevalence of different oil mass transfer mechanisms (Knorr, 1999).
Solid-Liquid Extraction
The application of high pressure leads to rearrangement of the tissue architecture, which results in increased extractability even at ambient temperature. The extraction of caffeine from coffee using water could be increased by the application of high pressure as well as by an increase in temperature (Knorr, 1999). The effect of high pressure and temperature on caffeine extraction was compared to extraction at 100C and atmospheric pressure (Fig. 7). The caffeine yield was found to increase with temperature at a given pressure. The combination of very high pressures and lower temperatures could become a viable alternative to current industrial practice.
Figure 5 Distribution of (a, b) relative moisture and (c, d) solid content as well as (e, f) cell disintegration index (Zp).
|
<urn:uuid:759ff0b9-9458-45d0-8deb-368c01089695>
|
CC-MAIN-2013-20
|
http://www.redorbit.com/news/business/815480/opportunities_and_challenges_in_high_pressure_processing_of_foods/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.924161
| 14,546
| 2.5625
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide",
"co2",
"temperature rise"
],
"nature": []
}
|
{
"strong": 2,
"weak": 1,
"total": 3,
"decision": "accepted_strong"
}
|
China has worked actively and seriously to tackle global climate change and build capacity to respond to it. We believe that every country has a stake in dealing with climate change and every country has a responsibility for the safety of our planet. China is at a critical stage of building a moderately prosperous society on all fronts, and a key stage of accelerated industrialization and urbanization. Yet, despite the huge task of developing the economy and improving people’s lives, we have joined global actions to tackle climate change with the utmost resolve and a most active attitude, and have acted in line with the principle of common but differentiated responsibilities established by the United Nations. China voluntarily stepped up efforts to eliminate backward capacity in 2007, and has since closed a large number of heavily polluting small coal-fired power plants, small coal mines and enterprises in the steel, cement, paper-making, chemical and printing and dyeing sectors. Moreover, in 2009, China played a positive role in the success of the Copenhagen conference on climate change and the ultimate conclusion of the Copenhagen Accord. In keeping with the requirements of the Copenhagen Accord, we have provided the Secretariat of the United Nations Framework Convention on Climate Change with information on China’s voluntary actions on emissions reduction and joined the list of countries supporting the Copenhagen Accord.
The targets released by China last year for greenhouse gas emissions control require that by 2020, CO2 emissions per unit of GDP should go down by 40%-45% from the 2005 level, non-fossil energy should make up about 15% of primary energy consumption, and forest coverage should increase by 40 million hectares and forest stock volume by 1.3 billion cubic meters, both from the 2005 level. The measure to lower energy consumption alone will help save 620 million tons of standard coal over the next five years, equivalent to a reduction of 1.5 billion tons of CO2 emissions. This is what China has done to step up the shift in economic development mode and economic restructuring. It contributes positively to the Asian and global effort to tackle climate change.
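The arithmetic behind the coal and CO2 figures can be checked roughly as below; the conversion factor of about 2.46 tonnes of CO2 per tonne of standard coal equivalent is an assumption introduced here, not a number from the speech.

```python
# Rough consistency check: 620 million tonnes of standard coal saved vs.
# the stated 1.5 billion tonnes of CO2 avoided. The emission factor is assumed.
coal_saved_mt = 620        # million tonnes of standard coal equivalent
co2_per_tce = 2.46         # t CO2 per tonne of coal equivalent (assumption)

co2_avoided_mt = coal_saved_mt * co2_per_tce
print(f"~{co2_avoided_mt:,.0f} million t CO2, i.e. ~{co2_avoided_mt / 1000:.1f} billion t")
```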
Ladies and Gentlemen,
Green and sustainable development represents the trend of our times. To achieve green and sustainable development in Asia and beyond and ensure the sustainable development of resources and the environment such as the air, fresh water, ocean, land and forest, which are all vital to human survival, we countries in Asia should strive to balance economic growth, social development and environmental protection. To that end, we wish to work with other Asian countries and make further efforts in the following six areas.
First, shift development mode and strive for green development. To accelerate the shift in economic development mode and economic restructuring provides an important precondition for our efforts to actively respond to climate change, achieve green development and secure the sustainable development of the population, resources and the environment. It is the shared responsibility of governments and enterprises of all countries in Asia and around the world. We should actively promote a conservation culture and raise awareness for environmental protection. We need to make sure that the concept of green development, green consumption and a green lifestyle and the commitment to taking good care of Planet Earth, our common home are embedded in the life of every citizen in society.
Second, value the importance of science and technology as the backing of innovation and development. We Asian countries have a long way to go before we reach the advanced level in high-tech-powered energy consumption reduction and improvement of energy and resource efficiency. Yet, this means we have a huge potential to catch up. It is imperative for us to quicken the pace of low-carbon technology development, promote energy efficient technologies and raise the proportion of new and renewable energies in our energy mix so as to provide a strong scientific and technological backing for green and sustainable development of Asian countries. As for developed countries, they should facilitate technology transfer and share technologies with developing countries on the basis of proper protection of intellectual property rights.
Third, open wider to the outside world and realize harmonious development. In such an open world as ours, development of Asian countries and development of the world are simply inseparable. It is important that we open our markets even wider, firmly oppose and resist protectionism in all forms and uphold a fair, free and open global trade and investment system. At the same time, we should give full play to the role of regional and sub-regional dialogue and cooperation mechanisms in Asia to promote harmonious and sustainable development of Asia and the world.
Fourth, strengthen cooperation and sustain common development. Pragmatic, mutually beneficial and win-win cooperation is a sure choice of all Asian countries if we are to realize sustainable development. No country could stay away from or manage to meet on its own severe challenges like the international financial crisis, climate change and energy and resources security. We should continue to strengthen macro-economic policy coordination and vigorously promote international cooperation in emerging industries, especially in energy conservation, emissions reduction, environmental protection and development of new energy sources to jointly promote sustainable development of the Asian economy and the world economy as a whole.
Fifth, work vigorously to eradicate poverty and gradually achieve balanced development. A major root cause for the loss of balance in the world economy is the seriously uneven development between the North and the South. Today, 900 million people in Asia, or roughly one fourth of the entire Asian population, are living below the 1.25 dollars a day poverty line. We call for greater efforts to improve the international mechanisms designed to promote balanced development, and to scale up assistance from developed countries to developing countries, strengthen South-South cooperation, North-South cooperation and facilitate attainment of the UN Millennium Development Goals. This will ensure that sustainable development brings real benefits to poor regions, poor countries and poor peoples.
Sixth, bring forth more talents to promote comprehensive development. The ultimate goal of green and sustainable development is to improve people’s living environment, better their lives and promote their comprehensive development. Success in this regard depends, to a large extent, on the emergence of talents with an innovative spirit. We need to build institutions, mechanisms and a social environment to help people bring out the best of their talents, and to intensify education and training of professionals of various kinds. This will ensure that as Asia achieves green and sustainable development, our people will enjoy comprehensive development.
Ladies and Gentlemen,
We demonstrated solidarity as we rose up together to the international financial crisis in 2009. Let us carry forward this great spirit, build up consensus, strengthen unity and cooperation and explore a path of green and sustainable development. This benefits Asia. It benefits the world, too.
In conclusion, I wish this annual conference of the Boao Forum for Asia a complete success.
|
<urn:uuid:648ee2b5-f8cd-4273-8ab0-29206d637638>
|
CC-MAIN-2013-20
|
http://news.xinhuanet.com/english2010/china/2010-04/11/c_13245754_2.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.936942
| 1,357
| 2.96875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"co2",
"greenhouse gas"
],
"nature": [
"conservation"
]
}
|
{
"strong": 4,
"weak": 0,
"total": 4,
"decision": "accepted_strong"
}
|
by Gerry Everding
St. Louis MO (SPX) Feb 12, 2013
Nominated early this year for recognition on the UNESCO World Heritage List, which includes such famous cultural sites as the Taj Mahal, Machu Picchu and Stonehenge, the earthen works at Poverty Point, La., have been described as one of the world's greatest feats of construction by an archaic civilization of hunters and gatherers.
Now, new research in the current issue of the journal Geoarchaeology, offers compelling evidence that one of the massive earthen mounds at Poverty Point was constructed in less than 90 days, and perhaps as quickly as 30 days - an incredible accomplishment for what was thought to be a loosely organized society consisting of small, widely scattered bands of foragers.
"What's extraordinary about these findings is that it provides some of the first evidence that early American hunter-gatherers were not as simplistic as we've tended to imagine," says study co-author T.R. Kidder, PhD, professor and chair of anthropology in Arts and Sciences at Washington University in St. Louis.
"Our findings go against what has long been considered the academic consensus on hunter-gather societies - that they lack the political organization necessary to bring together so many people to complete a labor-intensive project in such a short period."
Co-authored by Anthony Ortmann, PhD, assistant professor of geosciences at Murray State University in Kentucky, the study offers a detailed analysis of how the massive mound was constructed some 3,200 years ago along a Mississippi River bayou in northeastern Louisiana.
Based on more than a decade of excavations, core samplings and sophisticated sedimentary analysis, the study's key assertion is that Mound A at Poverty Point had to have been built in a very short period because an exhaustive examination reveals no signs of rainfall or erosion during its construction.
"We're talking about an area of northern Louisiana that now tends to receive a great deal of rainfall," Kidder says. "Even in a very dry year, it would seem very unlikely that this location could go more than 90 days without experiencing some significant level of rainfall. Yet, the soil in these mounds shows no sign of erosion taking place during the construction period. There is no evidence from the region of an epic drought at this time, either."
Part of a much larger complex of earthen works at Poverty Point, Mound A is believed to be the final and crowning addition to the sprawling 700-acre site, which includes five smaller mounds and a series of six concentric C-shaped embankments that rise in parallel formation surrounding a small flat plaza along the river. At the time of construction, Poverty Point was the largest earthworks in North America.
Built on the western edge of the complex, Mound A covers about 538,000 square feet [roughly 50,000 square meters] at its base and rises 72 feet above the river. Its construction required an estimated 238,500 cubic meters - about eight million bushel baskets - of soil to be brought in from various locations near the site. Kidder figures it would take a modern, 10-wheel dump truck about 31,217 loads to move that much dirt today.
"The Poverty Point mounds were built by people who had no access to domesticated draft animals, no wheelbarrows, no sophisticated tools for moving earth," Kidder explains. "It's likely that these mounds were built using a simple 'bucket brigade' system, with thousands of people passing soil along from one to another using some form of crude container, such as a woven basket, a hide sack or a wooden platter."
To complete such a task within 90 days, the study estimates it would require the full attention of some 3,000 laborers. Assuming that each worker may have been accompanied by at least two other family members, say a wife and a child, the community gathered for the build must have included as many as 9,000 people, the study suggests.
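A back-of-envelope check shows that the volume, basket, truck-load, and labor figures quoted above hang together; the bushel volume and truck capacity used below are assumptions for illustration, not numbers from the study.

```python
# Back-of-envelope check of the Mound A construction figures quoted above.
# Bushel volume and truck capacity are assumed values, not from the study.
soil_volume_m3 = 238_500        # estimated fill volume reported in the article
bushel_m3 = 0.0352              # approx. volume of one US bushel (assumption)
truck_load_m3 = 7.6             # assumed capacity of a 10-wheel dump truck
workers, days = 3000, 90        # labor estimate from the study

print(f"bushel baskets : {soil_volume_m3 / bushel_m3:,.0f}")      # several million
print(f"truck loads    : {soil_volume_m3 / truck_load_m3:,.0f}")  # ~31,000 loads
print(f"soil per worker per day ~ {soil_volume_m3 / workers / days:.2f} m^3")
```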
"Given that a band of 25-30 people is considered quite large for most hunter-gatherer communities, it's truly amazing that this ancient society could bring together a group of nearly 10,000 people, find some way to feed them and get this mound built in a matter of months," Kidder says.
Soil testing indicates that the mound is located on top of land that was once low-lying swamp or marsh land - evidence of ancient tree roots and swamp life still exists in undisturbed soils at the base of the mound. Tests confirm that the site was first cleared for construction by burning and quickly covered with a layer of fine silt soil. A mix of other heavier soils then were brought in and dumped in small adjacent piles, gradually building the mound layer upon layer.
As Kidder notes, previous theories about the construction of most of the world's ancient earthen mounds have suggested that they were laid down slowly over a period of hundreds of years, involving small contributions of material from many different people spanning generations of a society. While this may be the case for other earthen structures at Poverty Point, the evidence from Mound A offers a sharp departure from this accretional theory.
Kidder's home base in St. Louis is just across the Mississippi River from one of America's best-known ancient earthen structures, Monks Mound at Cahokia, Ill. He notes that Monks Mound was built many centuries later than the mounds at Poverty Point, by a civilization that was much more reliant on agriculture, a far cry from the hunter-gatherer group that built Poverty Point. Even so, Mound A at Poverty Point is much larger than almost any other mound found in North America; only Monks Mound at Cahokia is larger.
"We've come to realize that the social fabric of these socieites must have been much stronger and more complex that we might previously have given them credit. These results contradict the popular notion that pre-agricultural people were socially, politically, and economically simple and unable to organize themselves into large groups that could build elaborate architecture or engage in so-called complex social behavior," Kidder says.
"The prevailing model of hunter-gatherers living a life 'nasty, brutish and short' is contradicted and our work indicates these people were practicing a sophisticated ritual/religious life that involved building these monumental mounds."
Washington University in St. Louis
|
<urn:uuid:a5058d3c-2691-4aef-862f-88a3935a760d>
|
CC-MAIN-2013-20
|
http://www.terradaily.com/reports/Archaic_Native_Americans_built_massive_Louisiana_mound_in_less_than_90_days_999.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.966482
| 1,459
| 2.9375
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
By Pauline Hammerbeck
It's been a doozy of a wildfire season (Colorado's most destructive ever), leaving homeowners wondering what safety measures they can put in place to stave off flames in the event of a fire in their own neighborhood.
Landscaping, it turns out, can be an important measure in wildfire protection.
But fire-wise landscaping isn't just something for those dwelling on remote Western hilltops. Brush, grass and forest fires occur nearly everywhere in the United States, says the National Fire Protection Association. Here's how your landscaping can help keep you safe.
Create 'defensible' space
Most homes that burn during a wildfire are ignited by embers landing on the roof, gutters, and on decks and porches. So your first point of action should be creating a defensible space, a buffer zone around your home, to reduce sources of fuel.
Start by keeping the first 3 to 5 feet around your home free of all flammable materials and vegetation: plants, shrubs, trees and grasses, as well as bark and other organic mulches should all be eliminated (a neat perimeter of rock mulch or a rock garden can be a beautiful thing). Maintenance is also important:
- Clear leaves, pine needles and other debris from roofs, gutters and eaves
- Cut back tree branches that overhang the roof
- Clear debris from under decks, porches and other structures
Moving farther from the house, you might consider adding hardscaping - driveways, patios, walkways, gravel paths, etc. These features add visual interest, but they also maintain a break between vegetation and your home in the event of a fire. Some additional tasks to consider in the first 100 feet surrounding your home:
- Thin out trees and shrubs (particularly evergreens) within 30 feet
- Trim low tree branches so they're a minimum of 6 feet off the ground
- Mow lawn regularly and dispose of clippings and other debris promptly
- Move woodpiles to a space at least 30 feet from your home
Use fire-resistant plants
Populating your landscape with plants that are resistant to fire can also be an important tactic. Look for low-growing plants that have thick leaves (a sign that they hold water), extensive root systems and the ability to withstand drought.
This isn't as limiting as it sounds. Commonly used hostas, butterfly bushes and roses are all good choices. And there are plenty of fire-resistant plant lists to give you ideas on what to pick.
Where and how you plant can also have a dramatic effect on fire behavior. The plants nearest your home should be smaller and more widely spaced than those farther away.
Be sure to use a variety of plant types, which reduces disease and keeps the landscape healthy and green. Plant in small clusters - create a garden island, for instance, by surrounding a group of plantings with a rock perimeter - and use rock mulch to conserve moisture.
Maintain accessible water sources
Wildfires present a special challenge to local fire departments, so it's in your interest to be able to access or maintain an emergency water supply - particularly if you're in a remote location.
At a minimum, keep 100 feet of garden hose attached to a spigot (if your water comes from a well, consider an emergency generator to operate the pump during a power failure). But better protection can come from the installation of a small pond, cistern or, if budget allows, a swimming pool.
Good planning and a bit of elbow grease have a big hand in wildfire safety. In a year with record heat and drought, looking over your landscape with a firefighter's eye can offer significant peace of mind.
Guest blogger Pauline Hammerbeck is an editor for the Allstate Blog, which helps people prepare for the unpredictability of life.
Note: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinion or position of Zillow.
|
<urn:uuid:dbe77f52-384c-4c40-a487-84aae16a1d76>
|
CC-MAIN-2013-20
|
http://www.gloucestertimes.com/real_estate_news/x2068758245/How-to-Landscape-Your-Home-for-Fire-Safety
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939375
| 854
| 2.578125
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Upland Bird Regional Forecast
When considering upland game population levels during the fall hunting season, two important factors impact population change. First is the number of adult birds that survived the previous fall and winter and are considered viable breeders in the spring. The second is the reproductive success of this breeding population. Reproductive success consists of nest success (the number of nests that successfully hatched) and chick survival (the number of chicks recruited into the fall population). For pheasant and quail, annual population turnover is relatively high; therefore, the fall population is more dependent on reproductive success than breeding population levels. For grouse (prairie chickens), annual population turnover is not as rapid although reproductive success is still the major population regulator and important for good hunting. In the following forecast, breeding population and reproductive success of pheasants, quail, and prairie chickens will be discussed. Breeding population data were gathered during spring breeding surveys for pheasants (crow counts), quail (whistle counts), and prairie chickens (lek counts). Data for reproductive success were collected during late summer roadside surveys for pheasants and quail. Reproductive success of prairie chickens cannot be easily assessed using the same methods because they generally do not associate with roads like the other game birds.
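The bookkeeping described here can be made concrete with a simple sketch: fall birds are the surviving adults plus the chicks recruited through nest success and chick survival. All rates below are assumed, illustrative values, not KDWPT survey results.

```python
# Illustrative fall-population bookkeeping for an upland game bird.
# Every rate below is an assumed placeholder, not a survey estimate.
breeding_hens = 100.0             # spring hens in some unit area (assumed)
nest_success = 0.45               # fraction of nests that hatch (assumed)
chicks_per_successful_nest = 8.0  # assumed
chick_survival = 0.50             # fraction of chicks recruited to fall (assumed)
adult_summer_survival = 0.80      # assumed

juveniles = breeding_hens * nest_success * chicks_per_successful_nest * chick_survival
adults = breeding_hens * adult_summer_survival
print(f"fall birds ~ {juveniles + adults:.0f} ({juveniles:.0f} juveniles + {adults:.0f} adults)")
```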
Kansas experienced extreme drought this past year. Winter weather was mild, but winter precipitation, which supports the spring vegetation needed for good reproductive success, was insufficient across most of the state. Pheasant breeding populations showed significant reductions in 2012, especially in the primary pheasant range in western Kansas. Spring came early and hot, with fair moisture until early May, when the precipitation stopped and Kansas experienced record heat and drought through the rest of the reproductive season. Early nesting conditions were generally good for prairie chickens and pheasants. However, the primary nesting habitat for pheasants in western Kansas is winter wheat, and in 2012 Kansas had one of the earliest wheat harvests on record. Wheat harvest can destroy nests and very young broods, so the early harvest likely lowered pheasant nest and early brood success. The intense heat and lack of rain in June and July reduced brooding cover and insect populations, lowering chick survival for all upland game birds.
Because of drought, all counties in Kansas were opened to Conservation Reserve Program (CRP) emergency haying or grazing. CRP emergency haying requires that hayed fields retain at least 50 percent of the field in standing grass cover. CRP emergency grazing requires that 25 percent of the field (or contiguous fields) be left ungrazed, or that the entire field be grazed at 75 percent of normal stocking rates. Many CRP fields, including Walk-In Hunting Areas (WIHA), may be affected across the state. WIHA property is privately owned land open to the public for hunting access. Kansas has more than one million acres of WIHA. Often, older stands of CRP grass are in need of disturbance, and haying and grazing can improve habitat for the upcoming breeding season and may ultimately be beneficial if weather is favorable.
Due to continued drought, Kansas will likely experience a below-average upland game season this fall. For those willing to hunt hard, there will still be pockets of decent bird numbers, especially in the northern Flint Hills and northcentral and northwestern parts of the state. Kansas has approximately 1.5 million acres open to public hunting (wildlife areas and WIHA combined). The regular opening date for the pheasant and quail seasons will be Nov. 10 for the entire state. The previous weekend will be designated for the special youth pheasant and quail season. Youth participating in the special season must be 16 years old or younger and accompanied by a non-hunting adult who is 18 or older. All public wildlife areas and WIHA tracts will be open for public access during the special youth season. Please consider taking a young person hunting this fall, so they might have the opportunity to develop a passion for the outdoors that we all enjoy.
PHEASANT – Drought in 2011 and 2012 has taken its toll on pheasant populations in Kansas. Pheasant breeding populations dropped by nearly 50 percent or more across pheasant range from 2011 to 2012 resulting in fewer adult hens in the population to start the 2012 nesting season. The lack of precipitation has resulted in less cover and insects needed for good pheasant reproduction. Additionally, winter wheat serves as a major nesting habitat for pheasants in western Kansas, and a record early wheat harvest this summer likely destroyed many nests and young broods. Then the hot, dry weather set in from May to August, the primary brood-rearing period for pheasants. Pheasant chicks need good grass and weed cover and robust insect populations to survive. Insufficient precipitation and lack of habitat and insects throughout the state’s primary pheasant range resulted in limited production. This will reduce hunting prospects compared to recent years. However, some good opportunities still exist to harvest roosters in the sunflower state, especially for those willing to work for their birds. Though the drought has taken its toll, Kansas still contains a pheasant population that will produce a harvest in the top three or four major pheasant states this year.
The best areas this year will likely be pockets of northwest and northcentral Kansas. Populations in southwest Kansas were hit hardest by the 2011-2012 drought (72 percent decline in breeding population), and a very limited amount of production occurred this season due to continued drought and limited breeding populations.
QUAIL – The bobwhite breeding population in 2012 was generally stable or improved compared to 2011. Areas in the northern Flint Hills and parts of northeast Kansas showed much improved productivity this year. Much of eastern Kansas has seen consistent declines in quail populations in recent decades. After many years of depressed populations, this year’s rebound in quail reproduction in eastern Kansas is welcomed, but overall populations are still below historic averages. The best quail hunting will be found throughout the northern Flint Hills and parts of central Kansas. Prolonged drought undoubtedly impacted production in central and western Kansas.
PRAIRIE CHICKEN – Kansas is home to greater and lesser prairie chickens. Both species require a landscape of predominately native grass. Lesser prairie chickens are found in westcentral and southwestern Kansas in native prairie and nearby stands of native grass within the conservation reserve program (CRP). Greater prairie chickens are found primarily in the tallgrass and mixed-grass prairies in the eastern one-third and northern one-half of the state.
The spring prairie chicken lek survey indicated that most populations remained stable or declined from last year. Declines were likely due to the extreme drought throughout 2011. Areas of northcentral and northwest Kansas fared the best, while areas in southcentral and southwest Kansas, where drought was most severe, experienced the sharpest declines. Many areas in the Flint Hills were not burned this spring due to drought. This left far more residual grass cover and much improved nesting conditions compared to recent years. There have been some reports of prairie chicken broods in these areas, and hunting will likely be somewhat improved compared to recent years.
Because of recent increases in prairie chicken (both species) populations in northwest Kansas, regulations have been revised this year. The early prairie chicken season (Sept. 15-Oct. 15) and two-bird bag limit has been extended into northwest Kansas. The northwest unit boundary has also been revised to include areas north of U.S. Highway 96 and west of U.S. Highway 281. Additionally, all prairie chicken hunters are now required to purchase a $2.50 prairie chicken permit. This permit will allow KDWPT to better track hunters and harvest, which will improve management activities. Both species of prairie chicken are of conservation concern and the lesser prairie chicken is a candidate species for federal listing under the Endangered Species Act.
This region has 11,809 acres of public land and 339,729 acres of WIHA open to hunters this fall.
Pheasant – Spring breeding populations declined almost 50 percent from 2011 to 2012, reducing fall population potential. Early nesting conditions were decent due to good winter wheat growth, but early wheat harvest and severe heat and drought through the summer reduced populations. While this resulted in a significant drop in pheasant numbers, the area will still have the highest densities of pheasants this fall compared to other areas in the state. Counties such as Graham, Rawlins, Decatur, and Sherman showed the highest relative densities of pheasants during summer brood surveys. Much of the cover will be reduced compared to previous years due to drought and the resulting emergency haying and grazing of CRP fields. Good hunting opportunities will also be reduced compared to recent years, and harvest will likely be below average.
Quail – Populations in this region have been increasing in recent years although the breeding population had a slight decline. This area is at the extreme northwestern edge of bobwhite range in Kansas, and densities are relatively low compared to central Kansas. Some counties — such as Graham, Rawlins, and Decatur — will provide hunting opportunities for quail.
Prairie Chicken – Prairie chicken populations have expanded in both numbers and range within the region over the past 20 years. The better hunting opportunities will be found in the central and southeastern portions of the region in native prairies and nearby CRP grasslands. Spring lek counts in that portion of the region were slightly depressed from last year and nesting conditions were only fair this year. Extreme drought likely impaired chick survival.
This region has 75,576 acres of public land and 311,182 acres of WIHA open to hunters this fall.
Pheasant – The Smoky Hills breeding population dropped about 40 percent from 2011 to 2012, reducing overall fall population potential. While nesting conditions were fair due to good winter wheat growth, the drought and early wheat harvest impacted the number of young recruited into the fall population. Certain areas had decent brood production, including portions of Mitchell, Rush, Rice, and Cloud counties. Across the region, hunting opportunities will likely be below average and definitely reduced from recent years. CRP was opened to emergency haying and grazing, reducing available cover.
Quail – Breeding populations increased nearly 60 percent from 2011 to 2012, increasing fall population potential. However, drought conditions were severe, likely impairing nesting and brood success. There are reports of fair quail numbers in certain areas throughout the region. Quail populations in northcentral Kansas are naturally spotty due to habitat characteristics. Some areas, such as Cloud County, showed good potential while other areas in the more western edges of the region did not fare as well.
Prairie Chicken – Greater prairie chickens occur throughout the Smoky Hills in large areas of native rangeland and some CRP. This region includes some of the highest densities and greatest hunting opportunities in the state for greater prairie chickens. Spring counts indicated that numbers were stable or slightly reduced from last year. Much of the rangeland cover is significantly reduced due to drought, which likely impaired production, resulting in reduced fall hunting opportunities.
This region has 60,559 acres of public land and 54,170 acres of WIHA open to hunters this fall.
Pheasant – Spring crow counts this year showed a significant increase in breeding populations of pheasants. While this increase is welcome, this region was nearing all-time lows in 2011. Pheasant densities across the region are still low, especially compared to other areas in western Kansas. Good hunting opportunities will exist in only a few pockets of good habitat.
Quail – Breeding populations stayed relatively the same as last year, and some quail were detected during the summer brood survey. The long-term trend for this region has been declining, largely due to unfavorable weather and degrading habitat, but this year saw an increase in populations. Hunting opportunities for quail will be improved this fall compared to recent years in this region. The best areas will likely be in Marshall and Jefferson counties.
Prairie Chickens – Very little prairie chicken range occurs in this region, and opportunities are limited. The best areas are in the western edges of the region, in large areas of native rangeland.
This region has 80,759 acres of public land and 28,047 acres of WIHA open to hunters this fall.
Pheasant – This region is outside the primary pheasant range and has very limited hunting. A few birds can be found in the northwestern portion of the region.
Quail – Breeding populations were relatively stable from 2011 to 2012 for this region, although long-term trends have been declining. In the last couple of years, the quail populations throughout much of the region have been on the increase. Specific counties that showed relatively higher numbers are Coffey, Osage, and Wilson. However, populations remain far below historic levels across the bulk of the region due to extreme habitat degradation.
Prairie Chicken – Greater prairie chickens occur in the central and northwest parts of this region in large areas of native rangeland. Breeding population densities were up nearly 40 percent from last year, and opportunities may increase accordingly. However, populations have been in consistent decline over the long term. Infrequent burning has resulted in woody encroachment of native grasslands in the area, gradually reducing the amount of suitable habitat.
This region has 128,371 acres of public land and 63,069 acres of WIHA open to hunters this fall.
Pheasant – This region is on the eastern edge of pheasant range in Kansas and well outside the primary range. Pheasant densities have always been relatively low throughout the Flint Hills. Spring breeding populations were down nearly 50 percent, and reproduction was limited this summer. The best pheasant hunting will be in the northwestern edge of this region in Marion and Dickinson counties.
Quail – This region contains some of the highest densities of bobwhite in Kansas. The breeding population in this region increased 25 percent compared to 2011, and the long-term trend (since 1998) has been stable due to steadily increasing populations over the last four or five years. High reproductive success was reported in the northern half of this region, and some of the best opportunities for quail hunting will be found in the northern Flint Hills this year. In the south, Cowley County showed good numbers of quail this summer.
Prairie Chickens – The Flint Hills is the largest intact tallgrass prairie left in North America. It has served as a core habitat for greater prairie chickens for many years. Since the early 1980s, widespread annual range burning has consistently reduced residual nesting cover and nest success in the area, and prairie chicken numbers have been declining as a result. Because of the drought this spring, many areas that are normally burned annually were left unburned this year. This left more residual grass cover for nesting and brood rearing. There are some good reports of prairie chicken broods, and hunting opportunities will likely increase throughout the region this year.
This region has 19,534 acres of public land and 73,341 acres of WIHA open to hunters this fall.
Pheasant – The breeding population declined about 40 percent from 2011 to 2012. Prolonged drought for two years now and very poor vegetation conditions resulted in poor reproductive success this year. All summer indices showed a depressed pheasant population in this region, especially compared to other regions. Some of the relatively better counties in this area will be Reno, Pawnee, and Pratt, although these counties have not been immune to recent declines. There will likely be few good hunting opportunities this fall.
Quail – The breeding population dropped over 30 percent this year from 2011 although long term trends (since 1998) have been stable in this region. This region generally has some of the highest quail densities in Kansas, but prolonged drought and reduced vegetation have caused significant declines in recent years. Counties such as Reno, Pratt, and Stafford will likely have the best opportunities in the region. While populations may be down compared to recent years, this region will continue to provide fair hunting opportunities for quail.
Prairie Chicken – This region is almost entirely occupied by lesser prairie chickens. The breeding population declined nearly 50 percent from 2011 to 2012. Reproductive conditions were not good for the region due to extreme drought and heat for the last two years, and production was limited. The best hunting opportunities will likely be in the sand prairies south of the Arkansas River.
This region has 2,904 acres of public land and 186,943 acres of WIHA open to hunters this fall.
Pheasant – The breeding population plummeted more than 70 percent in this region from 2011 to 2012. Last year was one of the worst on record for pheasant reproduction. However, last fall there were some carry-over roosters (second-year birds) from a record-high season in 2010. Those carry-over birds are mostly gone now, which will hurt hunting opportunities this fall. Although reproduction was slightly improved from 2011, chick recruitment was still fair to below average this summer due to continued extreme drought conditions. Moreover, there were not enough adult hens in the population yet to make a significant rebound. Generally, hunting opportunity will remain well below average in this region. Haskell and Seward counties showed some improved reproductive success, especially compared to other counties in the region.
Quail – The breeding population in this region tends to be highly variable depending on available moisture and resulting vegetation. The region experienced an increase in breeding populations from 2011 to 2012 although 2011 was a record low for the region. While drought likely held back production, the weather was better than last year, and some reproduction occurred. Indices are still well below average for the region. There will be some quail hunting opportunities in the region although good areas will be sparse.
Prairie Chicken – While breeding populations in the eastern parts of this region were generally stable or increasing, areas of extreme western and southwest portions (Cimarron National Grasslands) saw nearly 30-percent declines last year and 65 percent declines this year. Drought remained extreme in this region, and reproductive success was likely very low. Hunting opportunities in this region will be extremely limited this fall.
|
<urn:uuid:a611d07f-9067-4341-92f3-f62b82e34e98>
|
CC-MAIN-2013-20
|
http://www.kdwpt.state.ks.us/index.php/news/Hunting/Upland-Birds/Upland-Bird-Regional-Forecast
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.956535
| 3,769
| 3.484375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"drought"
],
"nature": [
"conservation",
"endangered species",
"habitat"
]
}
|
{
"strong": 4,
"weak": 0,
"total": 4,
"decision": "accepted_strong"
}
|
Oil & Natural Gas Projects
Transmission, Distribution, & Refining
Multispectral and Hyperspectral Remote Sensing Techniques for Natural Gas Transmission Infrastructure Systems
The goal is to help maintain the nation's natural gas transmission infrastructure through the timely and effective detection of natural gas leaks via evaluation of geobotanical stress signatures.
The remote sensing techniques being developed employ advanced spectrometer systems that produce visible and near-infrared reflected light images with spatial resolution of 1 to 3 meters in 128 wavelength bands. This allows for the discrimination of individual species of plants as well as geological and man-made objects, and permits the detection of biological impacts of methane leaks or seepages in large complicated areas. The techniques employed do not require before-and-after imagery because they use the spatial patterns of plant species and health variations present in a single image to distinguish leaks. Also, these techniques should allow discrimination between the effects of small leaks and the damage caused by human incursion or natural factors such as storm runoff, landslides and earthquakes. Because plants in an area can accumulate doses of leaked materials, species spatial patterns can record time-integrated effects of leaked methane. This can be important in finding leaks that would otherwise be hard to detect by direct observation of methane concentrations in the air.
This project is developing remote sensing methods of detecting, discriminating, and mapping the effects of natural gas leaks from underground pipelines. The current focus is on the effects that the increased methane soil concentrations, created by the leaks, will have on plants. These effects will be associated with extreme soil CH4 concentrations, plant sickness, and even death. Similar circumstances have been observed and studied in the effects of excessive CO2 soil concentrations at Mammoth Mountain near Mammoth Lakes, California, USA. At the Mammoth Mountain site, the large CO2 soil concentrations are due to the volcanic rumblings of the magma still active below Mammoth Mountain. At more subtle levels, this research has been able to map, using hyperspectral airborne imagery, tree stress over all of Mammoth Mountain. These plant stress maps match, and greatly extend into surrounding regions, the on-ground CO2 emission mapping done by the USGS in Menlo Park, California.
In addition, vegetation health mapping along with altered mineralization mapping at Mammoth Mountain does reveal subtle hidden faults. These hidden faults are pathways for potential CO2 leaks, at least near the surface, over the entire region. The methods being developed use airborne hyperspectral and multi-spectral high-resolution imagery and very high resolution (0.6 meter) satellite imagery. The team has identified and worked with commercial providers of both airborne hyperspectral imagery acquisitions and high resolution satellite imagery acquisitions. Both offer competent image data post processing, so that eventually, the ongoing surveillance of pipeline corridors can be contracted for commercially. Current work under this project is focused on detecting and quantifying natural gas pipeline leaks using hyperspectral imagery from airborne or satellite based platforms through evaluation of plant stress.
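The project's own processing relied on the commercial ENVI package, but the single-image idea described above can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical example and is not the project's actual method: the function name, band indices, and array layout are assumptions. It computes a simple NDVI greenness index and then flags pixels whose greenness is far below that of other pixels of the same plant species, mimicking the species-relative comparison that removes the need for before-and-after imagery.

```python
import numpy as np

def stress_anomalies(reflectance, species_map, red_band, nir_band, z_thresh=-2.0):
    """Flag unusually stressed vegetation pixels in a single hyperspectral image.

    reflectance : (rows, cols, bands) array of surface reflectance (0-1)
    species_map : (rows, cols) integer array of plant-species classes
                  (0 = non-vegetation / unclassified)
    red_band, nir_band : band indices assumed to fall near 0.67 um and 0.80 um
    z_thresh    : NDVI z-score below which a pixel is flagged as anomalous
    """
    red = reflectance[..., red_band].astype(float)
    nir = reflectance[..., nir_band].astype(float)
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # simple greenness proxy

    anomalies = np.zeros(species_map.shape, dtype=bool)
    for cls in np.unique(species_map):
        if cls == 0:
            continue                      # skip non-vegetation pixels
        mask = species_map == cls
        mu, sigma = ndvi[mask].mean(), ndvi[mask].std()
        if sigma == 0:
            continue
        z = (ndvi[mask] - mu) / sigma     # compare each pixel with its own species
        anomalies[mask] = z < z_thresh    # far below peers of same species -> possible stress
    return ndvi, anomalies
```

Comparing each pixel only with healthy members of the same species in the same scene is what supplies the baseline that repeat imagery would otherwise provide.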
Lawrence Livermore National Laboratory (LLNL) – project management and research products
NASA – Ames – Development of UAV platform used to carry hyperspectral payload
HyVista Corporation – Development and operation of the HyMap hyperspectral sensor
Livermore, CA 94511
The use of geobotanical plant stress signatures from hyperspectral imagery potentially offers a unique means of detecting and quantifying natural gas leaks from the U.S. pipeline infrastructure. The method holds the potential to cover large expanses of pipeline with minimal manual effort, reducing the likelihood that a leak goes undetected. By increasing the effectiveness and efficiency of leak detection, the amount of gas leaked from a site can be reduced, resulting in decreased environmental impact from fugitive emissions, increased safety and reliability of gas delivery, and an increase in overall available gas, as less product is lost from the lines.
The method chosen for testing these techniques was to image the area surrounding known gas pipeline leaks. After receiving notice and location information for a newly discovered leak from research collaborator Pacific Gas and Electric (PG&E), researchers determined the area above the buried pipeline to be scanned, including some surrounding areas thought to be outside the influence of any methane that might percolate to within root depth of the surface. Flight lines were designed for the airborne acquisition program and researchers used a geographic positioning system (GPS) and digital cameras to visually record the soils, plants, minerals, waters, and manmade objects in the area while the airborne imagery was acquired. After the airborne imagery set for all flight lines was received (including raw data, data corrected to reflectance including atmospheric absorptions, and georectification control files), the data was analyzed using commercial computer software (ENVI) by a team of researchers at University of California, Santa Cruz (UCSC), Lawrence Livermore National Laboratory (LLNL), and one of the acquisition contractors.
- Created an advanced Geographic Information System (GIS) that will be able to provide dynamic integration of airborne imagery, satellite imagery, and other GIS information to monitor pipelines for geobotanical leak signatures.
- Used the software to integrate hyperspectral imagery, high resolution satellite imagery, and digital elevation models of the area around a known gas leak to determine if evidence of the leak could be resolved.
- Helped develop hyperspectral imagery payload for use on an unmanned aerial vehicle developed by NASA-Ames.
- Participated in a DOE-NETL sponsored natural gas pipeline leak detection demonstration in Casper, Wyoming on September 13-17, 2004, using both the UAV hyperspectral payload (~1,000 ft) and the HyVista hyperspectral platform (~5,000 ft) to survey for plant stress.
Researchers used several different routines available within the ENVI program suite to produce “maps” of plant species types, plant health within species types, soil types, soil conditions, water bodies, water contents such as algae or sediments, mineralogy of exposed formations, and manmade objects. These maps were then studied for relative plant health patterns, altered mineral distributions, and other categories. The researchers then returned to the field to verify and further understand the mappings, fine-tune the results, and produce more accurate maps. Since the maps are georectified and the pixel size is 3 meters, individual objects can all be located using the maps and a handheld GPS.
These detailed maps show areas of existing anomalous conditions such as plant kills and linear species modifications caused by subtle hidden faults, modifications of the terrain due to pipeline work or encroachment. They are also the “baseline” that can be used to chart any future changes by re-imaging the area routinely to monitor and document any effects caused by significant methane leakage.
The sensors used for image acquisition are hyperspectral scanners, one of which provides 126 bands across the reflective solar wavelength region of 0.45 – 2.5 µm with contiguous spectral coverage (except in the atmospheric water vapor bands) and bandwidths between 15 – 20 nm. This sensor operates on a 3-axis gyro-stabilized platform to minimize image distortion due to aircraft motion and provides a signal-to-noise ratio >500:1. Geo-location and image geo-coding are achieved with an on-board Differential GPS (DGPS) and an integrated IMU (inertial measurement unit).
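For orientation, the nominal band centers of such a scanner can be estimated from the stated spectral range and band count. The sketch below is illustrative arithmetic only; it assumes evenly spaced bands, which a real instrument such as HyMap only approximates, and it ignores the gaps at the atmospheric water-vapor bands.

```python
# Approximate band centers for a 126-band scanner spanning 0.45-2.5 micrometers,
# assuming evenly spaced bands (real sensors deviate from this, especially
# around the atmospheric water-vapor gaps).
N_BANDS = 126
LAMBDA_MIN_UM, LAMBDA_MAX_UM = 0.45, 2.5

step_um = (LAMBDA_MAX_UM - LAMBDA_MIN_UM) / (N_BANDS - 1)   # ~0.0164 um, i.e. ~16 nm
centers_um = [LAMBDA_MIN_UM + i * step_um for i in range(N_BANDS)]

print(f"nominal spacing: {step_um * 1000:.1f} nm")           # consistent with the quoted 15-20 nm bandwidths
print(f"first/last band centers: {centers_um[0]:.3f} um, {centers_um[-1]:.3f} um")
```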
During a DOE-NETL sponsored natural gas leak detection demonstration at the National Petroleum Reserve 3 (NPR3) site of the Rocky Mountain Oilfield Testing Center (RMOTC) outside of Casper, Wyoming, the project utilized hyperspectral imaging of vegetation to sense plant stress related to the presence of natural gas on a simulated pipeline using actual natural gas releases. The spectral signature of sunlight reflected from vegetation was used to determine vegetation health. Two different platforms were used for imaging the virtual pipeline path: a Twin Otter aircraft flying at an altitude of about 5,000 feet above ground level that imaged the entire site in strips, and an unmanned aerial vehicle (UAV) flying at an altitude of approximately 1,000 feet above ground level that imaged an area surrounding the virtual pipeline.
The manned hyperspectral imaging took place on two days: Wednesday, September 9, and Wednesday, September 15. The underground leaks were started on August 30. This was done to allow time for the methane from the leaks to saturate the soils and produce plant stress by excluding oxygen from the plant root systems. On both days, the entire NPR3-RMOTC site was successfully imaged.
At that time of year, the vegetation at NPR3-RMOTC was largely dormant. The exception was in the gullies where there was some moisture. Therefore, the survey looked for unusually stressed plant “patches” in the gullies as possible leak points. Several spots, several pixels in diameter, were found in the hyperspectral imagery with the spectral signature typical of sick vegetation, located in the gullies or ravines along the virtual pipeline route. Due to the limited vegetation along the test route, detection of natural gas leaks through imaging of plant stress met with only limited success. The technique did demonstrate an ability to show plant stress in areas near leak sites but was less successful in determining general leak severity from those results. In areas with denser vegetation coverage and less dormant plant life, the method still shows promise.
Airborne hyperspectral imagery unit - close-up
Airborne hyperspectral imagery unit - on plane
Overall results from the DOE-NETL sponsored natural gas leak detection demonstration can be found in the demonstration final report [PDF-7370KB] .
Current Status and Remaining Tasks:
All work under this project has been completed.
Project Start: August 13, 2001
Project End: December 31, 2005
DOE Contribution: $966,900
Performer Contribution: $0
NETL – Richard Baker ([email protected] or 304-285-4714)
LLNL – Dr. William L. Pickles ([email protected] or 925-422-7812)
DOE Leak Detection Technology Demonstration Final Report [PDF-7370KB]
DOE Fossil Energy Techline: National Labs to Strengthen Natural Gas Pipelines' Integrity, Reliability
Status Assessment [PDF-26KB]
|
<urn:uuid:14f91c40-6ff9-4e80-8412-73a8f1b2b57e>
|
CC-MAIN-2013-20
|
http://www.netl.doe.gov/technologies/oil-gas/NaturalGas/Projects_n/TDS/TD/T%26D_A_FEW0104-0085Multispectral.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.92439
| 2,141
| 2.828125
| 3
|
[
"climate"
] |
{
"climate": [
"co2",
"methane"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
On September 19, the Cary Institute hosted a one-day conference on the impacts of tropical storms Irene and Lee on the Hudson River. Organized by the Hudson River Environmental Society, with leadership from Cary's Stuart Findlay, the forum examined how the river and estuary responded to the storms, which dropped an estimated 12-18 inches of rainfall throughout the Hudson Valley and Catskill regions. Topics included dredging, sediment transport, water quality, impacts to fish, and future management practices.
In late October, Gary Lovett will present his assessment of the health of the Catskill Forest at the second Catskill Environmental Research & Monitoring Conference (CERM). The forum brings together research on the region, to better understand the effects of extreme weather, air pollution, invasive species, biodiversity loss, and habitat fragmentation. The Catskills provide the majority of New York City's drinking water supply; CERM forums help coordinate research and identify research agendas to protect these resources.
In November, Cary Institute will hold a two-day conference examining the effects of climate change on plant, animal, and microbial species. The invitation-only event is being organized by Richard Ostfeld, Shannon LaDeau, and Amy Angert (University of British Columbia). With more than 50 invited experts, the conference's goal is to identify tools that will help lessen the negative effects of climate change on biodiversity, disease risk, extinction, and ecosystem function.
|
<urn:uuid:40f6fcd0-48f9-4b94-a078-36492fab999c>
|
CC-MAIN-2013-20
|
http://www.caryinstitute.org/newsroom/conferences?page=0%2525252C3%25252C2%252C2
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.907748
| 290
| 2.703125
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"extreme weather"
],
"nature": [
"biodiversity",
"biodiversity loss",
"ecosystem",
"habitat",
"invasive species"
]
}
|
{
"strong": 6,
"weak": 1,
"total": 7,
"decision": "accepted_strong"
}
|
Sign up for our newsletter
Gardening in the Rainy Zone.
Pronounced: AL-lee-um ka-ra-tah-vee-EN-see
Sunset zones 1-24.
USDA zones: 5-9.
Heat zones: 9-5.
Height: 4-10 inches (10-25 cm).
Two to three-inch diameter spherical umbel of 50 or more, flowers on a 6-inch stem.
Just above ground level sit the thick, leathery, pleated, 6-inch-long leaves. 'Ivory White' has a pale, pewter tone.
Full sun to partial shade.
Dry, well-drained, ordinary garden soil.
Remove offsets in autumn and plant.
Pests and Diseases:
Bulb rot can occur during our damp conditions of fall through spring. Onion fly and thrips may be a problem.
Rainy Side Notes
Allium karataviense is a spunky plant with a large flower globe. It is a dwarf ornamental onion compared to the giant species in the Melanocrommyum group of alliums, to which it belongs. It grows exceptionally well in our Mediterranean climate, as the plants go dormant by the time our annual drought comes around. Some gardeners grow the species and its cultivars because it has the most attractive foliage in the entire genus. Its horizontal, long, leathery, pleated foliage is green with a striking purple cast. As the blossom first opens, it is nestled on top of the attractive foliage; the flowering stems continue to grow, pushing the large, spherical umbels up and away from the leaves as they begin to look shabby. When you plant bulbs en masse in a garden bed or container, use companion perennials that are late in filling out, or annuals to fill in any bare soil the bulb's dormant state leaves behind.
Sensitive to excess moisture, these alliums are prone to rot, so grow them in well-drained soil in the ground or in deep containers with excellent drainage. Some A. karataviense cultivars available are 'Ivory Queen', 'Lucky' and 'Red Globe'.
Photographed in author's garden.
|
<urn:uuid:6eea884d-528f-40e3-9d46-b020db8089a9>
|
CC-MAIN-2013-20
|
http://rainyside.com/plant_gallery/bulbs/Allium_karataviense.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.841011
| 467
| 2.828125
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Fracking is short for ‘hydraulic fracturing,’ a term used to describe the process of pumping millions of gallons of pressurized water, sand and chemicals down a newly drilled well to blast out the surrounding shale rock and gas.
It's a relatively new technique that's made shale gas more popular in recent years. For a long time, shale gas — a natural gas that's embedded in ancient rocks known as shale — was deemed not worth the trouble by drilling companies because it was so difficult to recover. The gas is embedded in rocks and the best way to get it out is to drill in sideways, which only became possible in the 1980s and 1990s as the gas industry improved its directional drilling technology. Later, technological advances that let drillers use more water pressure made fracking into an economically viable option for obtaining shale gas from the rocks.
Read more about 'fracking'
Shale is scattered throughout the United States. The two hottest shale sites in America right now are the Barnett Shale in Texas and the Marcellus shale, which is buried beneath seven states and part of Lake Erie. Other large shale deposits are located in Arkansas, Louisiana, New Mexico, Oklahoma and Wyoming.
Despite its potential, though, a movement has welled up lately to block the shale gas boom. Some critics say embracing natural gas so heartily will slow the rise of renewable energy, but the biggest beef with shale isn't as much about its gas — it's about how we get it out of the ground. Shale gas would likely still be a novelty fuel without modern advances in hydraulic fracturing, yet the need for fracking is also starting to seem like it could be shale's fatal flaw. The practice has sparked major environmental and public health concerns near U.S. gas fields, from diesel fuel and unidentified chemicals in groundwater to methane seeping out of sink faucets and even blowing up houses.
|
<urn:uuid:75db20fa-e3dc-4304-905b-a8885026ad81>
|
CC-MAIN-2013-20
|
http://www.mnn.com/eco-glossary/fracking?page=3
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.96927
| 395
| 3.453125
| 3
|
[
"climate"
] |
{
"climate": [
"methane",
"renewable energy"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
The atlas of climate change: mapping the world's greatest challenge
University of California Press
, 2007 - Science
- 112 pages
Today's headlines and recent events reflect the gravity of climate change. Heat waves, droughts, and floods are bringing death to vulnerable populations, destroying livelihoods, and driving people from their homes.
Rigorous in its science and insightful in its message, this atlas examines the causes of climate change and considers its possible impact on subsistence, water resources, ecosystems, biodiversity, health, coastal megacities, and cultural treasures. It reviews historical contributions to greenhouse gas levels, progress in meeting international commitments, and local efforts to meet the challenge of climate change.
With more than 50 full-color maps and graphics, this is an essential resource for policy makers, environmentalists, students, and everyone concerned with this pressing subject.
The Atlas covers a wide range of topics, including:
* Warning signs
* Future scenarios
* Vulnerable populations
* Renewable energy
* Emissions reduction
* Personal and public action
Copub: Myriad Editions
|
<urn:uuid:28c4b9cd-3a36-4bc3-8eec-b16faeb61b8e>
|
CC-MAIN-2013-20
|
http://books.google.ca/books?id=c5vuAAAAMAAJ&q=carbon+credits&dq=related:ISBN819000610X&source=gbs_word_cloud_r&cad=6
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.876604
| 224
| 3.125
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"greenhouse gas",
"renewable energy"
],
"nature": [
"biodiversity",
"ecosystems"
]
}
|
{
"strong": 5,
"weak": 0,
"total": 5,
"decision": "accepted_strong"
}
|
Vanity Top Finishes
Vanity Top Composition
Geologically speaking, rocks are classified into 3 main categories: igneous, sedimentary, and metamorphic. Our sinks are made of all natural stones including granite, travertine, and cream marfil. Granite is a type of igneous rock, travertine is sedimentary while cream marfil is metamorphic.
Cream Marfil is a type of marble, which is a metamorphic rock resulting from the metamorphism of limestone. Limestone is a sedimentary rock, formed mainly from the accumulation of organic remains (bones and shells of marine microorganisms and coral) from millions of years ago. The calcium in the marine remains combines with carbon dioxide in the water to form calcium carbonate, which is the basic mineral structure of all limestone. When subjected to heat and pressure, the original limestone experiences a process of complete recrystallization (metamorphism), forming what we know as marble. The characteristic swirls and veins of many colored marble varieties, for example, cream marfil, are usually due to various mineral impurities. Cream marfil forms with a medium density and contains pores.
Travertine is a variety of limestone, a kind of sedimentary rock, formed of massive calcium carbonate from deposition by rivers and springs, especially hot, bubbly, mineral-rich springs. When hot water passes through limestone beds in springs or rivers, it dissolves the limestone, carrying the calcium carbonate into solution and transporting it to the surface. Given enough time, the water evaporates and the calcium carbonate crystallizes, forming what we know as travertine stone. Travertine is characterized by pores and pitted holes in the surface and takes a good polish. It is usually hard and semicrystalline. It is often beautifully colored (from ivory to golden brown) and banded as a result of the presence of iron compounds or other (e.g., organic) impurities.
Travertine is mined extensively in Italy; in the U.S., Yellowstone Mammoth Hot Springs are actively depositing travertine. It also occurs in limestone caves.
Granite is a very common type of intrusive igneous rock, mainly composed of three minerals: feldspar, quartz, and mica, with the first being the major ingredient. Granite is formed when liquid magma (molten rock material) cools beneath the earth's crust. Due to the extreme pressure within the center of the earth and the absence of atmosphere, granite is formed very densely with no pores and has a coarse-grained structure. It is hard, firm and durable.
|
<urn:uuid:a5a63392-3468-4388-85e9-d51ad43f7fd5>
|
CC-MAIN-2013-20
|
http://www.vintagetub.com/asp/product_detail.asp?item_no=GM-2206-40-BB
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939076
| 550
| 3.34375
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Green Power is electricity generated from renewable energy sources that are environmentally friendly such as solar, wind, biomass, and hydro power. New York State and the Public Service Commission have made a commitment to promote the use of Green Power and foster the development of renewable energy generation resources.
GREEN POWER IN NEW YORK
Electricity comes from a variety of sources such as natural gas, oil, coal, nuclear, hydro power, biomass, wind, solar, and solid waste. Green Power is electricity generated from renewable energy sources such as:
Solar: Solar energy systems convert sunlight directly into electricity.
Biomass: Organic wastes such as wood, other plant materials and landfill gases are used to generate electricity.
Wind: Modern wind turbines use large blades to catch the wind, spin turbines, and generate electricity
Hydropower: Small installations on rivers and streams use running or falling water to drive turbines that generate electricity.
NY’s ENERGY MIX
The pie chart below shows the mix of energy sources that was used to generate New York’s electricity in 2003. Buying Green Power will help to increase the percentage of electricity that is produced using cleaner energy sources.
You have the power to make a difference
For only a few pennies more a day, you can choose Green Power and make a world of difference for generations to come.
- Produces fewer environmental impacts than fossil fuel energy.
- Helps to diversify the fuel supply, increasing the reliability of the NY State electric system and contributing to more stable energy prices.
- Reduces use of imported fossil fuels, keeping dollars spent on energy in the State’s economy.
- Creates jobs and helps the economy by spurring investments in environmentally-friendly facilities.
- Creates healthier air quality and helps to reduce respiratory illness.
If just 10% of New York’s households choose Green Power for their electricity supply, it would prevent nearly 3 billion pounds of carbon dioxide, 10 million pounds of sulfur dioxide, and nearly 4 million pounds of nitrogen oxides from getting into our air each year. Green Power helps us all breathe a little easier.
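The statewide totals quoted above imply a rough per-household figure. The sketch below is back-of-the-envelope arithmetic only: the number of New York households (roughly 7 million in the early 2000s) is an assumption not stated in the text, while the avoided-emission totals come from the paragraph above.

```python
# Rough per-household estimate implied by the quoted statewide figures.
# ASSUMPTION: about 7 million households in New York State (early-2000s estimate).
NY_HOUSEHOLDS = 7_000_000
participating = 0.10 * NY_HOUSEHOLDS          # "just 10% of New York's households"

co2_lbs = 3_000_000_000                       # pounds of CO2 avoided per year (quoted)
so2_lbs = 10_000_000                          # pounds of sulfur dioxide (quoted)
nox_lbs = 4_000_000                           # pounds of nitrogen oxides (quoted)

LBS_PER_METRIC_TONNE = 2204.62
co2_per_home_lbs = co2_lbs / participating
print(f"~{co2_per_home_lbs:,.0f} lbs CO2 avoided per participating household per year")
print(f"~{co2_per_home_lbs / LBS_PER_METRIC_TONNE:.1f} metric tonnes CO2 per household per year")
```

Under that household assumption, each participating household would account for roughly two metric tonnes of avoided CO2 per year.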
Your Energy…Your Choice
Your electric service is made up of two parts, supply and delivery. In New York’s competitive electric market, you can now shop for your electric supply. You can support cleaner, sustainable energy solutions by selecting Green Power for some or all of your supply. No matter what electric supply you choose, your utility is still responsible for delivering your electricity safely and reliably, and will provide you with customer service and respond to emergencies.
What happens when you choose to buy Green Power?
The Green Power you buy is supplied to the power grid that delivers the electricity to all customers in your region. Your Green Power purchase supports the development of more environmentally-friendly electricity generation. You are helping to create a cleaner, brighter New York for future generations. You will continue to receive the safe, reliable power you’ve come to depend on.
Switching to Green Power is as easy as:
1. Use the list below to contact the Green Power service providers in your area.
2. Compare the Green Power programs.
3. Choose the Green Energy Service Company program that is right for you.
Using New York’s power to change the future
Energy conservation, energy efficiency and renewable energy are critical elements in New York’s economic, security and energy policies. New York State is committed to ensuring that we all have access to reliable electricity by helping consumers use and choose energy wisely. Recently, the state launched two initiatives – one designed to educate the public about the environmental impacts of energy production, and one to encourage the development of Green Power programs.
The Environmental Disclosure Label
NY RENEWABLE ENERGY SERVICE INITIATIVES
The New York State Public Service Commission is supporting development of renewable energy service programs in utility service territories across the state. These programs are spurring the development of new sources of renewable energy and the sale of Green Power to New York consumers. As a result, Green Power service providers are now offering a variety of renewable energy service options. Most New York consumers now have the opportunity to choose Green Power.
Suppliers Offering Green Energy Products
Green Power can be arranged through the following suppliers (may not operate in all utility territories). The PSC has created the following list of providers and does not recommend particular companies or products.
|Agway Energy Services||1-888-982-4929||www.agwayenergy.com|
|Amerada Hess||1-800-HessUSA (437-7872)||www.hess.com (Commercial and Industrial only)|
|Community Energy, Inc.||1-866-Wind-123|
|Constellation New Energy||1-866-237-7693||www.integrysenergy.com (Commercial and Industrial only)|
|Energy Cooperative of New York||1-800-422-1475||www.ecny.org|
|Green Mountain Energy Company||1-800-810-7300||www.greenmountain.com|
|Integrys Energy NY||1-518-482-4615|
|Juice Energy, Inc||1-888-925-8423||www.juice-inc.com|
|NYSEG Solutions, Inc.||1-800-567-6520||www.nysegsolutions.com|
|Pepco Energy Services, Inc. (NYC commercial and industrial only)|
|Just Energy (GeoPower – Con Ed territory)||1-866-587-8674||www.justenergy.com|
|Just Energy (GeoGas – Con Ed, KeySpan, NFG territories)||1-866-587-8674||www.justenergy.com|
|Central Hudson Gas and Electric||1-800-527-2714||www.centralhudson.com|
|National Grid||1-800-642-4272 (upstate), 1-800-930-5003 (Long Island)|
|New York State Electric and Gas||1-800-356-9734||www.nyseg.com|
|Orange and Rockland||1-877-434-4100||www.oru.com|
|Rochester Gas and Electric||1-877-743-9463||www.rge.com|
|Long Island Power Authority(LIPA)||1-800-490-0025||www.lipower.org|
|
<urn:uuid:c5ac2c27-7304-41b9-936a-59bd96158c32>
|
CC-MAIN-2013-20
|
http://ecoanchornyc.com/resources/clean-energy-programs/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.807671
| 1,372
| 3.5625
| 4
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"renewable energy"
],
"nature": [
"conservation"
]
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
Earth System Science Partnership (ESSP)
The ESSP is a partnership for the integrated study of the Earth System, the ways that it is changing, and the implications for global and regional sustainability.
The urgency of the challenge is great: In the present era, global environmental changes are both accelerating and moving the earth system into a state with no analogue in previous history.
To learn more about the ESSP, click on the links to access the Strategy Paper, brochure and a video presentation by the Chair of the ESSP Scientific Committee, Prof. Dr. Rik Leemans of Wageningen University, The Netherlands.
The Earth System is the unified set of physical, chemical, biological and social components, processes and interactions that together determine the state and dynamics of Planet Earth, including its biota and its human occupants.
Earth System Science is the study of the Earth System, with an emphasis on observing, understanding and predicting global environmental changes involving interactions between land, atmosphere, water, ice, biosphere, societies, technologies and economies.
ESSP Transitions into 'Future Earth' (31/12/2012)
On 31st December 2012, the ESSP will close and transition into 'Future Earth' as it develops over the next few years. During this period, the four global environmental change (GEC) research programmes (DIVERSITAS, IGBP, IHDP, WCRP) will continue close collaboration with each other. 'Future Earth' is currently being planned as a ten-year international research initiative for global sustainability (www.icsu.org/future-earth) that will build on decades of scientific excellence of the four GEC research programmes and their scientific partnership.
Click here to read more.
Global Carbon Budget 2012
Carbon dioxide emissions from fossil fuel burning and cement production increased by 3 percent in 2011, with a total of 34.7 billion tonnes of carbon dioxide emitted to the atmosphere. These emissions were the highest in human history and 54 percent higher than in 1990 (the Kyoto Protocol reference year). In 2011, coal burning was responsible for 43 percent of total emissions, oil for 34 percent, gas for 18 percent, and cement for 5 percent.
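The quoted shares and totals can be cross-checked with a few lines of arithmetic. The sketch below only restates the numbers in the paragraph above; the CO2-to-carbon conversion factor (the molecular weight ratio 44/12) is a standard addition of ours, not something taken from the GCP text.

```python
# Cross-check of the quoted 2011 fossil-fuel and cement emission figures.
total_gt_co2 = 34.7                              # quoted total, Gt CO2
shares = {"coal": 0.43, "oil": 0.34, "gas": 0.18, "cement": 0.05}

assert abs(sum(shares.values()) - 1.0) < 1e-9    # the quoted shares sum to 100 percent

for fuel, share in shares.items():
    print(f"{fuel:>6}: {share * total_gt_co2:5.1f} Gt CO2")

# Convert to gigatonnes of carbon (CO2 to C molecular weight ratio = 44/12).
gt_c = total_gt_co2 * 12 / 44
print(f"total: {total_gt_co2} Gt CO2 is roughly {gt_c:.1f} Gt C")
```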
For the complete 2012 carbon budget and trends, access the Global Carbon Project website.
GWSP International Conference - CALL for ABSTRACTS
The GWSP Conference on "Water in the Anthropocene: Challenges for Science and Governance" will convene in Bonn, Germany, 21 - 24 May 2014.
The focus of the conference is to address the global dimensions of water system changes due to anthropogenic as well as natural influences. The Conference will provide a platform to present global and regional perspectives on the responses of water management to global change in order to address issues such as variability in supply, increasing demands for water, environmental flows, and land use change. The Conference will help build links between science and policy and practice in the area of water resources management and governance, related institutional and technological innovations and identify ways that research can support policy and practice in the field of sustainable freshwater management.
Learn more about the Conference here.
Global Carbon Project (GCP) Employment Opportunity - Executive Director
The Global Carbon Project (GCP) is seeking to employ a highly motivated and independent person as Executive Director of the International Project Office (IPO) in Tsukuba, Japan, located at the Centre for Global Environmental Research at the National Institute for Environmental Studies (NIES). The successful candidate will work with the GCP Scientific Steering Committee (SSC) and other GCP offices to implement the science framework of the GCP. The GCP is seeking a person with excellent working knowledge of the policy-relevant objectives of the GCP and a keen interest in devising methods to integrate social and policy sciences into the understanding of the carbon-climate system as a coupled human/natural system. Read More.
Inclusive Wealth Report
The International Human Dimensions Programme on Global Environmental Change (IHDP) announces the launch of the Inclusive Wealth Report 2012 (IWR 2012) at the Rio +20 Conference in Brazil. The report presents a framework that offers a long-term perspective on human well-being and sustainability, based on a comprehensive analysis of nations' productive base and their link to economic development. The IWR 2012 was developed on the notion that current economic indicators such as Gross Domestic Product (GDP) and the Human Development Index (HDI) are insufficient, as they fail to reflect the state of natural resources or ecological conditions, and focus exclusively on the short-term, without indicating whether national policies are sustainable.
Future Earth: Global platform for sustainability research launched at Rio +20
Rio de Janeiro, Brazil (14 June 2012) - An alliance of international partners from global science, research funding and UN bodies launched a new 10-year initiative on global environmental change research for sustainability at the Forum on Science and Technology and Innovation for Sustainable Development. Future Earth - research for global sustainability, will provide a cutting-edge platform to coordinate scientific research which is designed and produced in partnership with governments, business and, more broadly, society. More details.
APN's 2012 Call for Proposals
The Asia-Pacific Network for Global Change Research (APN) announces the call for proposals for funding from April 2013. The proposals can be submitted under two separate programmes: regional global change research and scientific capacity development. More details.
State of the Planet Declaration
Planet Under Pressure 2012 was the largest gathering of global change scientists leading up to the United Nations Conference on Sustainable Development (Rio +20) with over 3,000 delegates at the conference venue and over 3,500 that attended virtually via live web streaming. The plenary sessions and the Daily Planet news show continue to draw audiences worldwide as they are available On Demand. An additional number of organisations, including 150 Science and Technology Centres worldwide streamed the plenary sessions at Planet Under Pressure-related events reaching an additional 12,000 viewers.
The first State of the Planet Declaration was issued at the conference.
Global Carbon Budget 2010
Global carbon dioxide emissions increased by a record 5.9 per cent in 2010 following the dampening effect of the 2008-2009 Global Financial Crisis (GFC), according to scientists working with the Global Carbon Project (GCP). The GCP annual analysis reports that the impact of the GFC on emissions has been short-lived owing to strong emissions growth in emerging economies and a return to emissions growth in developed economies.
Planet Under Pressure 2012 Debategraph
Debategraph and Planet Under Pressure Conference participants and organisers are collaborating to distill the main arguments and evidence, risks and policy options facing humanity into a dynamic knowledge map to help convey and inform the global deliberation at United Nations Rio +20 and beyond.
Join the debate! (http://debategraph.org/planet)
Integrated Global Change Research
The ESSP and partners - the German National Committee on Global Change Research (NKGCF), International Council for Science (ICSU) and the International Social Science Council (ISSC) is conducting a new study on 'Integrated Global Change Research: Co-designing knowledge across scientific fields, national borders and user groups'. An international workshop (funded by the German Research Foundation) convened in Berlin, 7 - 9 March 2012, designed to elucidate the dimensions of integration, to identify and analyse best practice examples, to exchange ideas about new concepts of integration, to discuss emerging challenges for science, and to begin discussions about balancing academic research and stakeholder involvement.
The Future of the World's Climate
The Future of the World's Climate (edited by Ann Henderson-Sellers and Kendal McGuffie) offers a state-of-the-art overview - based on the latest climate science modelling data and projections available - of our understanding of future climates. The book is dedicated to Stephen H Schneider, a world leader in climate interpretation and communication. The Future of the World's Climate summarizes our current understanding of climatic prediction and examines how that understanding depends on a keen grasp of integrated Earth system models and human interaction with climate. This book brings climate science up to date beyond the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report. More details.
Social Scientists Call for More Research on Human Dimensions of Global Change
Scientists across all disciplines share great concern that our planet is in the process of crossing dangerous biophysical tipping points. The results of a new large-scale global survey among 1,276 scholars from the social sciences and the humanities demonstrates that the human dimensions of the problem are equally important but severely under-addressed.
The survey conducted by the International Human Dimensions Programme on Global Environmental Change (IHDP-UNU) Secretariat in collaboration with the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the International Social Science Council (ISSC), identifies the following as highest research priority areas:
1) Equity/equality and wealth/resource distribution;
2) Policy, political systems/governance, and political economy;
3) Economic systems, economic costs and incentives;
4) Globalization, social and cultural transitions.
Food Security and Global Environmental Change
Food security and global environmental change, a synthesis book edited by John Ingram, Polly Ericksen and Diana Liverman of GECAFS, has just been published. The book provides a major, accessible synthesis of the current state of knowledge and thinking on the relationship between GEC and food security. Click here for further information.
GECAFS is featured in the latest UNESCO-SCOPE-UNEP Policy Brief - No. 12 entitled Global Environmental Change and Food Security. The brief reviews current knowledge, highlights trends and controversies, and is a useful reference for policy planners, decision makers and stakeholders in the community.
GWSP Digital Water Atlas
The Global Water System Project (GWSP) has launched its Digital Water Atlas. The purpose and intent of the Digital Water Atlas is to describe the basic elements of the Global Water System, the interlinkages of the elements and changes in the state of the Global Water System by creating a consistent set of annotated maps. The project will especially promote the collection, analysis and consideration of social science data on a global basis. Click here to access the GWSP Digital Water Atlas.
The ESSP office was carbon neutral in its office operations and travel in 2011. The ESSP supported the Gujarat wind project in India. More details.
The Global Carbon Project has published an ESSP commissioned report, "carbon reductions and offsets" with a number of recommendations for individuals and institutions who want to participate in this voluntary market. Click here to learn more and to download the report from the GCP website.
The ESSP is a joint initiative of four global environmental change programmes:
|
<urn:uuid:120b0d29-fee9-445b-a798-b67b2cbeb131>
|
CC-MAIN-2013-20
|
http://www.essp.org/index.php?id=10&L=0%252F%252Fassets%252Fsnipp%20%E2%80%A6%2F%2Fassets%2Fsnippets%2Freflect%2Fsnippet.reflect.php%3Freflect_base%3D
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.894252
| 2,191
| 2.984375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon budget",
"carbon dioxide",
"climate change",
"climate system",
"food security",
"ipcc"
],
"nature": [
"ecological"
]
}
|
{
"strong": 4,
"weak": 3,
"total": 7,
"decision": "accepted_strong"
}
|
To preserve our planet, scientists tell us we must reduce the amount of CO2 in the atmosphere from its current level of 392 parts per million ("ppm") to below 350 ppm. But 350 is more than a number—it's a symbol of where we need to head as a planet.
At 350.org, we're building a global grassroots movement to solve the climate crisis and push for policies that will put the world on track to get to 350 ppm.
Scientists say that 350 parts per million CO2 in the atmosphere is the safe limit for humanity. Learn more about 350—what it means, where it came from, and how to get there. Read More »
Submit your success story from your work in the climate movement and we'll share the best ones on our blog and social networks! Stories from people like you are crucial tools in growing the climate movement.
Help spread the word and look good while doing it—check out the 350 Store for t-shirts, buttons, stickers, and more.
|
<urn:uuid:ca6b4c4b-78a7-46de-b5b0-5091760e0798>
|
CC-MAIN-2013-20
|
http://350.org/en/about/blogs/story-endfossilfuelsubsidies
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.928506
| 206
| 3.078125
| 3
|
[
"climate"
] |
{
"climate": [
"co2"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Respiratory distress syndrome (RDS) occurs most often in infants who are born too early. RDS can cause breathing difficulty in newborns. If it is not properly treated, RDS can result in complications. This may include pneumonia, respiratory failure, chronic lung problems, and possibly asthma. In severe cases, RDS can lead to convulsions and death.
RDS occurs when an infant's lungs have not developed enough. Immature lungs lack a fluid called surfactant. This is a foamy liquid that helps the lungs open wide and take in air. When there is not enough surfactant, the lungs do not open well. This will make it difficult for the infant to breathe.
The chance of developing RDS decreases as the fetus grows. Babies born after 36 weeks rarely develop this condition.
A risk factor is something that increases your chance of getting a disease or condition. Factors that increase your baby's risk of RDS include:
- Birth before 37 weeks; increased risk and severity of condition with earlier prematurity
- Mother with insulin dependent diabetes
- Multiple birth
- Cesarean section delivery
- Cold stress
- Precipitous delivery
- Previously affected infant
- Being male
- Hypertension (high blood pressure) during pregnancy
The following symptoms usually start immediately or within a few hours after birth and include:
- Difficulty breathing, apnea
- Rapid, shallow breathing
- Delayed or weak cry
- Grunting noise with every breath
- Flaring of the nostrils
- Frothing at the lips
- Blue color around the lips
- Swelling of the extremities
- Decreased urine output
The doctor will ask about the mother's medical history and pregnancy. The baby will also be evaluated, as outlined here:
Amniotic fluid is fluid that surrounds the fetus. It may be tested for indicators of well-developed lungs such as:
- Lecithin:sphingomyelin ratio
- Phosphatidyl glycerol
- Laboratory studies—done to rule out infection
- Physical exam—includes checking the baby's breathing and looking for bluish color around the lips or on trunk
- Testing for blood gases—to check the levels of oxygen and carbon dioxide in the blood
- Chest x-ray —a test that uses radiation to take a picture of structures inside the body, in this case the heart and lungs
Treatment for a baby with RDS usually includes oxygen therapy and may also include:
A mechanical respirator is a breathing machine. It is used to keep the lungs from collapsing and support the baby's breathing. The respirator also improves the exchange of oxygen and other gases in the lungs. A respirator is almost always needed for infants with severe RDS.
Surfactant can be given to help the lungs open. Wider lungs will allow the infant to take in more oxygen and breathe normally. One type of surfactant comes from cows and the other is synthetic. Both options are delivered directly into the infant's windpipe.
Inhaled Nitric Oxide
Nitric oxide is a gas that is inhaled. It can make it easier for oxygen to pass into the blood. The gas is often delivered during mechanical ventilation.
Newborns with RDS may be given food and water by the following means:
- Tube feeding—a tube is inserted through the baby's mouth and into the stomach
- Parenteral feeding—nutrients are delivered directly into a vein
Preventing a premature birth is the best way to avoid RDS. To reduce your chance of having a premature baby:
- Get good prenatal care. Start as early as possible in pregnancy.
- Eat a healthful diet. Take vitamins as suggested by your doctor.
- Do not smoke. Avoid alcohol or drug use.
- Only take medicines that your doctor has approved.
If you are at high risk of giving birth to a premature baby:
- You may be given steroids about 24 hours before delivery. Steroids can help your baby's lungs develop faster.
- Your doctor may do an amniocentesis. This test will check the maturity of your baby's lungs. The results will help determine the best time for delivery.
- Reviewer: Michael Woods
- Review Date: 09/2012 -
- Update Date: 00/91/2012 -
|
<urn:uuid:7d8d5dec-597e-4805-b78e-8e623e0c5fb5>
|
CC-MAIN-2013-20
|
http://medtropolis.com/your-health/?/11599/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.926661
| 904
| 3.4375
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Darryl D’Monte continues his reportage of the Climate Action Network International meet in Bangkok where, he says, two NGOs put forward blueprints that could be templates on which the new climate treaty is based
Two NGOs -- the World Wide Fund for Nature (WWF) and a consortium led by Greenpeace -- have put forward blueprints that could be templates on which the new climate treaty, due to be negotiated in Copenhagen in December, is based.
WWF employs the concept of greenhouse development rights (GDRs), which have earlier also been propagated by the Stockholm Environment Institute and others. This August, it released a report titled ‘Sharing the effort under a global carbon budget’.
WWF says: “A strict global carbon budget between now and 2050 based on a fair distribution between rich and poor nations has the potential to prevent dangerous climate change and keep temperature rise well below 2 degrees Celsius.”
The report is based on research and shows different ways to cut global emissions by at least 80% by 2050, and by 30% by 2030, compared to 1990 levels. Both the EU and US have agreed to this 2050 target but differ drastically on the intermediate goals, which have a vital bearing on keeping global temperatures from rising above 2 degrees C, beyond which there will be catastrophic climate changes.
“In order to avoid the worst and most dramatic consequences of climate change, governments need to apply the strictest measures to stay within a tight and total long-term global carbon budget,” said Stephan Singer, director of global energy policy at WWF. “Ultimately, a global carbon budget is equal to a full global cap on emissions.”
According to the analysis, the total carbon budget -- the amount of tolerable global emissions over a period of time -- has to be set roughly at 1,600 Gt CO2 eq (gigatonnes of carbon dioxide equivalent) between 1990 and 2050.
As the world has already emitted a large part of this, the budget from today until 2050 is reduced to 970 Gt CO2 eq, excluding land use changes.
The report evaluates different pathways to reduce emissions, all in line with the budget. It describes three different methodologies which could be applied to distribute the burden and the benefits of a global carbon budget in a fair and equitable way.
- Greenhouse development rights (GDRs), where all countries need to reduce emissions below business-as-usual based on their per capita emissions, poverty thresholds, and GDP per capita.
- Contraction and convergence (C&C), where per capita allowances converge from a country’s current level to a level equal for all countries within a given period.
- Common but differentiated convergence (CDC), where developed countries’ per capita emissions converge to an equal level for all countries and others converge to the same level once their per capita emissions reach a global average.
The report says that by 2050, the GDR methodology requires developed nations as a group to reduce emissions by 157% (twice what they are contemplating). “Given that they cannot cut domestic emissions by more than 100%, they will need to finance emission reductions in other countries to reach their total.”
While the greenhouse development rights method allows an increase for most developing countries, at least for the initial period, the two other methods give less room for emissions increase. Under the C&C and CDC methodology, China, for example, would be required to reduce by at least 70% and India by 2-7% by 2050, compared to 1990.
The poorest countries will be allowed to continue to grow emissions until at least 2050 under the GDR methodology, but will be required to reduce them after 2025 under the two remaining allocation options.
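To make the scale of these numbers concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the 970 Gt CO2 eq remaining-budget figure quoted above; the starting emissions level E0 is a placeholder assumption for illustration, not a figure from the WWF report.

```python
# Back-of-the-envelope arithmetic on the remaining global carbon budget
# quoted above: roughly 970 Gt CO2 eq for 2009-2050, land-use change excluded.

REMAINING_BUDGET_GT = 970.0          # Gt CO2 eq, figure from the article
YEARS = 2050 - 2009                  # 41 years

# Flat pathway: the constant annual emissions that would just exhaust the budget.
flat_average = REMAINING_BUDGET_GT / YEARS
print(f"Flat average allowance: {flat_average:.1f} Gt CO2 eq per year")

# Linear-decline pathway: start at an ASSUMED current level E0 and fall in a
# straight line to Ef in 2050, choosing Ef so the area under the line
# (a trapezoid, (E0 + Ef) / 2 * YEARS) equals the budget.
E0 = 45.0                            # Gt CO2 eq/yr -- placeholder assumption
Ef = 2.0 * REMAINING_BUDGET_GT / YEARS - E0
print(f"Emissions needed by 2050 under a linear decline from {E0} Gt: {Ef:.1f} Gt CO2 eq")
```

Whichever allocation rule is chosen (GDR, C&C, or CDC), the same total area under the global emissions curve has to be respected; the methods differ only in how that area is divided among countries.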
The Greenpeace proposal, developed with WWF and other partners, was released at an earlier UN climate meet in Bonn this year. It also talks of a global carbon budget. Industrial countries would have to phase out their fossil fuel energy consumption by 2050. The trajectory would be as follows: reductions of 23% between 2013 and 2017, 40% by 2020 (twice the EU commitment), and 95% by 2050.
Globally, deforestation emissions would need to be reduced by three-quarters by 2020, and fossil fuel consumption by developing countries would have to peak by 2020 and then decline.
The proposal envisages that industrial countries will provide at least $160 billion a year from 2013 to 2017, “with each country assuming national responsibility for an assessed portion of this amount as part of its binding national obligation for the same period”.
The main source of this funding, which could prove controversial, would be auctioning 10% of industrial countries’ emissions allocations. There would also be levies on aviation and shipping, since both add to global warming.
Greenpeace proposes a Copenhagen climate facility which would apportion $160 billion as follows:
- $56 billion for developing countries to adapt to climate change.
- $7 billion a year as insurance against such risks.
- $42 billion in reducing forest destruction and degradation.
- $56 billion on mitigation and technology diffusion.
Talks at Bangkok are deadlocked between the G77 and China, which want to continue with the Kyoto Protocol, and the US, which wants a new treaty. The EU is open to a continuation of the old treaty with a new track to include the US (which has not ratified Kyoto), as well as emerging developing countries. Where and how such proposals will dovetail with the document now being negotiated is by no means clear, and it will be nothing less than a catastrophe for the entire planet if Copenhagen ends in a stalemate.
Infochange News & Features, October 2009
|
<urn:uuid:6325c38b-7f5b-4153-9540-46ea973e92c7>
|
CC-MAIN-2013-20
|
http://infochangeindia.org/environment/news/greenhouse-development-rights-and-global-carbon-budgets.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.950729
| 1,164
| 3.234375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon budget",
"carbon dioxide",
"climate change",
"co2",
"global warming",
"temperature rise"
],
"nature": [
"deforestation"
]
}
|
{
"strong": 6,
"weak": 1,
"total": 7,
"decision": "accepted_strong"
}
|
The Ancient Forests of North America are extremely diverse. They include the boreal forest belt stretching between Newfoundland and Alaska, the coastal temperate rainforest of Alaska and Western Canada, and the myriad of residual pockets of temperate forest surviving in more remote regions.
Together, these forests store huge amounts of carbon, helping to stabilise the climate. They also provide a refuge for large mammals such as the grizzly bear, puma and grey wolf, which once ranged widely across the continent.
In Canada it is estimated that ancient forest provides habitat for about two-thirds of the country's 140,000 species of plants, animals and microorganisms. Many of these species are yet to be studied by science.
The Ancient Forests of North America also provide livelihoods for thousands of indigenous people, such as the Eyak and Chugach people of Southcentral Alaska, and the Hupa and Yurok of Northern California.
Of Canada's one million indigenous people (First Nation, Inuit and Métis), almost 80 percent live in reserves and communities in boreal or temperate forests, where historically the forest provided their food and shelter, and shaped their way of life.
Through the Trees - The truth behind logging in Canada (PDF)
On the Greenpeace Canada website:
Interactive map of Canada's Boreal forest (Flash)
Fun animation that graphically illustrates the problem (Flash)
Defending America's Ancient Forests
|
<urn:uuid:8b41dac0-bc58-4cef-9f95-8333e7c91598>
|
CC-MAIN-2013-20
|
http://www.greenpeace.org/international/en/campaigns/forests/north-america/?tab=0
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.906006
| 298
| 3.9375
| 4
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change"
],
"nature": [
"habitat"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Blue Ocean Research & Conservation
Discovering the Ocean
Our scientific research has focused recently on an often overlooked yet ecologically important part of many ecosystems–sponges. Numbering over 8000 species, they are common to many aquatic habitats from the Ross Shelf in Antarctica, Lake Baikal in Russia, rocky reefs in South Africa, to coral reefs throughout the Caribbean.
Sponges are critical to coral reef survival. Sponges filter and clean the water, provide shelter for commercially important species like juvenile lobsters, and are eaten by many fish and turtles. On many coral reefs, sponges outnumber corals, with sponges providing three-dimensional structure that creates habitat and refuge for thousands of species. Because of their large size and kaleidoscope colors, sponges help fuel the popular and lucrative diving and tourism industries.
As climate change results in warmer, more acidic oceans, all marine life is potentially affected. It is well-known that coral health declines under these conditions, but the effect on sponge growth and survival is unknown. Significant declines in sponge health and biomass would be catastrophic to coral reefs, reducing water quality and severely impacting thousands of species from symbiotic microbes to foraging hawksbill turtles. A major loss of sponges would not only negatively impact marine life, but also local communities that depend on reefs for coastal protection and food.
Blue Ocean’s research scientist, Dr. Alan Duckworth, studied the effects of warmer, more acidic waters on the sponge Cliona celata, which bores into the shells of scallops and oysters, weakening and eventually killing them. Alan hypothesized that because climate change will result in shellfish having weaker shells, these sponges could cause greater losses of shellfish. This study has been done in collaboration with Dr. Bradley Peterson of Stony Brook University.
Alan’s other area of study was the first climate change experiment focused on tropical sponges. It investigated the effects of warmer, more acidic water on the growth, survival, and chemistry of several Caribbean coral reef sponges. This study was based at the Discovery Bay Marine Lab in Jamaica and chemical analysis of sponge samples was completed by Dr. Lyndon West from Florida Atlantic University.
Putting Teeth in Shark Conservation
The goal of this fellowship is to help small island nations by strengthening their ability to identify illegal shark fishing and enforce recently established shark sanctuaries. It will help provide much-needed scientific research, training, outreach, and DNA-testing tools, which can then be used to help protect valuable marine sanctuaries worldwide.
|
<urn:uuid:eeb90433-e247-4f67-89ff-aa11c0db8add>
|
CC-MAIN-2013-20
|
http://blueocean.org/programs/blue-ocean-research/?imgpage=1&showimg=491
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939833
| 525
| 3.703125
| 4
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change"
],
"nature": [
"conservation",
"ecosystems",
"habitat"
]
}
|
{
"strong": 4,
"weak": 0,
"total": 4,
"decision": "accepted_strong"
}
|
Robin Yapp, Contributor
December 10, 2012 | 5 Comments
Saudi Arabia's newly announced commitment to introducing solar-powered desalination plants marks a welcome and significant step in advancing the technology. In October 2012, Abdul Rahman Al-Ibrahim, governor of the country's Saline Water Conversion Corporation (SWCC), which procures the majority of its extensive municipal desalination assets, announced plans to establish three new solar-powered desalination plants in Haqel, Dhuba and Farasan. SWCC is the biggest producer of desalinated water worldwide, accounting for 18% of global output.
Energy-intensive desalination plants have traditionally run on fossil fuels, but renewables, particularly solar power, are now beginning to play a part.
Around half the operating cost of a desalination plant comes from energy use, and on current trends Saudi Arabia and many other countries in the region would consume most of the oil they produce on desalination by 2050.
The dominant desalination technology at present, with around 60% of global capacity, is Reverse Osmosis (RO), which pushes brine water through a membrane that retains the salt and other impurities.
Thermal desalination uses heat as well as electricity in distillation processes with saline feedwater heated to vaporise, so fresh water evaporates and the brine is left behind. Cooling and condensation are then used to obtain fresh water for consumption.
Multi Stage Flash (MSF), the most common thermal technique accounting for around 27% of global desalination capacity, typically consumes 80.6 kWh of heat energy plus 2.5-3.5 kWh of electricity per m3 of water. Large scale RO requires only around 3.5-5 kWh/m3 of electricity.
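As a rough illustration of what those specific-energy figures mean at plant scale, the sketch below applies them to a plant the size of Al-Khafji (30,000 m3 per day). The midpoint values chosen within the quoted ranges are assumptions for illustration, not data for any actual facility.

```python
# Back-of-the-envelope comparison using the specific-energy figures quoted above.
# Plant size matches Al-Khafji (30,000 m3/day); the values picked within the
# quoted ranges are illustrative midpoints, not measurements from any plant.

DAILY_OUTPUT_M3 = 30_000

# Reverse osmosis: 3.5-5 kWh of electricity per m3 (midpoint assumed)
ro_electricity_kwh = DAILY_OUTPUT_M3 * 4.25

# Multi Stage Flash: ~80.6 kWh of heat plus 2.5-3.5 kWh of electricity per m3
msf_heat_kwh = DAILY_OUTPUT_M3 * 80.6
msf_electricity_kwh = DAILY_OUTPUT_M3 * 3.0

print(f"RO:  {ro_electricity_kwh / 1000:,.0f} MWh of electricity per day")
print(f"MSF: {msf_heat_kwh / 1000:,.0f} MWh of heat plus "
      f"{msf_electricity_kwh / 1000:,.0f} MWh of electricity per day")
```

The gap in the output makes clear why large new seawater plants, and solar-powered ones in particular, favour reverse osmosis over thermal distillation.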
According to the International Renewable Energy Agency (IRENA), desalination with renewable energy can already compete cost-wise with conventional systems in remote regions where the cost of energy transmission is high. Elsewhere, it is still generally more expensive than desalination plants using fossil fuels, but IRENA states that it is 'expected to become economically attractive as the costs of renewable technologies continue to decline and the prices of fossil fuels continue to increase.'
Solar Reducing Costs
SWCC has taken a long view and aims to gradually convert all its desalination plants to run on solar power as part of a drive unveiled by the Saudi government earlier this year to install 41 GW of solar power by 2032.
The Al-Khafji solar desalination project, near the border with Kuwait, will become the first large-scale solar-powered seawater reverse osmosis (SWRO) plant in the world, producing 30,000 m3 of water per day for the town's 100,000 inhabitants.
Due for completion at the end of 2012, it has been constructed by King Abdulaziz City for Science and Technology (KACST), the Saudi national science agency, using technology developed in conjunction with IBM. Innovations include a new polymer membrane to make RO more energy efficient and protect the membrane from chlorine - which is used to pretreat seawater - and clogging with oil and marine organisms.
The use of solar power will bring huge cuts to the facility's contribution to global warming and smog compared to use of RO or MSF with fossil fuels, according to the developers.
Al-Khafji is the first step in KACST's solar energy programme to reduce desalination costs. For phase two, construction of a new plant to produce 300,000 m3 of water per day is planned by 2015, and phase three will involve several more plants by 2018.
Historically, desalination plants have been concentrated in the Persian Gulf region, where there is no alternative for maintaining the public water supply. The region has excellent solar power prospects, suggesting that coupling of the two technologies may become commonplace. A pilot project to construct 30 small-scale solar desalination plants by the Environment Agency Abu Dhabi has already seen 22 plants in operation, each producing 25 m3 of potable water per day.
But population increases and looming water scarcity have also prompted widespread investment in desalination. It is now practised in some 150 countries including the US, Europe, Australia, China and Japan and it is becoming an increasingly attractive option both financially and for supply security.
Over the past five years the capacity of operational desalination plants has increased by 57% to 78.4 million m3 per day, according to the International Development Agency. Sharply falling technology costs have been a key driver of the trend and an EU-funded project is examining the case for expanding solar-powered desalination.
Solar power may even offer a solution to an impending crisis in Yemen, where water availability per capita is less than 130 m3/year. Yemen's capital Sana'a, with a population of two million, faces running out of groundwater before 2025. It is estimated that a solar plant powered by a 1250 MW parabolic trough to desalinate water from the Red Sea and pump it 250 km to Sana'a could be constructed for around $6 billion.
Around 700 million people in 43 countries are classified by the UN as suffering from water scarcity today - but by 2025 the figure is forecast to rise to 1.8 billion. With the global population expected to reach nine billion by 2050 and the US secretary of state openly discussing the threat of water shortages leading to wars, desalinated water has never been more important.
Demand for desalinated water is projected to grow by 9% per year until 2016 due to increased consumption in the Middle East and North Africa (MENA) and in energy-importing countries such as the US, India and China. Population growth and depletion of surface and groundwater means desalination capacity in the MENA region is expected to grow from 21 million m3/day in 2007 to 110 million m3/day in 2030, according to the International Energy Agency.
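For readers who want to check the arithmetic behind that MENA projection, the implied compound annual growth rate can be computed directly from the figures quoted above; the snippet below is purely illustrative.

```python
# Implied compound annual growth rate (CAGR) from the MENA projection quoted
# above: 21 million m3/day in 2007 growing to 110 million m3/day by 2030.

start_capacity = 21.0    # million m3/day, 2007
end_capacity = 110.0     # million m3/day, 2030 projection
years = 2030 - 2007

cagr = (end_capacity / start_capacity) ** (1 / years) - 1
print(f"Implied MENA capacity growth: {cagr:.1%} per year")  # roughly 7.5%
```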
US President John F Kennedy, speaking in 1962, said: 'If we could produce fresh water from salt water at a low cost, that would indeed be a great service to humanity, and would dwarf any other scientific accomplishment.' In the half century since, the need for innovation to satisfy humanity's demand for clean water has become ever more urgent. While technological advances continue to improve the efficiency of desalination methods, it is vital that the sources of power used by desalination plants also continue to evolve.
|
<urn:uuid:fdd0471b-1300-4783-b88d-a84868202b69>
|
CC-MAIN-2013-20
|
http://www.renewableenergyworld.com/rea/news/article/2012/12/solar-energy-and-water-solar-powering-desalination?cmpid=rss
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.937564
| 1,390
| 3.53125
| 4
|
[
"climate"
] |
{
"climate": [
"global warming",
"renewable energy"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Gravity: Close to standard
Points of interest: Ancient Rakatan ruins
Tandun III was originally surveyed approximately 12,293 BBY by Doctor Beramsh, who led an expedition from Ord Mantell. What Beramsh and his team discovered was a lush beautiful world with close to standard gravity and was suitable for both oxygen breathing humans and humanoids. Further exploration of the planet also revealed ruins of ancient population centers which Beramsh believed were constructed by the Rakata.
Despite the survey report of ideal atmospheric conditions, its distance from the Hydian Way prevented Tandun III from ever being colonized. Historical records would indicate that a possible second survey was conducted generations later during Chancellor Finis Valorum's second term.
At some point between 29 BBY and 19 BBY, the Republic Group took an Insignia of Unity that once stood on the podium of the Galactic Senate Rotunda. The group believed the insignia could be used as a symbol to restore Republic honor to the galaxy and brought it to Tandun III for safe-keeping. Enlisting the help of the Antarian Rangers, the Republic Group built a secret storehouse in one of the planet's ancient ruins and made the Stellar Envoy the key to accessing the facility. In 19 BBY the Republic Group and the Antarian Rangers were considered enemies of the Empire and thus the secret facility and the Insignia of Unity were left abandoned and forgotten for over six decades.
During the Yuuzhan Vong's invasion of the galaxy, the Yuuzhan Vong settled the world and began Vongforming it to meet their needs. The land was transformed into cliffs of yorik coral, tampasis of s'teeni, with populations of scherkil hla, sparkbees, and other Yuuzhan Vong biots.
Over the course of twenty years the Vongforming had taken its toll on the planet. Intact forests abundant with life still covered the southern hemisphere, while surface temperatures had rendered landmasses in the northern hemisphere unsuitable for all but the most extremophilic sentient species. Extensive regions of extreme volcanic and tectonic activity gripped the planet in catastrophic forces that were likely to destroy it. The sky was filled with powerful winds and icy clouds that produced storms with sheets of rain, lightning, and hailstones. And while the atmosphere was still breathable, dweebit beetles filled it with high concentrations of carbon dioxide, methane, and sulfur.
In 43 ABY the Solos, accompanied by Tobb Jadak and Flitcher Poste, came to Tandun III as part of their quest to uncover the history of the Millennium Falcon. They landed the Falcon inside of the abandoned Republic Group warehouse and discovered the Insignia of Unity. It was at that time that Lestra Oxic appeared and revealed that he had followed the Solos to Tandun III in order to claim the emblem for himself. He told all present the history behind how the emblem ended up at Tandun III and inspected it only to realize that the emblem before him was a fake.
At the same time of the meeting, groundquakes increased in severity and volatility finally ending in the planet's strongest quake. Oxic and his associates, now accompanied by Jadak and Poste, left to continue the search for the true Insignia of Unity, as the Solos themselves fled the planet.
It was shortly after the Millennium Falcon's return to space that Tandun III flared and erupted in a shock wave that hurtled enormous chunks of itself into the vacuum of space.
Behind the scenesEdit
C-3PO stated that the first survey of the world took place around 12,293 BBY and that the expedition was launched from Ord Mantell. This is odd, given that Ord Mantell was not colonized until 12,000 BBY.
Additionally, C-3PO said Dr. Beramsh believed the ancient population centers were Rakatan in origin. Yet, the Rakata were unknown to the galaxy at large until the end of the Jedi Civil War in 3,956 BBY.
He went on to claim that the world most likely remained uncolonized because of its distance from the Hydian Way, which was not pioneered until 3,705 BBY. Why the world was never colonized in the thousands of years between its exploration and the founding of the Hydian Way therefore remains a mystery.
|
<urn:uuid:a9a8cdf5-ec0d-4a29-9793-af0701a63e8d>
|
CC-MAIN-2013-20
|
http://starwars.wikia.com/wiki/Tandun_III
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.961923
| 908
| 2.59375
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide",
"methane"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Hop into roo before it's too late
A call for Australians to eat kangaroos to combat climate change might be a case of tuck in now before it's too late, research by an Australian biologist suggests.
Writing in the December edition of Physiological and Biochemical Zoology, Dr Euan Ritchie, of James Cook University in Queensland, says population numbers of the iconic Australian marsupial are at threat from climate change.
Ritchie, from the School of Marine and Tropical Biology, says a rise in average temperatures in northern Australia of just 2°C could reduce suitable habitat for kangaroo populations by as much as 50%.
His findings follow a recent call by economist Professor Ross Garnaut of the Australian National University for Australians to help fight climate change by swapping their beef-eating habits for a taste of Skippy.
Ritchie, who admits to being a committed roo eater, says his findings should not deter people from kangaroo steaks and may even help the animal survive.
"The species [of kangaroo] currently being harvested are very well monitored," he says.
"So it means we will pick up differences [in range and population] very quickly and will be in a position to respond to that."
According to the study, the kangaroo species under greatest threat is the antilopine wallaroo.
Ritchie says it is more vulnerable because it has a very defined range across the tropical savannas of far northern Australia from Cape York in Queensland across to the Kimberleys of Western Australia.
Using climate change computer modelling, Ritchie and co-author Elizabeth Bolitho, also of James Cook University, found the 2°C temperature increase, predicted by 2030, would shrink the antilopine's range by 89%.
A 6°C increase, the upper end of temperature increase predictions to 2070, may lead to their extinction if they are unable to adapt to the arid environment which results, Ritchie says.
He says the main threat of climate change is not on the kangaroo itself, but on the habitat that sustains its populations.
Among the impacts that will affect their geographic range are increased prevalence of fires and changes to vegetation and the availability of water.
He says a 0.4°C increase would reduce the distribution of all species of kangaroos and wallaroos by 9%.
An increase of 2°C saw the geographic range of the kangaroos reduced by as much as 50%.
Weathering the changes
However the news is not all bad.
By contrast to the antilopine, Ritchie says the eastern gray kangaroo is in a strong position to weather climate changes because of its predominance in the cooler eastern seaboard of Australia.
And he says the red kangaroo and common wallaroo are better adapted to sustain hotter climates.
Professor Lesley Hughes, of the Climate Change Ecology Group at Macquarie University in Sydney, backs Ritchie's findings.
"Virtually every time we do bioclimatic modelling you get this result [of species under threat]," she says.
However she says few studies "go up to 6°C" because "the more you extrapolate into the future the more doubt you have".
Hughes adds however that a 6°C rise in temperature would "wipe out" most native Australian species.
|
<urn:uuid:5480eda0-6c9b-47da-84e6-b4482d8deaf3>
|
CC-MAIN-2013-20
|
http://www.abc.net.au/science/articles/2008/10/16/2392960.htm?site=science&topic=latest&listaction=unsubscribe
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955928
| 701
| 3.296875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"2°c",
"climate change"
],
"nature": [
"habitat"
]
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
The American Meteorological Society (AMS) promotes the development and dissemination of information and education on the atmospheric and related oceanic and hydrologic sciences and the advancement of their professional applications. Founded in 1919, the American Meteorological Society has a membership of more than 14,000 professionals, professors, students, and weather enthusiasts. Some members have attained the designation "Certified Consulting Meteorologist (CCM)", many of whom have expertise in the applied meteorology discipline of atmospheric dispersion modeling. To the general public, however, the AMS is best known for its "Seal of Approval" to television and radio meteorologists.
The AMS publishes nine atmospheric and related oceanic and hydrologic journals (in print and online), issues position statements on scientific topics that fall within the scope of their expertise, sponsors more than 12 conferences annually, and offers numerous programs and services. There is also an extensive network of local chapters.
The AMS headquarters are located in Boston, Massachusetts. The building was constructed by the famous Boston architect Charles Bulfinch as the third Harrison Gray Otis House in 1806, and was purchased and renovated by the AMS in 1958, with staff moving into the building in 1960. The AMS also maintains an office in Washington, D.C., at 1120 G Street NW.
Seal of Approval
The AMS Seal of Approval program was established in 1957 as a means of recognizing television and radio weather forecasters who deliver informative, well-communicated, and scientifically sound weather broadcasts. The awarding of a Seal of Approval is based on a demonstration tape submitted by the applicant to six members of a review panel after paying an application fee. Although a formal degree in meteorology is not required to obtain the original Seal of Approval, applicants must first complete a minimum of coursework (hydrology, basic meteorology, and thermodynamic meteorology, totaling at least 20 core college credits), ensuring that the forecaster has at least a minimal education in the field. There is no minimum amount of experience required, but previous experience in weather forecasting and broadcasting is suggested before applying. It is worth noting that many broadcasters who have obtained the Seal of Approval do in fact hold formal degrees in meteorology or related sciences and/or certifications from accredited university programs. Upon meeting the core requirements, holding the seal, and working in the field for three years, a broadcaster may then be referred to as a Meteorologist in the broadcast community.
As of February 2007, more than 1,600 Seals of Approval have been granted, of which more than 700 are considered "active." Seals become inactive when a sealholder's membership renewal and annual seal fees are not paid.
The original Seal of Approval program will be phased out at the end of 2008. Current applicants may either apply for the original Seal of Approval or the Certified Broadcast Meteorologist (CBM) Seal until December 31, 2008. After that date, only the CBM Seal will be offered. Current sealholders retain the right to use their seal in 2009 and onward, but new applications for the original Seal of Approval will not be accepted after December 31, 2008.
Note: The NWA Seal of Approval is issued by the National Weather Association and is independent of the AMS.
Certified Broadcast Meteorologist (CBM) Seal
The original Seal of Approval program was revamped in January 2005 with the introduction of the Certified Broadcast Meteorologist, or CBM, Seal. This seal introduced a 100-question multiple choice closed-book examination as part of the evaluation process. The questions on the exam cover many aspects of the science of meteorology, forecasting, and related principles. Applicants must answer at least 75 of the questions correctly before being awarded the CBM Seal.
Persons who obtained or applied for the original Seal of Approval before December 31, 2004 and were not rejected are eligible for an upgrade of their Seal of Approval to the CBM Seal upon the successful completion of the CBM exam and payment of applicable fees. Upgrading from the original Seal of Approval is not required. New applicants for the CBM Seal must pay the application fee, pass the exam, and then submit demonstration tapes to the review board before being considered for the CBM Seal. While original sealholders do not have to have a degree in meteorology or a related field of study to be upgraded, brand new applicants for the CBM seal must have a degree in meteorology or a related field of study to be considered.
In order to keep either the CBM Seal or the original Seal of Approval, sealholders must pay all annual dues and show proof of completing certain professional development programs every five years (such as educational presentations at schools, involvement in local AMS chapter events, attendance at weather conferences, and other activities of the like).
As of February 2007, nearly 200 CBM seals have been awarded to broadcast weather forecasters, either upgraded from the original Seal of Approval or granted to new applicants.
The American Meteorological Society offers several awards in the fields of meteorology and oceanography.
Atmospheric Research Awards Committee
- The Carl-Gustaf Rossby Research Medal
- The Jule G. Charney Award
- The Verner E. Suomi Award
- The Remote Sensing Prize
- The Clarence Leroy Meisinger Award
- The Henry G. Houghton Award
Oceanographic Research Awards Committee
- The Sverdrup Gold Medal
- The Henry Stommel Research Award
- The Verner E. Suomi Award
- The Nicholas P. Fofonoff Award
The American Meteorological Society publishes the following scientific journals:
- Bulletin of the American Meteorological Society
- Journal of the Atmospheric Sciences
- Journal of Applied Meteorology and Climatology
- Journal of Physical Oceanography
- Monthly Weather Review
- Journal of Atmospheric and Oceanic Technology
- Weather and Forecasting
- Journal of Climate
- Journal of Hydrometeorology
- Weather, Climate, and Society (new journal, to start 2009)
- Earth Interactions
- Meteorological Monographs
The American Meteorological Society produces the following scientific databases:
- Meteorological and Geoastrophysical Abstracts
As a means of promoting "the development and dissemination of information and education on the atmospheric and related oceanic and hydrologic sciences and the advancement of their professional applications", the AMS periodically publishes policy statements on issues within its competence, on subjects such as drought, ozone depletion, and acid deposition.
In 2003, the AMS issued the position statement Climate Change Research: Issues for the Atmospheric and Related Sciences, which reads in part:
- Human activities have become a major source of environmental change. Of great urgency are the climate consequences of the increasing atmospheric abundance of greenhouse gases... Because greenhouse gases continue to increase, we are, in effect, conducting a global climate experiment, neither planned nor controlled, the results of which may present unprecedented challenges to our wisdom and foresight as well as have significant impacts on our natural and societal systems.
- The Maury Project (a comprehensive national program of teacher enhancement based on studies of the physical foundations of oceanography)
|
<urn:uuid:969ea0b8-79ed-42ee-804d-ad3c197850b0>
|
CC-MAIN-2013-20
|
http://www.absoluteastronomy.com/topics/American_Meteorological_Society
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.92667
| 2,528
| 2.765625
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"climate change",
"climate system",
"drought"
],
"nature": [
"ecosystem"
]
}
|
{
"strong": 4,
"weak": 1,
"total": 5,
"decision": "accepted_strong"
}
|
To be sustainable, old cities need new, smarter infrastructures, says HP Labs sustainability visionary Chandrakant Patel
Since arriving at HP Labs in 1991, HP Fellow and director of HP’s Sustainable IT Ecosystem Lab Chandrakant Patel has worked to make IT systems more energy efficient. His early research in microprocessor system design led Patel to pioneer the concept of ‘smart data centers’ – data centers in which compute, power and cooling resources are provisioned based on the need. He now extends his vision of energy efficiency beyond the data center to what he calls ‘City 2.0.’
As nations look to rebuild their aging infrastructures and at the same time take on the challenge of global climate change, Patel argues that resource usage needs to be at the heart of their thinking, and that we must take a fundamental perspective in examining "available energy" when building and operating the infrastructure. Only if we use fewer resources to both build and run our infrastructures, he says, will we create cities that can thrive for generations to come. And we can only build in that way, he suggests, if we seamlessly integrate IT into the physical infrastructure to provision resources – power, water, waste, and so on – at city scale, based on need.
Chandrakant Patel recently described his vision of building City 2.0, enabled by a Sustainable IT Ecosystem.
So you started out by addressing energy use in the data center?
That’s right. When we created the Thermal Technology Research Program at HP Labs in the early 90s, our industry was not addressing power and cooling in the data center at all. But we thought the data center should be looked at as a system. And if you look at it that way, there are three key components to the data center: computing, power, and cooling. We felt all of these should be provisioned based on need. Just as you dedicate the right computing instrument to the workload, you supply the power and cooling on an as-needed basis. You use sensors and controls, so that when workload comes in, you decide what kind of workload it is and give it the right level of compute, power, and cooling.
What kind of impact does this have on energy use?
Well, we built a “smart” data center in Palo Alto and a large data center in Southern India as a proof of concept. In the data center in Southern India, we used 7,500 sensors to record the temperature of its various parts, which feed back to a system that automatically controls all the air conditioners. In addition to saving 40% in energy used by the cooling system, the fine grained sensing allowed us to dynamically place workloads and shut machines down that are not being used.
Furthermore, with 7500 sensors polling every few seconds, we are able to mine sensor data to detect “anomalies” so we can extend the life of large scale physical systems such as compressors in the cooling plant. This type of sensing and control is critical for large scale physical installations. One wouldn’t run a house without a thermostat, so why should one run a multi-megawatt data center without fine grained measurement and control? A ceiling fan in a house uses a few hundred watts, and it has a knob so one can change its speed based on the need.
The blowers in air handling units inside a data center use 10 kilowatts, yet they often run at full speed all the time regardless of the data center's needs!
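As a concrete (and deliberately simplified) illustration of the need-based provisioning Patel describes, the sketch below shows a proportional controller that trims blower speed toward a temperature setpoint instead of running flat out. It is not HP's control software; the sensor interface, setpoint, and gain are hypothetical.

```python
# Minimal proportional-control sketch for need-based cooling.
# Hypothetical example only: the setpoint, gain, and speed limits are assumed,
# and the temperature lists stand in for readings from a real sensor network.

SETPOINT_C = 27.0                  # target cold-aisle temperature (assumed)
GAIN = 0.08                        # fraction of full speed per degree C (assumed)
MIN_SPEED, MAX_SPEED = 0.3, 1.0    # blower speed as a fraction of full speed

def control_step(current_speed: float, rack_temps_c: list[float]) -> float:
    """Return the next blower speed given the latest rack temperatures."""
    hottest = max(rack_temps_c)               # provision for the worst-case rack
    error = hottest - SETPOINT_C              # positive when the aisle is too hot
    new_speed = current_speed + GAIN * error  # speed up when hot, ease off when cool
    return min(MAX_SPEED, max(MIN_SPEED, new_speed))

# Example: a cool aisle lets the blower back off instead of running at 100%.
speed = 1.0
for temps in ([26.1, 25.4, 26.8], [26.0, 25.9, 26.2], [28.3, 27.1, 26.5]):
    speed = control_step(speed, temps)
    print(f"hottest rack {max(temps):.1f} C -> blower at {speed:.0%}")
```

The point is not this particular controller but the principle: fan speed, like compute and power, becomes a function of measured demand rather than a fixed setting.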
How do you apply this kind of approach over the entire IT ecosystem?
First, you need to ask: what is the ecosystem? The world has billions of service-oriented client devices, like our laptops and handhelds. Then it has thousands of data centers, and thousands of print factories. That’s the ecosystem. Then you need to ask if that ecosystem is as energy efficient as it can be. To do that we take a life cycle approach. We look at the energy it takes to build and operate IT products over their life-cycle. If you do that, you can see that you might design, build and operate them in completely different ways – through appropriate choice of energy conversion means and appropriate choice of materials - ultimately leading to least energy, least material designs. Indeed, we believe that taking such an “end to end” view in design and management is required to reduce the cost of IT services that will enable the billions to use IT ecosystem to meet their needs.
Can you give an example?
Take a laptop as an example.
How much energy is required to build a laptop - to extract the material, to manufacture it, operate it and ultimately reclaim it? Using Joules of available energy consumed as the currency, one can examine the supply chain and design the laptop with appropriate choice of materials to minimize the consumption of available energy. Such a technique also allows one to examine the carbon emission across a product life cycle. This type of proactive approach is good for the environment and good for business. Good for business because, in our opinion, such an approach will lead to lowest-cost products and services.
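A life-cycle tally of the kind Patel describes might be sketched as follows. Every number is a placeholder assumption chosen only to show the bookkeeping; none of the figures come from HP.

```python
# Illustrative life-cycle energy tally for a laptop, in megajoules of available
# energy (exergy). All figures are placeholder assumptions, not measured data.

MJ = 1e6  # joules per megajoule

phases_mj = {
    "materials extraction": 1500,   # assumed
    "manufacturing":        1200,   # assumed
    "transport":             150,   # assumed
    "end-of-life reclaim":   100,   # assumed
}

# Operation: assumed 30 W average draw, 8 hours/day, over a 4-year service life.
operation_j = 30 * 8 * 3600 * 365 * 4
phases_mj["operation"] = operation_j / MJ

total_mj = sum(phases_mj.values())
for phase, mj in phases_mj.items():
    print(f"{phase:>22}: {mj:8.0f} MJ ({mj / total_mj:5.1%})")
print(f"{'total':>22}: {total_mj:8.0f} MJ")
```

With a breakdown like this, a designer can see at a glance whether cutting embodied energy (material choice) or operational energy (power management) offers the bigger win for a given product.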
Is there an impact on IT services too?
Absolutely. Today, I can reserve train tickets online for rail travel in India from my home in the US. But most of the 700 million people in India must take a motorized rickshaw to the train station, and spend half a day, to get the ticket. They can ill afford to spend the time. Couldn't we give them appropriately priced IT services so they can do it online? That's what Web 2.0 is about for me -- meeting the fundamental needs of a society. Furthermore, these kinds of services would reduce congestion and reduce consumption of available energy. We can ask - and we need to ask - the same kinds of questions when we are talking about bringing people all kinds of resources more effectively.
How do you get the information you need to make decisions based on energy used over the life of a product?
Firstly, at design time, the IT ecosystem enables us to create a tool for analysis based on scientific principles rather than anecdotes and rules of thumb. Secondly, the IT ecosystem provides us the ability to avail energy and material data for lifecycle analysis in design phase e.g. the available energy used in extracting Aluminum from Bauxite. Next, during operation, you use sensors and controls to manage your resources. Take traffic flow in a city. All you need to manage it is a backbone, the sensors, the data center and a panel where you can collect all that information and manage it. With that we can manage the flow so that available energy is being provisioned based on the need. You can do the same with electricity, water, waste, etc. Thus, you are using the IT ecosystem to have a net positive impact by deconstructing conventional business models – you're creating a sustainable ecosystem using IT.
Is that what you mean by the City 2.0 ?
Yes. We started the Sustainable IT Ecosystem Lab at HP Labs because we wanted to integrate the IT ecosystem into the next generation of cities - what I've called City 2.0. If you had to build a city all over again, how would you build it? Are you going to just build a city with more roads, more bridges? Or are you going to use the IT ecosystem so that more people can use less of those physical resources more effectively? Wouldn't you think it would be better if a data center was there, and it managed all the resources? Wouldn't it be better to harvest the rain that falls in the area and have a lot of local reservoirs? Wouldn't it be good to have a local power grid instead of bringing power from somewhere else? Those are the kinds of questions that we are wrestling with.
How can HP contribute to building the City 2.0?
HP has the breadth and the depth – the billions of service-oriented client devices, the thousands of data centers and the thousands of print factories. HP covers all aspects of the IT ecosystem. And we have a great history in measurement, communication, and computation. What I’d like to see us do is leverage the past to create the future. A future where we address the fundamental needs of society by right provisioning the resources so that future generations can have the same quality of life as we do.
The US and many other countries are in recession. Building the City 2.0 is an expensive proposition, so why is it worth doing?
First of all, I think building a smart infrastructure could revitalize our economy by providing businesses with the opportunity to apply their new technologies for solving age-old problems like water distribution and energy management. And secondly, if governments around the world are going to spend on infrastructure, we probably want to do it in a smart way: not just building things for the sake of building them. We can - and should - do it in a planned, sustainable way where we also create new, high-paying and long-lasting jobs.
More information about HP Labs is available at: www.hpl.hp.com/about/
|
<urn:uuid:21b3c90f-240f-44e0-a781-bc179e8931d4>
|
CC-MAIN-2013-20
|
http://www.telecomtv.com/comspace_newsDetail.aspx?n=45357&id=c26cc842-5ba0-470e-9b9d-c92b4a93db96
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.944234
| 1,891
| 2.59375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change"
],
"nature": [
"ecosystem"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
In 1929, the wild financial speculation of the Roaring Twenties came to a sudden halt in October when the stock market began to slide.
Banker's Committee Stops Panic of '29
A new story by Jeff Provine
Worries spread through the economic community about the passing of the Smoot-Hawley Tariff Act. Tariffs had always been a point of contention among Americans, even spurring South Carolina to threaten secession over the Tariff Act of 1828. Producers such as farmers and manufacturers called for protective tariffs while merchants and consumers demanded low prices. The American economy soared while post-war Europe rebuilt in the '20s, and the Tariff Act of 1922 skimmed valuable revenue from the nation's income that would otherwise have been needed as taxes. The country barely noticed, and the economy surged forward as new technological luxuries became available as well as new disposable income.
Meanwhile, however, the nation faced an increasingly difficult drought while food prices continued to drop during Europe's recovery. Farmers were stretched thinner and thinner, prompting calls for protective agricultural tariffs and cheaper manufactured goods. In his 1928 presidential campaign, Herbert Hoover promised just that, and as the legislature met in 1929, talks on a new tariff began. Led by Senator Reed Smoot (R-Utah) and Representative Willis C. Hawley (R-Oregon), the bill quickly became more than Hoover and the farmers had bargained for as rates would increase to a level exceeding 1828 for industrial products as well as agricultural. The revenue would be a great boon, but it unnerved economists, who wondered if it could kill the economic growth already being slowed by a dipping real estate market.
The weakened nerves shifted from economists to investors, who took the heated debate in the Senate as a clue that times may become rough and decided to get out of the stock market while they could. Prices had skyrocketed over the course of the '20s as the middle class blossomed and minor investors came into being. Another hallmark of the '20s, credit, enabled people to buy stock on margin, borrowing money they could invest at what they hoped would be a higher percentage. The idea of a "money-making machine" spread, and August of 1929 showed more than $8.5 billion in loans, more than all of the money in circulation in the United States. The market peaked on September 3 at 381.17 and then began a downward correction. At the rebound in late October, panicked selling began. On October 24, what became known as "Black Thursday", the market fell more than ten percent. On Friday, it did the same, and the initial outlook for the next week was dire.
Amid the early selling in October, financiers noted that a crash was coming and met on October 24 while the market plummeted. The heads of firms and banks such as Chase, Morgan, and the National City Bank of New York collaborated and finally placed vice-president of the New York Stock Exchange Richard Whitney in charge of stopping the disaster. Forty-one-year-old Whitney was a successful financier with an American family dating back to 1630 and numerous connections in the banking world who had purchased a seat on the NYSE Board of Governors only two years after starting his own firm. Whitney's initial strategy was to replicate the cure for the Panic of 1907: purchasing large amounts of valuable stock above market price, starting with the "blue chip" favorite U.S. Steel, the world's first billion-dollar corporation.
On his way to make the purchase, however, Whitney bumped into a junior who was analyzing the banking futures based on the increase of failing mortgages from failing farms and a weakening real estate market. He suggested that the problems of the new market were caused from the bottom-up, and a top-down solution would only put off the inevitable. Instead of his ostentatious show of purchasing to show the public money was still to be had, Whitney decided to use the massive banking resources behind him to support the falling. He made key purchases late on the 24th, and then his staff worked through the night determining what stocks were needlessly inflated, what were solid, and what could be salvaged (perhaps even at a profit). Stocks continued to tumble that Friday, but by Monday thanks to word-of-mouth and glowing press from newspapers and the new radio broadcasts, Tuesday ended with a slight upturn in the market of .02 percent. Numerically unimportant, the recovery of public support was the key success.
With the initial battle won, Whitney spearheaded a plan to salvage the rest of the crisis as real estate continued to fall and banks (which were quickly running out of funds as they seized more and more of the market) would soon have piles of worthless mortgaged homes and farms. Banks organized themselves around the Federal Reserve, founded in 1913 after a series of smaller panics and determined rules that would keep banks afloat. Further money came from lucrative deals with the wealthiest men in the country such as John D. Rockefeller, Henry Ford, and the Mellons of Pittsburgh. Businesses managed to continue work despite down-turning sales through loans, though the unemployment rate did increase from 3 to 5 percent over the winter.
The final matter was the question of international trade. As the Smoot-Hawley Tariff Act continued in the Senate, economists predicted retaliatory tariffs from other countries to kill American exports, but Washington turned a deaf ear. Whitney decided to protect his investments in propping up the economy by investing with campaign contributions. Democrats took the majority as the Republicans fell to Whitney's use of the press to blame the woes of the economy on Congressional "airheads". Representative Hawley himself lost his seat in the House, which he had held since 1907, to Democrat William Delzell. President Hoover, a millionaire businessman before entering politics, noted the shift, but remained quiet and dutifully vetoed the new tariff.
By 1931, it became steadily obvious that America had shifted to an oligarchy. The banks propped up the market and were propped up themselves by a handful of millionaires. If Rockefeller wanted, he could single-handedly pull his money and collapse the whole of the American nation. Whitney took greater power as Chairman of the Federal Reserve, whose new role controlled indirectly everything of economic and political worth. As the Thirties dragged on, the havoc of the Dust Bowl made food prices increase while simultaneously weakening the farming class, and Whitney gained further power by ousting Secretary of Agriculture Arthur Hyde and installing his own man as a condition for Hoover's reelection in '32.
Chairman Whitney would "rule" the United States, wielding public relations power and charisma to give Americans a strong sense of national emergency and patriotism during times like the Japanese War in '35 (which secured new markets in East Asia) and the European Expedition in '39. He employed the Red Scare to keep down ideas of insurrection and used the FBI as a secret police, but his ultimate power would be that, at any point, he could tamper with interest rates or stock and property value, and the country would spiral into rampant unemployment and depression, dragging the rest of the world with it.
|
<urn:uuid:c15e4abb-1d2c-4120-8226-b38f82c7defa>
|
CC-MAIN-2013-20
|
http://www.todayinah.co.uk/[email protected]&story=39750-Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.978055
| 1,457
| 3.765625
| 4
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
I love the fall and how the leaves change from deep greens to reds and orange and gold. This natural riot of color takes place wherever there are trees with leaves and there’s almost no place better to watch the leaves change than in the Northeast. This part of the four-seasoned ritual of life attracts tourists from far and wide and tugs at me to make a special trip to our home in the mountains there. And this reminds me every year about the natural changes that are a constant in our lives.
Ever wonder why and how the leaves change colors?
• As summer ends and autumn comes, the days get shorter and shorter. This is how the trees "know" to begin getting ready for winter. The trees will begin to rest and live off the food they stored during the summer. The green chlorophyll disappears from the leaves. As the bright green fades away, we begin to see yellow and orange colors. Small amounts of these colors have been in the leaves all along - we didn’t see them in the summer because they were covered up by the green chlorophyll. The bright reds and purples we see in leaves are made mostly in the fall. In some trees, like maples, glucose is trapped in the leaves after photosynthesis stops. Sunlight and the cool nights of autumn cause the leaves to turn this glucose into a red color. It’s the combination of all these things that makes the beautiful fall colors we enjoy each year.
Ever hear of Thomas Cole’s The Voyage of Life series? In 1840 he did this series of paintings that represent an allegory of the four stages, or seasons, of human life:
• In childhood, the infant glides from a dark cave into a rich, green landscape.
• As a youth, the boy takes control of the boat and aims for a shining castle in the sky.
• In manhood, the adult relies on prayer and religious faith to sustain him through rough waters and a threatening landscape.
• Finally, the man becomes old and the angel guides him to heaven across the waters of eternity.
In each painting, accompanied by a guardian angel, the voyager rides the boat on the River of Life. The landscape, corresponding to the seasons of the year, plays a major role in telling the story. And in those paintings you can clearly see the leaves changing colors in the season (manhood) that represents the fall of the voyager’s life.
So what’s this mean to you and me? Things change! Always! Life is full of changes and most of us are creatures of habit. And because we don’t know what’s next, we tend to cling to what we already have and know and are comfortable with. We reminisce about and cherish the past because it’s familiar, it’s already happened and we know how the movie ends. And while that’s generally true, it’s the half of the story that we tend to recognize. The other half is that the things we learn from the past should continually be updating our knowledge of life, and how to process the new things we see and experience, and how to better understand the meaning of who and what we are – that’s the harder part of the story to accept.
With each passing season, and the changes that occur, we need to grow and become wiser. And that wisdom should create the stuff we need to constantly be better, to do the things we’re called upon to do each day better, and to help those around us to become better. But you won’t learn anything or get better if you’re not open to the changes – natural or man-made – that occur every day.
I wish you could join me here at our camp to look across the lake at the beauty that is unfolding. The scene is constant; the colors let me know that time is marching on. On the one hand I could worry that the seasons of my life are marching on, or, on the other, I could be challenged by the things I’ve learned this year that will help me to be wiser and more thoughtful in the future. One stunts natural growth; the other invigorates a sense of wonder about the world around us and the endless possibilities that potentially exist. The choice is ours. And while these leaves will begin to fade and fall soon, the inspiration that they trigger should last a lifetime. That’s the voyage of life, and I’m sure glad to be on it!
My message this week is about being inspired to dream about improving our lives:
“You are never too old to set another goal or to dream a new dream.” -C.S. Lewis
Clive Staples Lewis (1898 – 1963), commonly referred to as C. S. Lewis and known to his friends and family as "Jack", was a British novelist, academic, medievalist, literary critic, essayist, lay theologian and Christian apologist from Ireland.
Got any new dreams today? Not the ones you try to remember and think about when you wake, but the kind that have you excited to try something really new. Everyone can dream, but not everyone has the curiosity, energy, courage and stamina to try to attempt and achieve their dreams. Most want things to be smooth and easy, with no surprises or challenges that can potentially make you look silly. Fact is, without those challenges or knowing how to recover from looking silly you’ll never get to experience what it is to learn from trying something new. You can tell the ones who are into this – the twinkle in their eye, the bounce in their step, the way they carry themselves. If that’s you, and you’ll know if it is, then set another goal today, dream another dream today and make a pledge to be creative and innovative today. Go ahead – you’re never too old!
Friday, September 30, 2011
at 5:24 AM
Friday, September 23, 2011
“Everyone wants to be true to something, and we’re true to you” - that’s the marketing tagline for Jet Blue’s travel rewards program. I know because it kept scrolling across the little screen on the back of the seat in front of me when I recently flew across country. It’s okay in the context of what they’re trying to promote, but it also might apply to more than just loyalty programs. And it may be that because people naturally want to be ‘true blue’ to so many things, it becomes overused and almost trite. That’s too bad. Because being ‘true blue’ can be a good thing.
First: ever wonder where the term ‘true blue’ comes from?
• Loyal and unwavering in one's opinions or support for a cause or product.
• 'True blue' is supposed to derive from the blue cloth that was made at Coventry, England in the late middle-ages. The town's dyers had a reputation for producing material that didn't fade with washing, i.e. it remained 'fast' or 'true'. The phrase 'as true as Coventry blue' originated then and is still used (in Coventry at least).
• True Blue is an old naval/sailing term meaning honest and loyal to a unit or cause.
• And dictionaries say that true blue refers to “people of inflexible integrity or fidelity”.
And second: does ‘true blue’ really mean anything in this era of fast food and slick advertising?
There are lots of loyalty programs – hotels, airlines, slot clubs, retail stores, pop food brands, credit cards, clothing, wine, restaurants, movie theaters, travel sites, theme parks, computer games and countless more – and they all try to get you to stick with them by rewarding you in all kinds of ways: points, miles, free gifts, shows, food and on and on. But it seems a bit contrived, as if there’s some Oz-like character behind a curtain trying to entice you with these awards (read: bribes).
Imagine if this kind of thing were done with going to school or work, singing in a choir, participating in some community event, volunteering your time to some worthy cause, remaining friends or staying in a relationship… doesn’t seem as appropriate in those, does it? Think of someone or something you really like: do you really and truly like them or it, or do you need to be bribed with rewards to feel that way? Of course you don’t. So why do the airlines and hotels and all the other businesses we buy from have to bribe us to like them?
But – there are companies out there that do understand what it takes to win your loyalty:
• Southwest Airlines was one of the first companies that made having fun and using common sense part of their strategy for success. Singing the safety jingle, devising a different boarding routine and setting the record for on-time departures set them apart and won over customers. They got it!
• Zappos doesn’t give you anything extra to make you want to come back – they believe that great service plus free shipping and returns will do that. Everyone said that nobody would buy shoes online – wrong. Zappos gets it!
• Apple wins and keeps their customer’s loyalty by incubating and introducing cool new ideas and products all the time. And they’re just about the biggest and most successful and most admired company on the planet. They get it!
But for every Southwest Airlines-type great experience there are hundreds of others that underperform and underwhelm. So they sign you up and hope that rewarding your loyalty overcomes the other things they do that destroy your loyalty. Seems to me they just don’t get it.
Jet Blue says they give you more leg room – that’s true if you pay extra for those few rows that have it. How come they just don’t make eye contact and smile more? How come they can’t get the bags to the conveyor in less than 30 minutes (which may not seem like much to them but after a cross country flight an extra 30 minutes is painful). How come they don’t get it? I want to join their loyalty program so I can get another trip with them like I want to have my teeth drilled. And then they spend so much time and energy trying to give you that free round trip ticket if you apply for their credit card – you know, the one that has annual fees and high interest rates. How come they don’t get it? Why can’t they just treat me like a loyal and valued customer, like someone they genuinely like and appreciate, like they’d like to be treated if they had to fly on someone else’s airline. Seems to me they just don’t get it.
Most of the good things in life are rooted in quality, trust and respect. People you work with and for, family that you live with and love, things you do for fun and relaxation, games you gladly play with others, friendships you’re lucky enough to have, clubs you join and actively participate in, activities you sign up for – they’re all based on the simple premise that things that are good are that way because they are genuinely good and fun and worthwhile. And that’s why you stick with them loyally.
But all these other kinds of loyalty programs are contrived. And yet we sign up for them like they’re free and worthwhile. They’re not free – we pay for the increased costs of these rewards. And they’re not worthwhile - we’re treated poorly by those who have the attitude that the cheap rewards they give are enough to overcome the thoughtless and robotic service they go through the motions of providing. Next time someone asks if I’ve signed up for their loyalty program I’m going to give them a tip: treat me nicely, treat me fairly, treat me respectfully, act like you really do care, thank me like you really mean it and treat me like you really do want me as a customer – and I’ll come back as often as I can or need to, willingly and freely. When are all these marketing geniuses going to wake up? When are they going to be ‘true blue’ to the Golden Rule?
My message this week is about how excellence can lead to greatness:
”If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work.” -Thomas J. Watson
Thomas John Watson, Sr. (1874 – 1956) was president of International Business Machines (IBM) and oversaw that company's growth into a global force from 1914 to 1956. Watson developed IBM's distinctive management style and corporate culture, and turned the company into a highly-effective selling organization. He was called the world's greatest salesman.
Do you want to achieve excellence? Some people don’t – they’re content to work alongside others, doing just enough to get by and satisfy their basic needs, content to have a few toys, take life easy and not make waves. But is that what you want – would that be enough for you? If not, then you’ve got to decide right now to start going farther, looking to help others, caring more, trying harder, and being more of what you can be today. You’ve got to take it to the next level – in commitment, in energy, in enthusiasm, in being a role model, in paying closer attention to details, in always striving to do and be all that you’re capable of. As of this second, you’ve got to quit doing less-than-excellent work. That’s how YOU can achieve excellence - (note: the emphasis is on YOU)!
at 5:34 AM
Friday, September 16, 2011
Where were you on 9/11? For most of us the answers are permanently etched in our minds. Like the attack on Pearl Harbor and VE Day for our parents, or the moment John Kennedy was shot or Armstrong set foot on the moon for the baby boomers, 9/11 has become one of the iconic moments in time for all who were alive then.
I remember exactly where I was, what I was doing, who told me and how I felt the day Kennedy was killed; and like most people I was watching on our little black and white TV when Ruby shot Oswald the next day. I remember my teacher bringing me into the assembly hall to watch when Armstrong took “one small step for man, one giant leap for mankind”. There have been literally trillions of moments in my life, but these iconic ones stand out, frozen in time and in my mind. And then there was 9/11.
In these weekly blogs I try to write about things that catch my attention. These stories tend to take on meanings beyond the specific incidents I mention, meanings that relate to life’s larger issues and that can possibly teach us something. But this one goes way beyond any of the moments and incidents that caught my attention - 9/11 caught the attention of everyone on the planet. There aren’t many things that reach that level, things that stop time, that leave indelible memories about where we were and who we were with, that immediately bring back visceral feelings and emotions of a long ago but clearly remembered moment in time. 9/11 does all of those things and more.
My wife and I were in NYC: preparing to get on the George Washington Bridge to go into Manhattan when the first plane hit; coming to a complete stop on the road and in our lives; watching in fear and confusion as the second plane hit; staring in horror as first one and then the other building fell; hearing about the other plane crashes in Washington and Pennsylvania; staying glued to the radio and then the television while the world stood still.
We drove away from the City that day in fear and confusion – trying to get as far away as possible and to make sense of how and why this happened. As we drove we came upon a rise in the road where all the cars were stopped; people were standing beside their cars and looking back in the direction we came from, so we stopped too. In the distance there was smoke where the towers so recently stood; nobody was talking; everyone was crying. We eventually made it to our home in the Adirondack Mountains, safe and overwhelmed by the fear and confusion that enveloped the world as we knew it. I can see and feel that day now as if it were yesterday. I guess that’s what an iconic moment is: something we remember – clearly and forever.
And now, in what seems like no time at all, ten years have passed and the memorial to those killed has been unveiled. The reading of the names this past Sunday stopped and stunned us all over again. The tolling of the bells in New York, Washington and Shanksville brought us back to that moment in time. The sight of the grieving families and friends as they touched and etched the names of their fathers, brothers, mothers, sisters, relatives and friends brought us together now as we were back then. The pettiness and partisanship that dominates the news was pushed aside for just a moment as we all stood in solemn and shared tribute to something that transcended all the comparatively meaningless stuff that normally seeks to grab our attention. As sad as the memories are, the togetherness helps us get through the memories now like it did when this terrible tragedy first happened. Why can’t we make that feeling last?
A man named Al DiLascia from Chicopee, Mass. wrote a letter to the editor of the New York Times this week that summed this up:
For one brief moment on September 11, 2011, time seemed to stand still. People sought family members and recognized the importance of family. Acts of charity were plentiful. There was an assessment of life and what is really important. Places of worship were full. People unashamedly prayed. For one brief moment...
Let’s try to remember – not just the events that make up these iconic moments, but what they really mean, and what’s really important. Don’t let a day pass that you don’t tell those you love how much you care and to show it in thoughtful and meaningful ways, to touch the people and things that are most important to you, to reach out and give to those in need, and to quietly count and give thanks for all the blessings that are in your life. Do whatever you have to do to make the meaning of your iconic moments last!
My message this week is about being loyal to the people and things that are important in your life:
“Loyalty is something you give regardless of what you get back, and in giving loyalty, you're getting more loyalty; and out of loyalty flow other great qualities.”
Colonel Charles Edward ("Chuck") Jones (1952 – 2001) was a United States Air Force officer, a computer programmer, and an astronaut in the USAF Manned Spaceflight Engineer Program. He was killed in the attacks of September 11, 2001 aboard American Airlines Flight 11, the first plane to hit the first World Trade Center building at 8:46am.
All of the great values we read and write about seem to be interconnected, and loyalty may be the one at the hub of them all. Think of the people and things you’re loyal to, and then note the other great qualities that come from that loyalty. Friendship, success, pride, humility, professionalism, integrity, team spirit and passion are a few that immediately come to mind. These are the qualities and values that you hope to find in others, and certainly they’re the ones to which you should always aspire. But to get loyalty you need to give it, and that means you must be true to your work and family and friends, forgiving in your nature, humble in your approach to others, sincere in your dealings with all, and understanding in the complex and competitive world that we live in. Look for ways to give loyalty today without attaching any strings for reciprocity. And don’t be surprised if you then start to get loyalty and all the other great qualities flowing back to you in return.
Stay well. And please say a prayer for these heroes and all the others in your life who’ve passed.
at 6:20 AM
Friday, September 9, 2011
Vacation homes in the Adirondacks are commonly referred to as camps – my family is fortunate to have one and, as you know from some of my previous blogs, we’ve spent a lot of time there this year. These are not to be confused with day and overnight camps that parents send their kids to. This is about the second kind of camp.
I went to an overnight camp as a kid and loved it, but that’s a story for another time. This tale begins at Camp Nazareth (that’s the name of the overnight camp at the end of our lake). It’s run by the local Catholic Diocese, which has had little success in recent years attracting enough kids. More often than not, this wonderful facility – it can hold up to 300 kids at any one time – is terribly underused. Fortunately, it seems that they’ve now discovered ways to attract alternate users like family reunions, corporate retreats and, just this past week, a high school crew team (Google “rowing sport” to learn more about this sport on Wikipedia). And that crew team caught our attention.
Our family’s camp (we call it “The Point”) is on the water and we can easily see when anyone is on the lake. While sitting on our dock one morning we were surprised to see this crew team go by. If you’ve never seen a crew team before, they operate in long narrow boats (like large kayaks) that are referred to as “sculls” – these are two to eight-person boats that are rowed by that many team members, each of whom operates one oar. In this case, there were two eight-person sculls (one with all men and the other all women) that were practicing. Mind you, this is not an everyday sight – there are a few motorboats and a lot of canoes and kayaks on our lake, so the sight of these two sculls was a bit of a surprise. Alongside these two sculls was a small motorboat in which sat the coach who had a megaphone and was giving instructions and commands. On the first day of what appeared to be one of their initial practice sessions, these two sculls were having what was obviously some beginner’s training. And here’s another key bit of information: the team has to row in very close order for the boat to move along smoothly. If any of the rowers is out of synch (even a little) the boat can very easily (and visibly) miss a beat. And if any of those misses are overly pronounced the boats can stop altogether or even capsize. So at the beginning of this training the coach definitely wanted to take it slow.
As the week progressed, however, the boats began to move more smoothly, and over time they got smoother and faster. And since the object of crew is to beat the competition, smooth and fast is definitely better. In order to get smoother and faster, the individual team members all have to practice at learning not only how to improve their own skills but also how to be in better synch with all the other members of their team. In crew, as in so many other aspects of life, both are critical (as in one without the other is not worth much).
As we watched this unfold before us, we started to reflect on how the basic lessons being learned out on the lake apply to just about everything we do in life (and here I need to confess that my wife realized this before I did). Being effective and functional at anything – playing with friends on the school yard, getting along as a family, working with colleagues, participating on a sports team, singing in a choir, building something with others, participating in community events – really is about learning how to improve your own skills while also performing in concert with others. Learning anything alone is one thing, learning it together and then interacting with others is a whole different thing. The key to life is learning both, because one without the other is really not worth much. And here was a live metaphor for this right on the lake in front of us – and just like that my whole professional life flashed before me as I watched this training unfold.
Each of these young athletes was working hard to learn how to be the best they could be, they and their teammates were learning how to interact with each other more effectively, the coaches were seeing the results of their hard work and practice, and those of us on the sidelines were rewarded by seeing how things can and should work when effective instructions, practice and coaching all come together. We don’t often get to see things so clearly, or watch the rituals of cause and effect play out so plainly. Simply put: this was a real lesson about life. And, in part because of where we were, and also because of what we saw and then realized, we were again moved to exclaim “that’s the Point!”
My message this week is about finding things you can be passionate about, because they define who and what you are.
“I know that I have found fulfillment. I have an object in life, a task ... a passion.”
Amantine Lucile Aurore Dupin, later Baroness Dudevant (1804 – 1876), best known by her pseudonym George Sand, was a French novelist and memoirist.
Have you found fulfillment? Not just a momentary or fleeting sense of accomplishment, but a lasting and on-going feeling that “this is it”. We all do lots of little and mostly disconnected things – chores, work, hobbies – and these achieve short-term goals or complete individual assignments. But every now and then one big thing comes along that is more about defining our style or purpose, and these make us who and what we are. Now it could be a car or a job – those certainly say a lot about you. But to find fulfillment – to know that something is really about the “you” that is truly you – that’s a real find. And that’s the kind of thing that passion is truly built upon. Something you love deeply, that you can’t stop thinking about, that you can’t wait to get up and do each day, and that you truly care more about than almost anything else. That’s the kind of passion that is truly a treasure – and that’s the kind of object in life that you want to be on the lookout for – today and every day. That’s the Point!
at 5:14 AM
Friday, September 2, 2011
Last week was something else – an earthquake and a hurricane and tornados and sunshine and hot and cold… I'm having trouble remembering where I am.
I grew up in upstate New York and experienced four distinct seasons each year – but there were no earthquakes or tornados. I later moved to Nevada for nearly a quarter century and experienced dry heat – but there were never any hurricanes or tornados. I then moved to the beaches of California where the sun shines 300+ days a year, the temperature rarely gets above 75 and earthquakes and wild fires are a nuisance – but there are no tornados or hurricanes. And now I’m back in New York (city and upstate) and just about everything but wild fires has hit here in the past 8 months. What’s going on?
I didn’t own a winter coat – and the record snow falls and cold last winter drove me to Land’s End with a singleness of purpose. I didn’t own boots or an umbrella, and the wet snow and rains taught me a lot about what it means to stay dry. I’m used to driving wherever I want to go and not having a car here to help navigate through the varying weather patterns has made me a fan of the Weather Channel. I never thought about the weather, never worried about what I’d wear or looked at the skies for clues to what’s coming, and now that the weather changes in the blink of an eye I am obsessed with meteorology.
But last week, depending where you were in the path of all this weather, meteorologists either got it right, mostly right, or wrong. Hey – they’re human so maybe we shouldn’t hold them to such a high standard as always being right. I mean, is anybody always right? Maybe we should take what they say and apply some old fashioned lore to this inexact science – such as:
Red sky at night, sailor's delight,
Red sky in the morning, sailors take warning.
When the wind is blowing in the North
No fisherman should set forth,
When the wind is blowing in the East,
'Tis not fit for man nor beast,
When the wind is blowing in the South
It brings the food over the fish's mouth,
When the wind is blowing in the West,
That is when the fishing's best!
When halo rings the moon or sun, rain's approaching on the run.
When windows won't open, and the salt clogs the shaker,
The weather will favor the umbrella maker!
No weather is ill, if the wind be still.
When sounds travel far and wide,
A stormy day will betide.
If clouds move against the wind, rain will follow.
A coming storm your shooting corns presage,
And aches will throb, your hollow tooth will rage.
I wouldn’t normally be thinking about these things, but all this crazy weather has me spooked. Is it global warming or just the fact that weather seems unpredictable? Were the winters way more intense when we were kids, or did it just seem that way because we were kids? Can weather really be predicted correctly all the time by these meteorologists, or should we take what they say with a “grain of salt”? Or should we rely more on our own common sense as aided by some of these old fashioned sayings?
Here in New York last week the mayor and the meteorologists got it wrong – but not by much. The winds blew and the rains fell and, though there was less flooding and damage than predicted here, they made damn sure we were prepared by scaring the daylights out of us with their dire warnings. Now some people are complaining because they scared us; but those same people complained when they didn’t scare us before last winter’s massive snow storm, or that they didn’t scare others enough before Katrina.
Fact is, lots of people are never happy, especially if they’re inconvenienced. But potentially saving lives is better than trying to apologize for not saving lives: isn’t that what ‘better safe than sorry’ is all about? Maybe we expect too much from the elected officials who we don’t really like or trust anyways (especially when they are inconveniencing us). I guess they’re damned if they do and damned if they don’t. I’ve even read some editorials about how this should make us either for or against big government. Come on, it was just a storm. And even though lots of people got flooded out, and there was lots of damage to homes and fields and trees and power lines, and lots of high water and wind, I’m relieved because it was less than predicted here on my street. I’m really sad for those to whom it was as much or more than predicted. And even though I don’t blame anyone, I sure as hell would like to know what all this crazy weather means, and whether a red sky at night really does mean a sailor’s delight?
My message this week is about loyalty, and whether we need to think about how loyal we are to others and how loyal we need to be to ourselves:
“Loyalty to petrified opinion never yet broke a chain or freed a human soul.” -Mark Twain
Mark Twain achieved great success as a writer and public speaker. His wit and satire earned praise from critics and peers, and he was a friend to presidents, artists, industrialists, and European royalty.
Loyalty can be both good and bad. People often remain loyal long after the reason for doing so has ended. If the reason you became loyal has petrified then you need to re-examine your motives and goals; you need to break free when the times demand it and it’s the right thing to do. Loyalty should be given to the best ideas, the highest principles, the most ethical leaders, the greatest challenges, and to the most extraordinary opportunities. But sometimes we remain loyal just because we are afraid to appear disloyal or we’re afraid to re-examine that loyalty. This conflict can be a Catch 22, or it can be a moment of re-commitment and rebirth. And just like a plant that’s been sitting for a long time, it’s a good idea to re-pot our beliefs to make sure that our roots continue to grow deeper and stronger. So look at your loyalties today and make sure they’re where they should be.
Stay warm, dry and well!
at 5:36 AM
|
<urn:uuid:bdc6dcf7-611f-45db-9e3c-f35343ffd87a>
|
CC-MAIN-2013-20
|
http://www.thearteofmotivation.blogspot.com/2011_09_01_archive.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.968623
| 7,071
| 2.859375
| 3
|
[
"climate"
] |
{
"climate": [
"global warming"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Acidosis is a condition in which there is excessive acid in the body fluids. It is the opposite of alkalosis (a condition in which there is excessive base in the body fluids).
Causes, incidence, and risk factors:
The kidneys and lungs maintain the balance (proper pH level) of chemicals called acids and bases in the body. Acidosis occurs when acid builds up or when bicarbonate (a base) is lost. Acidosis is classified as either respiratory acidosis or metabolic acidosis.
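The balance those two organs maintain can be summarised by the Henderson-Hasselbalch relation for the bicarbonate buffer system (a standard physiology equation, added here for illustration rather than taken from the original article):
pH = 6.1 + log10( [HCO3-] / (0.03 × PaCO2) )
where bicarbonate [HCO3-] is measured in mEq/L and the partial pressure of carbon dioxide, PaCO2, in mmHg. A drop in bicarbonate (the metabolic side of the ratio) or a rise in carbon dioxide (the respiratory side) both push the pH downward toward acidosis.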
Respiratory acidosis develops when there is too much carbon dioxide (an acid) in the body. This type of acidosis is usually caused by a decreased ability to remove carbon dioxide from the body through effective breathing. Other names for respiratory acidosis are hypercapnic acidosis and carbon dioxide acidosis. Causes of respiratory acidosis include:
- Chest deformities, such as kyphosis
- Chest injuries
- Chest muscle weakness
- Chronic lung disease
- Overuse of sedative drugs
Metabolic acidosis develops when too much acid is produced or when the kidneys cannot remove enough acid from the body. There are several types of metabolic acidosis:
- Diabetic acidosis (also called diabetic ketoacidosis and DKA) develops when substances called ketone bodies (which are acidic) build up during uncontrolled diabetes.
- Hyperchloremic acidosis results from excessive loss of sodium bicarbonate from the body, as can happen with severe diarrhea.
- Lactic acidosis is a buildup of lactic acid. This can be caused by:
- Exercising vigorously for a very long time
- Liver failure
- Low blood sugar (hypoglycemia)
- Medications such as salicylates
- Prolonged lack of oxygen from shock, heart failure, or severe anemia
Other causes of metabolic acidosis include:
Signs and tests:
- Arterial or venous blood gas analysis
- Serum electrolytes
- Urine pH
An arterial blood gas analysis or serum electrolytes test, such as a basic metabolic panel, will confirm that acidosis is present and indicate whether it is metabolic acidosis or respiratory acidosis. Other tests may be needed to determine the cause of the acidosis.
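As a rough illustration of how these values separate the two categories, the short sketch below applies commonly cited adult reference ranges (pH 7.35-7.45, PaCO2 35-45 mmHg, bicarbonate 22-26 mEq/L). It is for illustration only, not clinical guidance, and the function name is hypothetical.

def classify_blood_gas(ph, paco2_mmhg, hco3_meq_l):
    # Hypothetical helper: labels an arterial blood gas result using
    # typical adult reference values. Real interpretation requires a clinician.
    if ph >= 7.35:
        return "pH not below the normal range - no acidosis"
    if paco2_mmhg > 45 and hco3_meq_l >= 22:
        return "respiratory acidosis (carbon dioxide retention)"
    if hco3_meq_l < 22 and paco2_mmhg <= 45:
        return "metabolic acidosis (bicarbonate loss or acid gain)"
    return "mixed or compensated disorder - needs clinical review"

# Example: pH 7.25 with PaCO2 60 mmHg and bicarbonate 24 mEq/L
print(classify_blood_gas(7.25, 60, 24))  # -> respiratory acidosis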
Treatment depends on the cause. See the specific types of acidosis.
Acidosis can be dangerous if untreated. Many cases respond well to treatment.
See the specific types of acidosis.
Calling your health care provider:
Although there are several types of acidosis, all will cause symptoms that require treatment by your health care provider.
Prevention depends on the cause of the acidosis. Normally, people with healthy kidneys and lungs do not experience significant acidosis.
Seifter JL. Acid-base disorders. In: Goldman L, Ausiello D, eds. Cecil Medicine. 23rd ed. Philadelphia, Pa: Saunders Elsevier; 2007:chap 119.
Review Date: 11/15/2009
Reviewed By: David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. Call 911 for all medical emergencies. Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. © 1997-
A.D.A.M., Inc. Any duplication or distribution of the information contained herein is strictly prohibited.
|
<urn:uuid:56f207a8-441d-4c6f-8ce6-55bdb1401bf6>
|
CC-MAIN-2013-20
|
http://www.texashealth.org/well-being_template_connected-inner.cfm?id=5351&action=detail&AEProductID=Adam2004_1&AEArticleID=001181&AEArticleType=Disease
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.881313
| 782
| 3.9375
| 4
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Part 2 - Those Who Are Unable to See the Fact of Creation
The theory of evolution is a philosophy and a conception of the world that produces false hypotheses, assumptions and imaginary scenarios in order to explain the existence and origin of life in terms of mere coincidences. The roots of this philosophy go back as far as antiquity and ancient Greece.
All atheist philosophies that deny creation, directly or indirectly embrace and defend the idea of evolution. The same condition today applies to all the ideologies and systems that are antagonistic to religion.
The evolutionary notion has been cloaked in a scientific disguise for the last century and a half in order to justify itself. Though put forward as a supposedly scientific theory during the mid-19th century, the theory, despite all the best efforts of its advocates, has not so far been verified by any scientific finding or experiment. Indeed, the "very science" on which the theory depends so greatly has demonstrated and continues to demonstrate repeatedly that the theory has no merit in reality.
Laboratory experiments and probabilistic calculations have definitely made it clear that the amino acids from which life arises cannot have been formed by chance. The cell, which supposedly emerged by chance under primitive and uncontrolled terrestrial conditions according to evolutionists, still cannot be synthesised even in the most sophisticated, high-tech laboratories of the 20th century. Not a single "transitional form", creatures which are supposed to show the gradual evolution of advanced organisms from more primitive ones as neo-Darwinist theory claims, has ever been found anywhere in the world despite the most diligent and prolonged search in the fossil record.
In their attempts to gather evidence for evolution, evolutionists have unwittingly proven by their own efforts that evolution cannot have happened at all!
The person who originally put forward the theory of evolution, essentially in the form that it is defended today, was an amateur English biologist by the name of Charles Robert Darwin. Darwin first published his ideas in a book entitled The Origin of Species by Means of Natural Selection in 1859. Darwin claimed in his book that all living beings had a common ancestor and that they evolved from one another by means of natural selection. Those that best adapted to the habitat transferred their traits to subsequent generations, and by accumulating over great epochs, these advantageous qualities transformed individuals into totally different species from their ancestors. The human being was thus the most developed product of the mechanism of natural selection. In short, the origin of one species was another species.
Darwin's fanciful ideas were seized upon and promoted by certain ideological and political circles and the theory became very popular. The main reason was that the level of knowledge of those days was not yet sufficient to reveal that Darwin's imaginary scenarios were false. When Darwin put forward his assumptions, the disciplines of genetics, microbiology, and biochemistry did not yet exist. If they had, Darwin might easily have recognised that his theory was totally unscientific and thus would not have attempted to advance such meaningless claims: the information determining species already exists in the genes and it is impossible for natural selection to produce new species by altering genes.
While the echoes of Darwin's book reverberated, an Austrian botanist by the name of Gregor Mendel discovered the laws of inheritance in 1865. Although little known before the end of the century, Mendel's discovery gained great importance in the early 1900s with the birth of the science of genetics. Some time later, the structures of genes and chromosomes were discovered. The discovery, in the 1950s, of the DNA molecule, which incorporates genetic information, threw the theory of evolution into a great crisis, because the origin of the immense amount of information in DNA could not possibly be explained by coincidental happenings.
Besides all these scientific developments, no transitional forms, which were supposed to show the gradual evolution of living organisms from primitive to advanced species, have ever been found despite years of search.
These developments ought to have resulted in Darwin's theory being banished to the dustbin of history. However, it was not, because certain circles insisted on revising, renewing, and elevating the theory to a scientific platform. These efforts gain meaning only if we realise that behind the theory lie ideological intentions rather than scientific concerns.
Nevertheless, some circles that believed in the necessity of upholding a theory that had reached an impasse soon set up a new model. The name of this new model was neo-Darwinism. According to this theory, species evolved as a result of mutations, minor changes in their genes, and the fittest ones survived through the mechanism of natural selection. When, however, it was proved that the mechanisms proposed by neo-Darwinism were invalid and minor changes were not sufficient for the formation of living beings, evolutionists went on to look for new models. They came up with a new claim called "punctuated equilibrium" that rests on no rational or scientific grounds. This model held that living beings suddenly evolved into another species without any transitional forms. In other words, species with no evolutionary "ancestors" suddenly appeared. This was a way of describing creation, though evolutionists would be loath to admit this. They tried to cover it up with incomprehensible scenarios. For instance, they said that the first bird in history could all of a sudden inexplicably have popped out of a reptile egg. The same theory also held that carnivorous land-dwelling animals could have turned into giant whales, having undergone a sudden and comprehensive transformation.
These claims, totally contradicting all the rules of genetics, biophysics, and biochemistry, are as scientific as fairy-tales of frogs turning into princes! Nevertheless, being distressed by the crisis that the neo-Darwinist assertion was in, some evolutionist paleontologists embraced this theory, which has the distinction of being even more bizarre than neo-Darwinism itself.
The sole purpose of this model was to provide an explanation for the gaps in the fossil record that the neo-Darwinist model could not explain. However, it is hardly rational to attempt to explain the gap in the fossil record of the evolution of birds with a claim that "a bird popped all of a sudden out of a reptile egg", because, by the evolutionists' own admission, the evolution of a species to another species requires a great and advantageous change in genetic information. However, no mutation whatsoever improves the genetic information or adds new information to it. Mutations only derange genetic information. Thus, the "gross mutations" imagined by the punctuated equilibrium model, would only cause "gross", that is "great", reductions and impairments in the genetic information.
The theory of punctuated equilibrium was obviously merely a product of the imagination. Despite this evident truth, the advocates of evolution did not hesitate to honour this theory. The fact that the model of evolution proposed by Darwin could not be proved by the fossil record forced them to do so. Darwin claimed that species underwent a gradual change, which necessitated the existence of half-bird/half-reptile or half-fish/half-reptile freaks. However, not even one of these "transitional forms" was found despite the extensive studies of evolutionists and the hundreds of thousands of fossils that were unearthed.
Evolutionists seized upon the model of punctuated equilibrium with the hope of concealing this great fossil fiasco. As we have stated before, it was very evident that this theory is a fantasy, so it very soon consumed itself. The model of punctuated equilibrium was never put forward as a consistent model, but rather used as an escape in cases that plainly did not fit the model of gradual evolution. Since evolutionists today realise that complex organs such as eyes, wings, lungs, brain and others explicitly refute the model of gradual evolution, in these particular points they are compelled to take shelter in the fantastic interpretations of the model of punctuated equilibrium.
Is there any Fossil Record to Verify the Theory of Evolution?
The theory of evolution argues that the evolution of a species into another species takes place gradually, step-by-step over millions of years. The logical inference drawn from such a claim is that monstrous living organisms called "transitional forms" should have lived during these periods of transformation. Since evolutionists allege that all living things evolved from each other step-by-step, the number and variety of these transitional forms should have been in the millions.
If such creatures had really lived, then we should see their remains everywhere. In fact, if this thesis is correct, the number of intermediate transitional forms should be even greater than the number of animal species alive today and their fossilised remains should be abundant all over the world.
Since Darwin, evolutionists have been searching for fossils and the result has been for them a crushing disappointment. Nowhere in the world – neither on land nor in the depths of the sea – has any intermediate transitional form between any two species ever been uncovered.
Darwin himself was quite aware of the absence of such transitional forms. It was his greatest hope that they would be found in the future. Despite his hopefulness, he saw that the biggest stumbling block to his theory was the missing transitional forms. This is why, in his book The Origin of Species, he wrote:
Darwin was right to be worried. The problem bothered other evolutionists as well. A famous British paleontologist, Derek V. Ager, admits this embarrassing fact:
The gaps in the fossil record cannot be explained away by the wishful thinking that not enough fossils have yet been unearthed and that these missing fossils will one day be found. Another evolutionist paleontologist, T. Neville George, explains the reason:
Life Emerged on Earth Suddenly and in Complex Forms
When terrestrial strata and the fossil record are examined, it is seen that living organisms appeared simultaneously. The oldest stratum of the earth in which fossils of living creatures have been found is that of the "Cambrian", which has an estimated age of 530-520 million years.
Living creatures that are found in the strata belonging to the Cambrian period emerged in the fossil record all of a sudden without any pre-existing ancestors. The vast mosaic of living organisms, made up of such great numbers of complex creatures, emerged so suddenly that this miraculous event is referred to as the "Cambrian Explosion" in scientific literature.
Most of the organisms found in this stratum have highly advanced organs like eyes, or systems seen in organisms with a highly advanced organisation such as gills, circulatory systems, and so on. There is no sign in the fossil record to indicate that these organisms had any ancestors. Richard Monestarsky, the editor of Earth Sciences magazine, states about the sudden emergence of living species:
Not being able to find answers to the question of how earth came to overflow with thousands of different animal species, evolutionists posit an imaginary period of 20 million years before the Cambrian Period to explain how life originated and "the unknown happened". This period is called the "evolutionary gap". No evidence for it has ever been found and the concept is still conveniently nebulous and undefined even today.
In 1984, numerous complex invertebrates were unearthed in Chengjiang, set in the central Yunnan plateau in the high country of southwest China. Among them were trilobites, now extinct, but no less complex in structure than any modern invertebrate.
The Swedish evolutionist paleontologist, Stefan Bengston, explains the situation as follows:
The sudden appearance of these complex living beings with no predecessors is no less baffling (and embarrassing) for evolutionists today than it was for Darwin 135 years ago. In nearly a century and a half, they have advanced not one step beyond the point that stymied Darwin.
As may be seen, the fossil record indicates that living things did not evolve from primitive to advanced forms, but instead emerged all of a sudden and in a perfect state. The absence of the transitional forms is not peculiar to the Cambrian period. Not a single transitional form verifying the alleged evolutionary "progression" of vertebrates – from fish to amphibians, reptiles, birds, and mammals – has ever been found. Every living species appears instantaneously and in its current form, perfect and complete, in the fossil record.
In other words, living beings did not come into existence through evolution. They were created.
Deceptions in Drawings
The fossil record is the principal source for those who seek evidence for the theory of evolution. When inspected carefully and without prejudice, the fossil record refutes the theory of evolution rather than supporting it. Nevertheless, misleading interpretations of fossils by evolutionists and their prejudiced representation to the public have given many people the impression that the fossil record indeed supports the theory of evolution.
The susceptibility of some findings in the fossil record to all kinds of interpretations is what best serves the evolutionists' purposes. The fossils unearthed are most of the time unsatisfactory for reliable identification. They usually consist of scattered, incomplete bone fragments. For this reason, it is very easy to distort the available data and to use it as desired. Not surprisingly, the reconstructions (drawings and models) made by evolutionists based on such fossil remains are prepared entirely speculatively in order to confirm evolutionary theses. Since people are readily affected by visual information, these imaginary reconstructed models are employed to convince them that the reconstructed creatures really existed in the past.
Evolutionist researchers draw human-like imaginary creatures, usually setting out from a single tooth, or a mandible fragment or a humerus, and present them to the public in a sensational manner as if they were links in human evolution. These drawings have played a great role in the establishment of the image of "primitive men" in the minds of many people.
These studies based on bone remains can only reveal very general characteristics of the creature concerned. The distinctive details are present in the soft tissues that quickly vanish with time. With the soft tissues speculatively interpreted, everything becomes possible within the boundaries of the imagination of the reconstruction's producer. Earnst A. Hooten from Harvard University explains the situation like this:
Studies Made to Fabricate False Fossils
Unable to find valid evidence in the fossil record for the theory of evolution, some evolutionists have ventured to manufacture their own. These efforts, which have even been included in encyclopaedias under the heading "evolution forgeries", are the most telling indication that the theory of evolution is an ideology and a philosophy that evolutionists are hard put to defend. Two of the most egregious and notorious of these forgeries are described below.
Charles Dawson, a well-known doctor and amateur paleoanthropologist, came forth with a claim that he had found a jawbone and a cranial fragment in a pit in the area of Piltdown, England, in 1912. Although the skull was human-like, the jawbone was distinctly simian. These specimens were christened the "Piltdown Man". Alleged to be 500 thousand years old, they were displayed as absolute proofs of human evolution. For more than 40 years, many scientific articles were written on the "Piltdown Man", many interpretations and drawings were made and the fossil was presented as crucial evidence of human evolution.
In 1949, scientists examined the fossil once more and concluded that the "fossil" was a deliberate forgery consisting of a human skull and the jawbone of an orang-utan.
Using the fluorine dating method, investigators discovered that the skull was only a few thousand years old. The teeth in the jawbone, which belonged to an orang-utan, had been artificially worn down and the "primitive" tools that had conveniently accompanied the fossils were crude forgeries that had been sharpened with steel implements. In the detailed analysis completed by Oakley, Weiner and Clark, they revealed this forgery to the public in 1953. The skull belonged to a 500-year-old man, and the mandibular bone belonged to a recently deceased ape! The teeth were thereafter specially arranged in an array and added to the jaw and the joints were filed in order to make them resemble that of a man. Then all these pieces were stained with potassium dichromate to give them a dated appearance. (These stains disappeared when dipped in acid.) Le Gros Clark, who was a member of the team that disclosed the forgery, could not hide his astonishment:
In 1922, Henry Fairfield Osborn, the director of the American Museum of Natural History, declared that he had found a molar tooth fossil in western Nebraska near Snake Brook belonging to the Pliocene period. This tooth allegedly bore the common characteristics of both man and ape. Deep scientific arguments began in which some interpreted this tooth to be that of Pithecanthropus erectus while others claimed it was closer to that of modern human beings. This fossil, which aroused extensive debate, was popularly named "Nebraska Man". It was also immediately given a "scientific name": "Hesperopithecus Haroldcooki".
Many authorities gave Osborn their support. Based on this single tooth, reconstructions of Nebraska Man's head and body were drawn. Moreover, Nebraska Man was even pictured with a whole family.
In 1927, other parts of the skeleton were also found. According to these newly discovered pieces, the tooth belonged neither to a man nor to an ape. It was realised that it belonged to an extinct species of wild American pig called Prosthennops.
Did Men and Apes Come from a Common Ancestor?
According to the claims of the theory of evolution, men and modern apes have common ancestors. These creatures evolved in time and some of them became the apes of today, while another group that followed another branch of evolution became the men of today.
Evolutionists call the so-called first common ancestors of men and apes "Australopithecus" which means "South African ape". Australopithecus, nothing but an old ape species that has become extinct, has various types. Some of them are robust, while others are small and slight.
Evolutionists classify the next stage of human evolution as "Homo", that is "man". According to the evolutionist claim, the living beings in the Homo series are more developed than Australopithecus, and not very much different from modern man. The modern man of our day, Homo sapiens, is said to have formed at the latest stage of the evolution of this species.
The fact of the matter is that the beings called Australopithecus in this imaginary scenario fabricated by evolutionists really are apes that became extinct, and the beings in the Homo series are members of various human races that lived in the past and then disappeared. Evolutionists arranged various ape and human fossils in an order from the smallest to the biggest in order to form a "human evolution" scheme. Research, however, has demonstrated that these fossils by no means imply an evolutionary process and some of these alleged ancestors of man were real apes whereas some of them were real humans.
Now, let us have a look at Australopithecus, which represents to evolutionists the first stage of the scheme of human evolution.
Australopithecus: Extinct Apes
Evolutionists claim that Australopithecus are the most primitive ancestors of modern men. These are an old species with a head and skull structure similar to that of modern apes, yet with a smaller cranial capacity. According to the claims of evolutionists, these creatures have a very important feature that authenticates them as the ancestors of men: bipedalism.
The movements of apes and men are completely different. Human beings are the only living creatures that move freely about on two feet. Some other animals do have a limited ability to move in this way, but those that do have bent skeletons.
According to evolutionists, these living beings called Australopithecus had the ability to walk in a bent rather than an upright posture like human beings. Even this limited bipedal stride was sufficient to encourage evolutionists to project onto these creatures that they were the ancestors of man.
However, the first evidence refuting the allegations of evolutionists that Australopithecus were bipedal came from evolutionists themselves. Detailed studies made on Australopithecus fossils forced even evolutionists to admit that these looked "too" ape-like. Having conducted detailed anatomical research on Australopithecus fossils in the mid-1970s, Charles E. Oxnard likened the skeletal structure of Australopithecus to that of modern orang-utans:
What really embarrassed evolutionists was the discovery that Australopithecus could not have walked on two feet and with a bent posture. It would have been physically very ineffective for Australopithecus, allegedly bipedal but with a bent stride, to move about in such a way because of the enormous energy demands it would have entailed. By means of computer simulations conducted in 1996, the English paleoanthropologist Robin Crompton also demonstrated that such a "compound" stride was impossible. Crompton reached the following conclusion: a living being can walk either upright or on all fours. A type of in-between stride cannot be sustained for long periods because of the extreme energy consumption. This means that Australopithecus could not have been both bipedal and have a bent walking posture.
Probably the most important study demonstrating that Australopithecus could not have been bipedal came in 1994 from the research anatomist Fred Spoor and his team in the Department of Human Anatomy and Cellular Biology at the University of Liverpool, England. This group conducted studies on the bipedalism of fossilised living beings. Their research investigated the involuntary balance mechanism found in the cochlea of the ear, and the findings showed conclusively that Australopithecus could not have been bipedal. This precluded any claims that Australopithecus was human-like.
The Homo Series: Real Human Beings
The next step in the imaginary human evolution is "Homo", that is, the human series. These living beings are humans who are no different from modern men, yet who have some racial differences. Seeking to exaggerate these differences, evolutionists represent these people not as a "race" of modern man but as a different "species". However, as we will soon see, the people in the Homo series are nothing but ordinary human racial types.
According to the fanciful scheme of evolutionists, the internal imaginary evolution of the Homo species is as follows: First Homo erectus, then Homo sapiens archaic and Neanderthal Man, later Cro-Magnon Man and finally modern man.
Despite the claims of evolutionists to the contrary, all the "species" we have enumerated above are nothing but genuine human beings. Let us first examine Homo erectus, who evolutionists refer to as the most primitive human species.
The most striking evidence showing that Homo erectus is not a "primitive" species is the fossil of "Turkana Boy", one of the oldest Homo erectus remains. It is estimated that the fossil was of a 12-year-old boy, who would have been 1.83 meters tall in his adolescence. The upright skeletal structure of the fossil is no different from that of modern man. Its tall and slender skeletal structure totally complies with that of the people living in tropical regions in our day. This fossil is one of the most important pieces of evidence that Homo erectus is simply another specimen of the modern human race. Evolutionist paleontologist Richard Leakey compares Homo erectus and modern man as follows:
Leakey means to say that the difference between Homo erectus and us is no more than the difference between Negroes and Eskimos. The cranial features of Homo erectus resulted from their manner of feeding, from genetic emigration, and from their not assimilating with other human races for a lengthy period.
Another strong piece of evidence that Homo erectus is not a "primitive" species is that fossils of this species have been unearthed that are only twenty-seven thousand and even thirteen thousand years old. According to an article published in Time – which is not a scientific periodical, but nevertheless had a sweeping effect on the world of science – Homo erectus fossils aged twenty-seven thousand years were found on the island of Java. In the Kow Swamp in Australia, some thirteen-thousand-year-old fossils were found that bore Homo sapiens-Homo erectus characteristics. All these fossils demonstrate that Homo erectus continued living up to times very close to our day and was nothing but a human race that has since been buried in history.
Archaic Homo Sapiens and Neanderthal Man
Archaic Homo sapiens is the immediate forerunner of contemporary man in the imaginary evolutionary scheme. In fact, evolutionists do not have much to say about these men, as there are only minor differences between them and modern men. Some researchers even state that representatives of this race are still living today, and point to the Aborigines in Australia as an example. Like archaic Homo sapiens, the Aborigines also have thick protruding eyebrows, an inward-inclined mandibular structure, and a slightly smaller cranial volume. Moreover, significant discoveries have been made hinting that such people lived in Hungary and in some villages in Italy until not very long ago.
Evolutionists point to human fossils unearthed in the Neander Valley in Germany, which have been named Neanderthal Man. Many contemporary researchers define Neanderthal Man as a sub-species of modern man and call it "Homo sapiens neandertalensis". It is definite that this race lived together with modern humans, at the same time and in the same areas. The findings testify that Neanderthals buried their dead, fashioned musical instruments, and had cultural affinities with the Homo sapiens sapiens living during the same period. Entirely modern skulls and skeletal structures of Neanderthal fossils are not open to any speculation. A prominent authority on the subject, Erik Trinkaus of the University of New Mexico, writes:
In fact, Neanderthals even had some "evolutionary" advantages over modern men. The cranial capacity of Neanderthals was larger than that of modern man, and they were more robust and muscular than we are. Trinkaus adds: "One of the most characteristic features of the Neanderthals is the exaggerated massiveness of their trunk and limb bones. All of the preserved bones suggest a strength seldom attained by modern humans. Furthermore, not only is this robustness present among the adult males, as one might expect, but it is also evident in the adult females, adolescents, and even children."
To put it precisely, Neanderthals are a particular human race that assimilated with other races in time.
All of these factors show that the scenario of "human evolution" fabricated by evolutionists is a figment of their imaginations, and that men have always been men and apes always apes.
Can Life Result from Coincidences as Evolution Argues?
The theory of evolution holds that life started with a cell that formed by chance under primitive earth conditions. Let us therefore examine the composition of the cell with simple comparisons in order to show how irrational it is to ascribe the existence of the cell – a structure which still maintains its mystery in many respects, even at a time when we are about to set foot in the 21st century – to natural phenomena and coincidences.
With all its operational systems, systems of communication, transportation and management, a cell is no less complex than any city. It contains power stations producing the energy consumed by the cell, factories manufacturing the enzymes and hormones essential for life, a databank where all necessary information about all products to be produced is recorded, complex transportation systems and pipelines for carrying raw materials and products from one place to another, advanced laboratories and refineries for breaking down imported raw materials into their usable parts, and specialised cell membrane proteins for the control of incoming and outgoing materials. These constitute only a small part of this incredibly complex system.
Far from being formed under primitive earth conditions, the cell, which in its composition and mechanisms is so complex, cannot be synthesised in even the most sophisticated laboratories of our day. Even with the use of amino acids, the building blocks of the cell, it is not possible to produce so much as a single organelle of the cell, such as mitochondria or ribosome, much less a whole cell. The first cell claimed to have been produced by evolutionary coincidence is as much a figment of the imagination and a product of fantasy as the unicorn.
Proteins Challenge Coincidence
And it is not just the cell that cannot be produced: the formation, under natural conditions, of even a single protein of the thousands of complex protein molecules making up a cell is impossible.
Proteins are giant molecules consisting of amino acids arranged in a particular sequence in certain quantities and structures. These molecules constitute the building blocks of a living cell. The simplest is composed of 50 amino acids, but there are some proteins that are composed of thousands of amino acids. The absence, addition, or replacement of a single amino acid in the structure of a protein in living cells, each of which has a particular function, causes the protein to become a useless molecular heap. Incapable of demonstrating the "accidental formation" of amino acids, the theory of evolution founders on the point of the formation of proteins.
We can easily demonstrate, with simple probability calculations anybody can understand, that the functional structure of proteins can by no means come about by chance.
There are twenty different amino acids. If we consider that an average-sized protein molecule is composed of 288 amino acids, there are 10^300 different possible combinations of amino acids. Of all of these possible sequences, only "one" forms the desired protein molecule. The other amino-acid chains are either completely useless or else potentially harmful to living things. In other words, the probability of the coincidental formation of only the one protein molecule cited above is "1 in 10^300". The probability of this "1" occurring out of an "astronomical" number consisting of 1 followed by 300 zeros is for all practical purposes zero; it is impossible. Furthermore, a protein molecule of 288 amino acids is rather a modest one compared with some giant protein molecules consisting of thousands of amino acids. When we apply similar probability calculations to these giant protein molecules, we see that even the word "impossible" becomes inadequate.
If the coincidental formation of even one of these proteins is impossible, it is billions of times more impossible for approximately one million of those proteins to come together by chance in an organised fashion and make up a complete human cell. Moreover, a cell is not merely a collection of proteins. In addition to proteins, cells also include nucleic acids, carbohydrates, lipids, vitamins, and many other chemicals such as electrolytes, all of which are arranged harmoniously and with design in specific proportions, both in terms of structure and function. Each functions as a building block or component in various organelles.
As we have seen, evolution is unable to explain the formation of even a single protein out of the millions in the cell, let alone explain the cell.
Prof. Dr. Ali Demirsoy, one of the foremost authorities of evolutionist thought in Turkey, in his book Kalitim ve Evrim (Inheritance and Evolution), discusses the probability of the accidental formation of Cytochrome-C, one of the essential enzymes for life:
After these lines, Demirsoy admits that this probability, which he accepted just because it was "more appropriate to the goals of science", is unrealistic:
The correct sequence of proper amino acids is simply not enough for the formation of one of the protein molecules present in living things. Besides this, each of the twenty different types of amino acid present in the composition of proteins must be left-handed. Chemically, there are two different types of amino acids, called "left-handed" and "right-handed". The difference between them is the mirror symmetry between their three-dimensional structures, which is similar to that of a person's right and left hands. Amino acids of either of these two types are found in equal numbers in nature, and they can bond perfectly well with one another. Yet research uncovers an astonishing fact: all proteins present in the structure of living things are made up of left-handed amino acids. Even a single right-handed amino acid attached to the structure of a protein renders it useless.
Let us for an instant suppose that life came into existence by chance as evolutionists claim. In this case, the right- and left-handed amino acids that were generated by chance should be present in nature in roughly equal amounts. The question of how proteins can pick out only left-handed amino acids, and how not even a single right-handed amino acid becomes involved in the life process, is something that still confounds evolutionists. In the Britannica Science Encyclopaedia, an ardent defender of evolution, the authors indicate that the amino acids of all living organisms on earth, and the building blocks of complex polymers such as proteins, have the same left-handed asymmetry. They add that this is tantamount to tossing a coin a million times and always getting heads. In the same encyclopaedia, they state that it is not possible to understand why molecules become left-handed or right-handed, and that this choice is fascinatingly related to the source of life on earth.13
It is not enough for amino acids to be arranged in the correct numbers, sequences, and in the required three-dimensional structures. The formation of a protein also requires that amino acid molecules with more than one arm be linked to each other only through certain arms. Such a bond is called a "peptide bond". Amino acids can make different bonds with each other, but proteins comprise those and only those amino acids that join together by "peptide" bonds.
Research has shown that only 50% of amino acids combining at random combine with a peptide bond, and that the rest combine with different bonds that are not present in proteins. To function properly, each amino acid making up a protein must join with other amino acids with a peptide bond, just as it has to be chosen only from among the left-handed ones. Unquestionably, there is no control mechanism to select and leave out the right-handed amino acids and personally make sure that each amino acid makes a peptide bond with the others.
Under these circumstances, the probabilities of an average protein molecule comprising five hundred amino acids arranging itself in the correct quantities and sequence, in addition to the probabilities of all of the amino acids it contains being only left-handed and combining using only peptide bonds, are as follows:
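The arithmetic behind the figure quoted in the next paragraph can be sketched in a few lines of code. The snippet below is only an illustration of the text's own simplifying assumptions (a 1-in-20 chance of the "correct" amino acid at each position, a 1-in-2 chance of a left-handed form, and a 1-in-2 chance of a peptide bond at each link); it is not a biochemical model.

```python
# Order-of-magnitude sketch of the probability argument described above,
# using base-10 logarithms so the huge exponents stay manageable.
from math import log10

n_residues = 500                 # length of the hypothetical protein
n_amino_acid_types = 20          # standard amino acids

# Probability of the "correct" amino acid at every position: (1/20)^500
log_p_sequence = -n_residues * log10(n_amino_acid_types)       # about -650

# Probability that every residue is left-handed: (1/2)^500
log_p_left_handed = -n_residues * log10(2)                     # about -151

# Probability that every one of the 499 links is a peptide bond: (1/2)^499
log_p_peptide = -(n_residues - 1) * log10(2)                   # about -150

log_p_total = log_p_sequence + log_p_left_handed + log_p_peptide
print(f"combined probability: about 1 in 10^{-log_p_total:.0f}")
# Prints roughly 951, i.e. the "1 followed by about 950 zeros" figure
# quoted in the paragraph that follows.
```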
As you can see above, the probability of the formation of a protein molecule comprising five hundred amino acids is "1" divided by a number formed by placing 950 zeros after a 1, a number incomprehensible to the human mind. This is only a probability on paper. Practically, such a possibility has "0" chance of realisation. In mathematics, a probability smaller than 1 over 10^50 is statistically considered to have a "0" probability of realisation.
While the improbability of the formation of a protein molecule made up of five hundred amino acids reaches such an extent, we can further proceed to push the limits of the mind to higher levels of improbability. In the "haemoglobin" molecule, a vital protein, there are five hundred and seventy-four amino acids, which is a much larger number than that of the amino acids making up the protein mentioned above. Now consider this: in only one out of the billions of red blood cells in your body, there are "280,000,000" (280 million) haemoglobin molecules. The supposed age of the earth is not sufficient to afford the formation of even a single protein, let alone a red blood cell, by the method of "trial and error". The conclusion from all this is that evolution falls into a terrible abyss of improbability right at the stage of the formation of a single protein.
Looking for Answers to the Generation of Life
Well aware of the terrible odds against the possibility of life forming by chance, evolutionists were unable to provide a rational explanation for their beliefs, so they set about looking for ways to demonstrate that the odds were not so unfavourable.
They designed a number of laboratory experiments to address the question of how life could generate itself from non-living matter. The best known and most respected of these experiments is the one known as the "Miller Experiment" or "Urey-Miller Experiment", which was conducted by the American researcher Stanley Miller in 1953.
With the purpose of proving that amino acids could have come into existence by accident, Miller created an atmosphere in his laboratory that he assumed would have existed on primordial earth (but which later proved to be unrealistic) and he set to work. The mixture he used for this primordial atmosphere was composed of ammonia, methane, hydrogen, and water vapour.
Miller knew that methane, ammonia, water vapour and hydrogen would not react with each other under natural conditions. He was aware that he had to inject energy into the mixture to start a reaction. He suggested that this energy could have come from lightning flashes in the primordial atmosphere and, relying on this supposition, he used an artificial electricity discharge in his experiments.
Miller boiled this gas mixture at 100°C for a week, and, in addition, he introduced an electric current into the chamber. At the end of the week, Miller analysed the chemicals that had been formed in the chamber and observed that three of the twenty amino acids, which constitute the basic elements of proteins, had been synthesised.
This experiment aroused great excitement among evolutionists and they promoted it as an outstanding success. Encouraged by the thought that this experiment definitely verified their theory, evolutionists immediately produced new scenarios. Miller had supposedly proved that amino acids could form by themselves. Relying on this, they hurriedly hypothesised the following stages. According to their scenario, amino acids had later by accident united in the proper sequences to form proteins. Some of these accidentally formed proteins placed themselves in cell membrane-like structures, which "somehow" came into existence and formed a primitive cell. The cells united in time and formed living organisms. The greatest mainstay of the scenario was Miller's experiment.
However, Miller's experiment was nothing but make-believe, and has since been proven invalid in many respects.
The Invalidity of Miller's Experiment
Nearly half a century has passed since Miller conducted his experiment. Although it has been shown to be invalid in many respects, evolutionists still advance Miller and his results as absolute proof that life could have formed spontaneously from non-living matter. When we assess Miller's experiment critically, without the bias and subjectivity of evolutionist thinking, however, it is evident that the situation is not as rosy as evolutionists would have us think. Miller set for himself the goal of proving that amino acids could form by themselves in earth's primitive conditions. Some amino acids were produced, but the conduct of the experiment conflicts with his goal in many ways, as we shall now see.
Miller isolated the amino acids from the environment as soon as they were formed, by using a mechanism called a "cold trap". Had he not done so, the conditions of the environment in which the amino acids formed would immediately have destroyed the molecules.
It is quite meaningless to suppose that some conscious mechanism of this sort was integral to earth's primordial conditions, which involved ultraviolet radiation, thunderbolts, various chemicals, and a high percentage of free oxygen. Without such a mechanism, any amino acid that did manage to form would immediately have been destroyed.
The primordial atmospheric environment that Miller attempted to simulate in his experiment was not realistic. Nitrogen and carbon dioxide would have been constituents of the primordial atmosphere, but Miller disregarded this and used methane and ammonia instead.
Why? Why were evolutionists insistent on the point that the primitive atmosphere contained high amounts of methane (CH4), ammonia (NH3), and water vapour (H2O)? The answer is simple: without ammonia, it is impossible to synthesise an amino acid. Kevin McKean talks about this in an article published in Discover magazine:
After a long period of silence, Miller himself also confessed that the atmospheric environment he used in his experiment was not realistic.
Another important point invalidating Miller's experiment is that there was enough oxygen to destroy all the amino acids in the atmosphere at the time when evolutionists thought that amino acids formed. This oxygen concentration would definitely have hindered the formation of amino acids. This situation completely negates Miller's experiment, in which he totally neglected oxygen. If he had used oxygen in the experiment, methane would have decomposed into carbon dioxide and water, and ammonia would have decomposed into nitrogen and water.
On the other hand, since no ozone layer yet existed, no organic molecule could possibly have lived on earth because it was entirely unprotected against intense ultraviolet rays.
In addition to a few amino acids essential for life, Miller's experiment also produced many organic acids with characteristics that are quite detrimental to the structures and functions of living things. If he had not isolated the amino acids and had left them in the same environment with these chemicals, their destruction or transformation into different compounds through chemical reactions would have been unavoidable. Moreover, a large number of right-handed amino acids also formed. The existence of these amino acids alone refuted the theory, even within its own reasoning, because right-handed amino acids are unable to function in the composition of living organisms and render proteins useless when they are involved in their composition.
To conclude, the circumstances in which amino acids formed in Miller's experiment were not suitable for life forms to come into being. The medium in which they formed was an acidic mixture that destroyed and oxidised any useful molecules that might have been obtained.
Evolutionists themselves actually refute the theory of evolution, as they are often wont to do, by advancing this experiment as "proof". If the experiment proves anything, it is that amino acids can only be produced in a controlled laboratory environment where all the necessary conditions have been specifically and consciously designed. That is, the experiment shows that what brings life (even the "near-life" of amino acids) into being cannot be unconscious chance, but rather conscious will – in a word, Creation. This is why every stage of Creation is a sign proving to us the existence and might of Allah.
The Miraculous Molecule: DNA
The theory of evolution has been unable to provide a coherent explanation for the existence of the molecules that are the basis of the cell. Furthermore, developments in the science of genetics and the discovery of the nucleic acids (DNA and RNA) have produced brand-new problems for the theory of evolution.
In 1953, the work of two scientists on DNA, James Watson and Francis Crick, launched a new era in biology. Many scientists directed their attention to the science of genetics. Today, after years of research, scientists have largely mapped the structure of DNA.
Here, we need to give some very basic information on the structure and function of DNA:
The molecule called DNA, which exists in the nucleus of each of the 100 trillion cells in our body, contains the complete construction plan of the human body. Information regarding all the characteristics of a person, from the physical appearance to the structure of the inner organs, is recorded in DNA by means of a special coding system. The information in DNA is coded within the sequence of four special bases that make up this molecule. These bases are specified as A, T, G, and C according to the initial letters of their names. All the structural differences among people depend on the variations in the sequence of these bases. There are approximately 3.5 billion nucleotides, that is, 3.5 billion letters, in a DNA molecule.
The DNA data pertaining to a particular organ or protein is included in special components called "genes". For instance, information about the eye exists in a series of special genes, whereas information about the heart exists in quite another series of genes. The cell produces proteins by using the information in all of these genes. The amino acids that constitute the structure of a protein are defined by the sequential arrangement of three nucleotides in the DNA.
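As a toy illustration of the triplet coding described above, the sketch below translates a short, made-up DNA fragment into amino acids using a deliberately tiny subset of the standard genetic code; both the fragment and the truncated codon table are illustrative assumptions, not data taken from the text.

```python
# Toy illustration of triplet (codon) coding: three DNA bases per amino acid.
# Only a handful of codons from the standard genetic code are included here.
CODON_TABLE = {
    "ATG": "Met",  # also the usual start codon
    "TTT": "Phe",
    "GGC": "Gly",
    "AAA": "Lys",
    "GCA": "Ala",
    "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence three bases at a time and look up each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return [CODON_TABLE.get(codon, "???") for codon in codons]

# A made-up 18-base fragment: six codons, i.e. five amino acids plus a stop.
print(translate("ATGTTTGGCAAAGCATAA"))
# ['Met', 'Phe', 'Gly', 'Lys', 'Ala', 'STOP']
```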
At this point, an important detail deserves attention. An error in the sequence of nucleotides making up a gene renders the gene completely useless. When we consider that there are 200 thousand genes in the human body, it becomes more evident how impossible it is for the millions of nucleotides making up these genes to form by accident in the right sequence. An evolutionist biologist, Frank Salisbury, comments on this impossibility by saying:
The number 4^1000 is equivalent to 10^600. We obtain this number by placing 600 zeros after a 1. As 1 followed by 12 zeros indicates a trillion, a figure with 600 zeros is indeed a number that is difficult to grasp.
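As a quick check on the order of magnitude, the following sketch evaluates the exponent without trying to print the full number; it yields roughly 10^602, which is consistent with the rounded 10^600 figure used here.

```python
# Sanity check: 4 possible bases at each of 1000 nucleotide positions.
from math import log10

n_positions = 1000   # nucleotide positions considered in the example
n_bases = 4          # A, T, G, C

exponent = n_positions * log10(n_bases)
print(f"4^1000 is about 10^{exponent:.0f}")   # about 10^602
```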
Evolutionist Prof. Ali Demirsoy was forced to make the following admission on this issue:
In addition to all these improbabilities, DNA can barely be involved in a reaction because of its double-chained spiral shape. This also makes it impossible to think that it can be the basis of life.
Moreover, while DNA can replicate only with the help of some enzymes that are actually proteins, the synthesis of these enzymes can be realised only by the information coded in DNA. As they both depend on each other, either they have to exist at the same time for replication, or one of them has had to be "created" before the other. American microbiologist Jacobson comments on the subject:
The quotation above was written two years after the disclosure of the structure of DNA by James Watson and Francis Crick. Despite all the developments in science, this problem remains unsolved for evolutionists. To sum up, the need for DNA in reproduction, the necessity of the presence of some proteins for reproduction, and the requirement to produce these proteins according to the information in the DNA entirely demolish evolutionist theses.
Two German scientists, Junker and Scherer, explained that the synthesis of each of the molecules required for chemical evolution necessitates distinct conditions, and that the probability of these materials, which in theory would have to be obtained by very different methods, ever coming together is zero:
In short, the theory of evolution is unable to prove any of the evolutionary stages that allegedly occur at the molecular level.
To summarise what we have said so far, neither amino acids nor their products, the proteins making up the cells of living beings, could ever be produced in any so-called "primitive atmosphere" environment. Moreover, factors such as the incredibly complex structure of proteins, their right-hand, left-hand features, and the difficulties in the formation of peptide bonds are just parts of the reason why they will never be produced in any future experiment either.
Even if we suppose for a moment that proteins somehow did form accidentally, that would still have no meaning, for proteins are nothing at all on their own: they cannot themselves reproduce. Protein synthesis is only possible with the information coded in DNA and RNA molecules. Without DNA and RNA, it is impossible for a protein to reproduce. The specific sequence of the twenty different amino acids encoded in DNA determines the structure of each protein in the body. However, as has been made abundantly clear by all those who have studied these molecules, it is impossible for DNA and RNA to form by chance.
The Fact of Creation
With the collapse of the theory of evolution in every field, prominent names in the discipline of microbiology today admit the fact of creation and have begun to defend the view that everything is created by a conscious Creator as part of an exalted creation. This is already a fact that people cannot disregard. Scientists who can approach their work with an open mind have developed a view called "intelligent design". Michael J. Behe, one of the foremost of these scientists, states that he accepts the absolute being of the Creator and describes the impasse of those who deny this fact:
The result of these cumulative efforts to investigate the cell – to investigate life at the molecular level – is a loud, clear, piercing cry of "design!" The result is so unambiguous and so significant that it must be ranked as one of the greatest achievements in the history of science. This triumph of science should evoke cries of "Eureka" from ten thousand throats.
But, no bottles have been uncorked, no hands clapped. Instead, a curious, embarrassed silence surrounds the stark complexity of the cell. When the subject comes up in public, feet start to shuffle, and breathing gets a bit laboured. In private, people are a bit more relaxed; many explicitly admit the obvious but then stare at the ground, shake their heads, and let it go like that. Why does the scientific community not greedily embrace its startling discovery? Why is the observation of design handled with intellectual gloves? The dilemma is that while one side of the elephant is labelled intelligent design, the other side must be labelled God.19
Today, many people are not even aware that they are in a position of accepting a body of fallacy as truth in the name of science, instead of believing in Allah. Those who do not find the sentence "Allah created you from nothing" scientific enough can believe that the first living being came into being by thunderbolts striking a "primordial soup" billions of years ago.
As we have described elsewhere in this book, the balances in nature are so delicate and so numerous that it is entirely irrational to claim that they developed "by chance". No matter how much those who cannot set themselves free from this irrationality may strive, the signs of Allah in the heavens and the earth are completely obvious and they are undeniable.
Allah is the Creator of the heavens, the earth and all that is in between.
The signs of His being have encompassed the entire universe.
1. Charles Darwin, The Origin of Species: By Means of Natural Selection or the Preservation of Favoured Races in the Struggle for Life, London: Senate Press, 1995, p. 134.
2. Derek A. Ager. "The Nature of the Fossil Record." Proceedings of the British Geological Association, vol. 87, no. 2, (1976), p. 133.
3. T.N. George, "Fossils in Evolutionary Perspective", Science Progress, vol.48, (January 1960), p.1-3
4. Richard Monestarsky, Mysteries of the Orient, Discover, April 1993, p.40.
5. Stefan Bengston, Nature 345:765 (1990).
6. Earnest A. Hooton, Up From the Ape, New York: McMillan, 1931, p.332.
7. Stephen Jay Gould, Smith Woodward's Folly, New Scientist, 5 April, 1979, p. 44.
8. Charles E. Oxnard, "The Place of Australopithecines in Human Evolution: Grounds for Doubt", Nature, No. 258, p. 389.
9. Richard Leakey, The Making of Mankind, London: Sphere Books, 1981, p. 116.
10. Eric Trinkaus, Hard Times Among the Neanderthals, Natural History, No. 87, December 1978, p. 10, R.L. Holoway, "The Neanderthal Brain: What was Primitive?", American Journal of Physical Anthrophology Supplement, No. 12, 1991, p. 94
11. Ali Demirsoy, Kalitim ve Evrim (Inheritance and Evolution), Ankara: Meteksan Yayinlari 1984, p. 61
12. Ali Demirsoy, Kalitim ve Evrim (Inheritance and Evolution), Ankara: Meteksan Yayinlari 1984, p. 61
13. Fabbri Britannica Science Encyclopaedia, Vol. 2, No. 22, p. 519
14. Kevin McKean, Bilim ve Teknik, No. 189, p. 7
15. Frank B. Salisbury, "Doubts about the Modern Synthetic Theory of Evolution", American Biology Teacher, September 1971, p. 336.
16. Ali Demirsoy, Kalitim ve Evrim (Inheritance and Evolution), Ankara: Meteksan Publishing Co., 1984, p. 39.
17. Homer Jacobson, "Information, Reproduction and the Origin of Life", American Scientist, January, 1955, p.121.
18. Reinhard Junker & Siegfried Scherer, "Entstehung Gesiche Der Lebewesen", Weyel, 1986, p. 89.
19. Michael J. Behe, Darwin's Black Box, New York: Free Press, 1996, pp. 232-233.
|
<urn:uuid:fe9bd018-5373-4204-9f99-710c805250b0>
|
CC-MAIN-2013-20
|
http://harunyahya.com/en/books/531/Allah_Is_Known_Through_Reason/chapter/100/Evolution_Deceit
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.960514
| 10,884
| 2.859375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"methane"
],
"nature": [
"habitat"
]
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
Gasoline prices have increased rapidly during the past several years, pushed up mainly by the sharply rising price of oil. A gallon of gasoline in the US rose from $1.50 in 2002 to $2 in 2004 to $2.50 in 2006 to over $4 at present. Gasoline prices almost trebled during these 6 years compared to very little change in nominal gas prices during the prior fifteen years. The US federal tax on gasoline has remained at 18.4 cents per gallon during this period of rapid growth in gasoline prices, while state excise taxes add another 21.5 cents per gallon. In addition, many local governments levy additional sales and other taxes on gasoline. Gasoline taxes have not risen much even as the price of gasoline exploded upward.
The price of gasoline is much lower than in other rich countries mainly because American taxes are far smaller. For example, gasoline taxes in Germany and the United Kingdom amount to about $3 per gallon. Some economists and environmentalists have called for large increases in federal, state, and local taxes to make them more comparable to gasoline taxes in other countries. Others want these taxes to rise by enough so that at least they would have kept pace with the sharply rising pre-tax fuel prices. At the same time two presidential candidates, Hillary Clinton and John McCain, proposed a temporary repeal during this summer of the federal tax in order to give consumers a little relief from the higher gas prices. We discuss the optimal tax on gasoline, and how the sharp increase in gas prices affected its magnitude.
Taxes on gasoline are a way to induce consumers to incorporate the "external" damages to others into their decision of how much to drive and where to drive. These externalities include the effects of driving on local and global pollution, such as the contribution to global warming from the carbon emitted into the atmosphere by burnt gasoline. One other important externality is the contribution of additional driving to road congestion that slows the driving speeds of everyone and increases the time it takes to go a given distance. Others include automobile accidents that injure drivers and pedestrians, and the effect of using additional gasoline on the degree of dependence on imported oil from the Middle East and other not very stable parts of the world.
A careful 2007 study by authors from Resources for the Future evaluates the magnitudes of all these externalities from driving in the US (see Harrington, Parry, and Walls, "Automobile Externalities and Policies", Journal of Economic Literature, 2007, pp 374-400). They estimate the total external costs of driving at 228 cents per gallon of gas used, or at 10.9 cents per mile driven, with the typical car owned by American drivers. Their breakdown of this total among different sources is interesting and a little surprising. They attribute only 6 cents of the total external cost to the effects of gasoline consumption on global warming through the emission of carbon into the atmosphere from the burning of gasoline, and 12 cents from the increased dependency on imported oil. Perhaps their estimate of only 6 cents per gallon is a large underestimate of the harmful effects of gasoline use on global warming. Yet even if we treble their estimate, that only raises total costs of gasoline use due to the effects on global warming by 12 cents per gallon. That still leaves the vast majority of the external costs of driving to other factors.
They figure that local pollution effects amount to 42 cents per gallon, which makes these costs much more important than even the trebled cost of global warming. According to their estimates, still more important costs are those due to congestion and accidents, since these are 105 cents and 63 cents per gallon, respectively. Their figure for the cost of traffic accidents is likely too high – as the authors recognize – because it includes the cost in damages to property and person of single vehicle accidents, as when a car hits a tree. Presumably, single vehicle accidents are not true externalities because drivers and their passengers would consider their possibility and internalize them into their driving decisions. Moreover, the large effect of drunk driving on the likelihood of accidents should be treated separately from a gasoline tax by directly punishing drunk drivers rather than also punishing sober drivers who are far less likely to get into accidents.
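For readers who want to follow the arithmetic, the sketch below simply tallies the per-gallon components quoted from the Harrington, Parry, and Walls study and converts the total to a per-mile figure. The fuel economy used in the conversion is an assumption chosen so that the result matches the article's 10.9 cents per mile, not a number taken directly from the study.

```python
# Tally of the quoted external-cost components (cents per gallon of gasoline).
external_costs = {
    "global warming": 6,
    "oil dependency": 12,
    "local pollution": 42,
    "congestion": 105,
    "accidents": 63,
}

total_cents_per_gallon = sum(external_costs.values())   # 228
assumed_mpg = 20.9   # assumed average fuel economy for the typical car

print(f"total external cost: {total_cents_per_gallon} cents per gallon")
print(f"per mile: {total_cents_per_gallon / assumed_mpg:.1f} cents per mile")  # about 10.9
```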
On the surface, these calculations suggest that American taxes on gasoline, which total about 45 cents per gallon across all levels of government, are much too low. However, the federal tax of 18.4 cents per gallon is almost exactly equal to their figure of 18 cents per gallon for the external costs of global warming and oil dependency. To be sure, a trebled estimate for global warming would bring theirs up to 30 cents per gallon. However, the federal government also taxes driving through its mandated fuel efficiency standards for cars, although this is an inefficient way to tax driving since it taxes the type of car rather than driving. Still, the overall level of federal taxes does not fall much short, if at all, of the adjusted estimate of 30 cents per gallon of damages due to the effects of gasoline use on global warming and oil dependency.
Any shortfall in taxes would be at the state and local levels in combating externalities due to local pollution effects, and to auto accidents and congestion on mainly local roads. Here too, however, the discrepancy between actual and optimal gasoline taxes is far smaller than it may seem, and not only because single vehicle accidents are included in their estimate of the cost of car accidents, and accidents due to drunk driving should be discouraged through punishments to drunk drivers. One important reason is that congestion should be reduced not by general gasoline taxes, but by special congestion taxes – as used in London and a few other cities – that vary in amount with the degree of congestion (see our discussion of congestion taxes on February 12, 2006). Congestion taxes are a far more efficient way to reduce congestion than are general taxes on gasoline that apply also when congestion is slight.
In addition and often overlooked, the sharp rise in pre-tax gasoline prices has partly accomplished the local pollution and auto accident goals that would be achieved by higher gas taxes. For higher prices have cut driving, just as taxes would, and will cut driving further in the future as consumers continue to adjust the amount and time of their driving to gasoline that costs more than $4 a gallon. Reduced driving will lower pollution and auto accidents by reducing the number of cars on the road during any time period, especially during heavily traveled times when pollution and accidents are more common.
The effects of high gas prices in reducing congestion, local pollution, and accident externalities could be substantial. These authors estimate the size of local driving externalities, aside from congestion costs, at 105 cents per gallon. Even after the sharp run up in gas prices, this may still exceed the 28 cents per gallon of actual state and local taxes, but the gap probably is small. It surely is a lot smaller than it was before gas prices exploded on the back of the climb in the cost of oil. In effect, by reducing driving, higher gasoline prices have already done much of the work in reducing externalities that bigger gas taxes would have done when prices were lower.
|
<urn:uuid:f233a3f5-7030-4ac7-86a7-3c91855cbc39>
|
CC-MAIN-2013-20
|
http://www.becker-posner-blog.com/2008/07/should-us-taxes-on-gasoline-be-higher-becker.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.963967
| 1,427
| 2.953125
| 3
|
[
"climate"
] |
{
"climate": [
"global warming"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
There is an overwhelming amount of news coverage related to the state of the economy, international politics, and various domestic programs. The environment and environmental policy, on the other hand, are being ignored. When environmental policy is discussed, many citizens and representatives champion the need for substantial policy reform. However, when actual policies are introduced, they are typically ignored or delayed. Due to the current state of the environment, politicians need to place a higher priority on environmental policy.
First, the public needs to understand exactly what environmental policy is and how it affects them. Environmental policy is defined as “any action deliberately taken to manage human activities with a view to prevent, reduce, or mitigate harmful effects on nature and natural resources, and ensuring that man-made changes to the environment do not have harmful effects on humans or the environment,” (McCormick 2001). It generally covers air and water pollution, waste management, ecosystem management, biodiversity protection, and the protection of natural resources, wildlife, and endangered species. Issues like these, affect everyone across the globe and cannot be ignored.
Environmental policy became a national issue under Theodore Roosevelt, when National Parks were established in hopes of preserving wildlife for future generations. The modern environmental movement began in the 1970s during the Nixon administration, when a large amount of environmental legislation started rolling out. Nixon signed the National Environmental Policy Act (NEPA), which established a national policy promoting the enhancement of the environment and set requirements for all government agencies to prepare Environmental Assessments and Environmental Impact Statements. Nixon also established the President’s Council on Environmental Quality. Legislation of the time established the Environmental Protection Agency (EPA), the Clean Air Act, and the Federal Water Pollution Control Act. The EPA has received a lot of notoriety recently, mostly for Republicans’ desire to get rid of it, though it remains vital to protecting the environment.
Rising gas prices in the 1970s inspired a wave of greener vehicles, a phenomenon witnessed again in 2008. High energy costs motivated Jimmy Carter to install solar panels on the White House roof, a clear message that helping the environment was everyone’s responsibility.
Focus on environmental policy began dwindling in the 1980s, though, under the Reagan administration. As the Soviet Union began to weaken and fall, the restructuring of Europe became a priority and the environment quickly slipped to the back burner.
Many of the environmental issues that the public faced in the late twentieth century are still issues today, including climate change, dwindling fossil fuels, the need for sustainable energy solutions, ozone depletion, and resource depletion. Today, with the plethora of issues currently affecting the environment, environmental policy needs to become a priority again.
As gas prices reached record highs in 2007 and 2008, there was a surge in Green Startups to help companies struggling with high fuel costs. As fuel costs decreased, the focus on these Green Startups decreased as well. However, now that gas prices are again on the rise, there will likely be a green resurgence in the market.
These green initiatives should not rise and fall with the cost of gas. Environmental issues have impacts and implications far greater than the bottom dollar.
Rising sea levels, droughts, and other extreme weather events have enormous human impacts, killing or displacing scores of people each year. According to an Oxfam International study at the University of Belgium, the earth is currently experiencing approximately 500 natural disasters a year, affecting over 250 million people (Gutierrez 2008).
It is paramount that our government focus significant attention and funding on environmental policy. If we continue to disregard the environment, the planet might be degraded to the point where it is no longer habitable. We only have one Earth; we need to do our best to preserve it.
Gutierrez, David. “Natural Disasters Up More Than 400 Percent in Two Decades.”
NaturalNews.com, June 5, 2008. Accessed March 10, 2012. http://www.naturalnews.com/023362.html.
|
<urn:uuid:601a56d2-b3ec-413e-80c1-296b8bc9080e>
|
CC-MAIN-2013-20
|
http://www.nupoliticalreview.com/?p=1545
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939738
| 812
| 3.4375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"extreme weather"
],
"nature": [
"biodiversity",
"ecosystem",
"endangered species"
]
}
|
{
"strong": 5,
"weak": 0,
"total": 5,
"decision": "accepted_strong"
}
|
PROVIDENCE, RI -- Shifting glaciers and exploding volcanoes aren't confined to Mars' distant past, according to two new reports in the journal Nature.
Glaciers moved from the poles to the tropics 350,000 to 4 million years ago, depositing massive amounts of ice at the base of mountains and volcanoes in the eastern Hellas region near the planet's equator, based on a report by a team of scientists analyzing images from the Mars Express mission. Scientists also studied images of glacial remnants on the western side of Olympus Mons, the largest volcano in the solar system. They found additional evidence of recent ice formation and movement on these tropical mountain glaciers, similar to ones on Mount Kilimanjaro in Africa.
In a second report, the international team reveals previously unknown traces of a major eruption of Hecates Tholus less than 350 million years ago. In a depression on the volcano, researchers found glacial deposits estimated to be 5 to 24 million years old.
James Head, professor of geological sciences at Brown University and an author on the Nature papers, said the glacial data suggests recent climate change in Mars' 4.6-billion-year history. The team also concludes that Mars is in an "interglacial" period. As the planet tilts closer to the sun, ice deposited in lower latitudes will vaporize, changing the face of the Red Planet yet again.
Discovery of the explosive eruption of Hecates Tholus provides more evidence of recent Mars rumblings. In December, members of the same research team revealed that calderas on five major Mars volcanoes were repeatedly active as little as 2 million years ago. The volcanoes, scientists speculated, may even be active today.
"Mars is very dynamic," said Head, lead author of one of the Nature reports. "We see that the climate change and geological forces that drive evolution on Earth are happening there."
Head is part of a 33-institution team analyzing images from Mars Express, launched in June 2003 by the European Space Agency. The High Resolution Stereo Camera, or HRSC, on board the orbiter is producing 3-D images of the planet's surface.
These sharp, panoramic, full-color pictures provided fodder for a third Nature report. In it, the team offers evidence of a frozen body of water, about the size and depth of the North Sea, in southern Elysium.
A plethora of ice and active volcanoes could provide the water and heat needed to sustain basic life forms on Mars. Fresh data from Mars Express – and the announcement that live bacteria were found in a 30,000-year-old chunk of Alaskan ice – is fueling discussion about the possibility of past, even present, life on Mars. In a poll taken at a European Space Agency conference last month, 75 percent of scientists believe bacteria once existed on Mars and 25 percent believe it might still survive there.
Head recently traveled to Antarctica to study glaciers, including bacteria that can withstand the continent's dry, cold conditions. The average temperature on Mars is estimated to be 67 degrees below freezing. Similar temperatures are clocked in Antarctica's frigid interior.
"We're now seeing geological characteristics on Mars that could be related to life," Head said. "But we're a long way from knowing that life does indeed exist. The glacial deposits we studied would be accessible for sampling in future space missions. If we had ice to study, we would know a lot more about climate change on Mars and whether life is a possibility there."
The European Space Agency, the German Aerospace Center and the Freie Universitaet in Berlin built and flew the HRSC and processed data from the camera. The National Aeronautics and Space Administration (NASA) supported Head's work.
|
<urn:uuid:3b3957fd-03ff-4083-b0d3-e93cff20fba9>
|
CC-MAIN-2013-20
|
http://www.eurekalert.org/pub_releases/2005-03/bu-fai031405.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.933385
| 821
| 3.703125
| 4
|
[
"climate"
] |
{
"climate": [
"climate change"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Teachers, register your class or environmental club in our annual Solar Oven Challenge! Registration begins in the fall, and projects must be completed in the spring to be eligible for prizes and certificates.
Who can participate?
GreenLearning's Solar Oven Challenge is open to all Canadian classes. Past challenges have included participants from grade 3 through to grade 12. Older students often build solar ovens as part of the heat unit in their Science courses. Other students learn about solar energy as a project in an eco-class or recycling club.
How do you register?
1. Registration is now open to Canadian teachers. To register, send an email to Gordon Harrison at GreenLearning. Include your name, school, school address and phone number, and the grade level of the students who will be participating.
2. After you register, you will receive the Solar Oven Challenge Teacher's Guide with solar oven construction plans. Also see re-energy.ca for construction plans, student backgrounders, and related links on solar cooking and other forms of renewable energy. At re-energy.ca, you can also see submissions, photos and recipes from participants in past Solar Oven Challenges.
3. Build, test and bake with solar ovens!
4. Email us photos and descriptions of your creations by the deadline (usually the first week of June).
5. See your recipes and photos showcased at re-energy.ca. Winners will be listed there and in GreenLearning News.
|
<urn:uuid:c9af9ded-f40a-4b50-aa4b-6454d2943f75>
|
CC-MAIN-2013-20
|
http://www.greenlearning.ca/re-energy/solar-oven-challenge
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.92865
| 301
| 2.71875
| 3
|
[
"climate"
] |
{
"climate": [
"renewable energy"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
E-Cat Test Validates Cold Fusion Despite Challenges
The test of the E-Cat (Energy Catalyzer) that took place on October 6, 2011 in Italy has validated Andrea Rossi's claim that the device produces excess energy via a novel Cold Fusion nuclear reaction. Despite its success, the test was flawed, and could have been done in a way that produced more spectacular results -- as if confirmation of cold fusion is not already stunning enough.
Andrea Rossi stands in front of his E-Cat apparatus, October 6, 2011. (Photo by Maurizio Melis of Radio24)
by Hank Mills
Pure Energy Systems News
Andrea Rossi has made big claims for the past year, about his cold fusion "E-Cat" (Energy Catalyzer) technology. He has claimed that it produces vast amounts of energy via a safe and clean low energy nuclear reaction that consumes only tiny amounts of nickel and hydrogen. A series of tests had been performed earlier this year that seemed to confirm excess energy is produced by the systems tested. Some of the tests were particularly impressive, such as one that lasted eighteen hours, and was performed by Dr. Levi of the University of Bologna. Unfortunately, the tests were not planned out as well as they could have been and had flaws.
The most recent test that took place on October 6, 2011 in Bologna, Italy, was supposed to address many of the concerns about the previous tests, and be performed in a way that would put to rest many issues that had been discussed continually on the internet. Despite showing clear evidence of excess energy -- which is absolutely fantastic -- this most recent test failed to live up to its full potential. It was a big success in that it validated the claim the E-Cat produces excess energy via cold fusion, but it was not nearly as successful as it could have been. Or as successful as we, the outsiders looking in, would like for it to have been.
The Inventor's Mindset
One thing that should be stated is that inventors do not always think like the people who follow their inventions. They have their own mindset and way of looking at things. This should be obvious, because they are seeing *everything* from a different perspective. For example, when we think seeing the inside of an important component would be exciting and informative, they consider it a threat to their intellectual property. Or, for example, when we would like to see a test run for days, they are thinking that a few hours is long enough. In their mind, they know their technology works, and running it for hours, days, or weeks would be more of a chore to them than an exciting event.
In Rossi's case, he has worked with these reactors for many years. He has tested them time and time again. In fact, he has built hundreds of units (of different models), and has tested every one of them. He is aware of how the units operate and how they perform. Actually, for a period of many months to a year or more, he had an early model of the E-Cat heating one of his offices in Italy. Satisfying the curiosity of internet "chatters" by operating a unit for an extended period of time -- beyond what he thinks is needed to prove the effect -- is just a waste of his time, according to his thinking. He could spend the time getting the one megawatt plant ready to launch.
Don't forget, Rossi is a busy person. In addition to finishing the one megawatt plant, he has a new partner company to find, a wife at home, and a life to live! We need to consider that he works sixteen to eighteen hours a day building units, testing them, addressing other issues about the E-Cat. Although he is a very helpful person in many ways (willing to communicate with people and answer questions), he simply does not have the time to grant all of the many requests made of him. If he did, he could not get any work done at all, and the E-Cat would never be launched, or ever make it to the market place!
The Outsider's Mindset
I consider myself an outsider. I have never built a cold fusion device, have never spent years working to develop a technology, and have never gone through the grueling process of trying to bring a product into the market place. Although I spend a lot of time researching various technologies on the internet, I don't work sixteen to eighteen hours a day. In addition, I have no vested interest in the success of any technology, other than simply wanting at least one to hit the market place, ASAP.
As an outsider, I do not think like Rossi thinks. I don't think the majority of people think like Rossi thinks, because they are not in his shoes. They are not working to the point of exhaustion, and do not have years of their lives invested in an exotic technology. Because we do not think like Rossi, his actions, or sometimes his lack thereof, can seem strange, bizarre, or odd. Sometimes, they can make us want to smack ourselves, to make sure we are not in some sort of strange dream.
The recent test on October 6, 2011 is an example of a situation in which outsiders would have liked to have seen a very different test. Here are examples of how an outsider would have liked to see the test performed, compared to Rossi's possible mindset.
(Please note that I am making speculations about what Rossi is thinking, and his mindset. I do not know for sure if my guesses are accurate. If they are not, then I would like to apologize to Rossi, and give him the chance to respond in any way he sees fit.)
In the recent test, the output-producing capability of the reactors was throttled down for safety reasons. This may have been done by keeping the hydrogen pressure low, or by adding less of the catalyst to the nickel powder. Also, only one of the three reactors inside the module was used in the test. For an experimental test to prove the effect beyond a shadow of a doubt, I, as an outsider, would have loved to have seen the device fully throttled up, despite the safety risks. Even if it meant everyone who attended would have had to sign long legal disclaimers, it would have been worth it.
I think it would have been great if all three reactors were utilized, and they all were adjusted to produce their maximum level of output. This would have increased the amount of output produced dramatically, and would have reduced the amount of input needed. The more heat produced by the system, the less heat would have needed to be input via the electric resistors.
Rossi, on the other hand, probably thought throttling up the device to a high level was not worth the risk, and was not needed to prove that excess energy was being produced. It is true that an explosion causing injuries -- while probably VERY unlikely -- could result in a setback of his project, and possible legal ramifications. Also, in reality, the test proved excess energy was being produced even with only one, throttled down reactor being used.
So even though a test of the device adjusted to operate at full power would have been useful and exciting, it was not absolutely needed for what Rossi wanted to accomplish.
I would like to ask Rossi to consider performing a demonstration with a module both adjusted to operate at full power, and utilizing all three reactor cores. Even if he has to limit the number of people involved, perform the test remotely with cameras monitoring the module, utilize a blast shield, or only allow certain individuals (who have signed disclaimers) to go into the room in which the module is running.
A Longer Self Sustain
As an outsider, I have not had the chance to look at test data from these devices self-sustaining for long periods of time -- 12 hours, 24 hours, days, weeks, etc. I would really like to see one of these units self sustain for a *very* long period of time. This is not because I think the output of the E-Cat during the recent test was due to stored energy being released (the 'thermal inertia' theory being floated around the internet). In fact, I think that the flat line in NyTeknik's graph -- showing self sustain mode for three and a half hours without any drop in output temperature -- provides clear evidence against the thermal inertia theory. The reason I would like to see a longer period of self sustain, is that it would not only document a huge gain of energy, but one that no individual could rationally deny!
Rossi has claimed that these devices represent an alternative energy solution that could change the world. I think this is true. However, to show just how much potential this technology has, an even more extended test of the E-Cat in self sustain mode (at full power or at least with all three reactors inside the module being used), would have been much more impressive. I am not saying the Oct. 6 test was not impressive -- it was very significant because it demonstrated excess energy and proof of cold fusion -- but that a longer test would have been better. It would have done more to shut up the cynics (a few of which will never change their minds), and help the technology get into the mainstream (dumbstream) media.
I really don't think Rossi cares too much about showing off the technology's full potential, at this point. He also does not seem to appear to want the attention of the mainstream media, or at least any more than he thinks he needs. If he did, the test would have been far different, and would have produced such a gigantic amount of excess energy everyone's jaws would have dropped. My jaw dropped when I saw the flat line during self sustain mode (because it proved beyond a doubt the system was producing excess energy), but my jaw did not drop as far as it could have, if the period of operation had lasted longer.
Interestingly, I have known inventors of unrelated energy technologies who purposely held back from showing the *best* version of their technology. They did not want to show off too much, because they did not want to deal with the fallout of attracting too much attention. Instead of performing an amazing demonstration, they performed one that proved the point -- at least to their satisfaction -- but would not attract too much attention. I think Rossi may feel the same way. If he had his way, he would never have done a single test before the launch of the one megawatt plant. It was Focardi who convinced him to do a public test, because he feared that (due to health problems) he might not live long enough to see the technology revealed to the world.
A longer test (at least 12 hours) in self sustain mode would have been great, exciting, and would have produced even more excess energy. However, in Rossi's mind, it was not needed, for potentially valid reasons (at least from the perspective of someone on the inside).
I would simply like to humbly plead with Rossi, to try and step in the shoes of the outsider, and at the next test allow the module to run for a longer period in self sustain mode.
Modern Testing Methods and Tools
I have looked at the data acquired during the test, but have not had a chance to study it as in depth as I would like to. The data shows a clear gain of energy in my opinion, and confirms that the E-Cat is producing excess energy. As I said before, the test was a success. However, it could have been performed in a more modern way.
For example, all of the temperature, power input, and water flow measurements should have been fed into the same computer and recorded in real time. That way, all the data would have been automatically recorded into one data set, including the exact time of every measurement. It seems data collection was not done this way at the test, and some of the data was actually taken by hand!
Because the data was not all automatically recorded into one computer during the test, NyTeknik (who had the exclusive right to be the first to post a report on the test) has not yet posted a graph that charts all the measurements of all the factors of the test. What I would like to see is a single high-resolution graph that shows all of the measurements that were taken of every parameter of the test. If one graph showing everything would be too complex for a non-expert to easily interpret, then a series of graphs would be ideal. This would allow everyone to more simply determine the total energy in and the total energy out.
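To make that last point concrete, here is a minimal sketch of the kind of tally a merged data set would allow. The column names, the single water circuit, and the simple liquid-water calorimetry (mass flow times specific heat times temperature rise) are all my assumptions for illustration; the actual test used two water circuits and a heat exchanger, so a real analysis would be more involved.

```python
# Minimal sketch: tallying energy in vs. energy out from a merged test log.
# The CSV columns and the single-circuit water calorimetry are assumptions.
import csv

C_P = 4186.0  # specific heat of liquid water, J/(kg*K)

def total_energy_kwh(rows):
    e_in = e_out = 0.0  # accumulated joules
    prev_t = None
    for r in rows:
        t = float(r["time_s"])                   # seconds since start of test
        if prev_t is not None:
            dt = t - prev_t
            p_in = float(r["electric_power_w"])  # resistors + frequency generator
            flow = float(r["water_flow_kg_s"])
            d_temp = float(r["t_out_c"]) - float(r["t_in_c"])
            e_in += p_in * dt
            e_out += flow * C_P * d_temp * dt
        prev_t = t
    return e_in / 3.6e6, e_out / 3.6e6           # joules -> kilowatt-hours

with open("ecat_test_log.csv") as f:             # hypothetical merged log file
    kwh_in, kwh_out = total_energy_kwh(csv.DictReader(f))

print(f"in: {kwh_in:.1f} kWh, out: {kwh_out:.1f} kWh, ratio: {kwh_out / kwh_in:.1f}")
```

With everything in one file like this, anyone could re-run the totals for themselves instead of piecing numbers together from hand-written notes.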
The data collected and the manner in which it was collected is good enough to show there was a significant amount of excess energy produced, especially during self sustain mode. It may also be good enough to show even more details about the excess energy produced. Sadly, I'm not an expert in scientific data interpretation, so it takes me more time to interpret data than an expert who does so full time (like Rossi).
I hope that when I have had the time to examine the data in more depth, I will see that Rossi's claims about the results of the test (not just excess energy but a six fold gain of energy, in a worst case scenario) are accurate. At this point, I am not going to doubt him. He is the expert, and there are many people going over the data, and hopefully more data from the test will be coming in the near future.
What I would like to do, is request that he upgrade his data acquisition methods for any upcoming public tests. However, from Rossi's perspective, the way the data was acquired was good enough, and proved the point he wanted to make. I respect his view, but I do hope that he will change his mind in the future.
For the record, I am not stating that I think better data acquisition techniques are needed to verify his technology produces excess energy, and even significant amounts of it. I simply think it would make analysis of test data much simpler, quicker, and precise.
One of the most useful tools in the scientific method is a control. A control is a baseline case that is kept identical to the experiment in every way except for the one factor being tested. For example, if you were giving an experimental drug to a hundred people, you might want to have a number of additional people who do not receive the drug. You would compare how the drug affects the people who consumed it to those who did not receive it at all. By comparing the two sets of people, those who consumed the drug and those who did not, you could more easily see the effectiveness of the drug -- or whether it was doing harm.
In Rossi's test, a control system would have been an E-Cat module that was setup in the exact same way, except it would have not been filled with hydrogen gas. It would have had the same flow of water going through it, the same electrical input, and it would have operated for the same length of time as the E-Cat unit with hydrogen. By comparing the two, you could easily see the difference between the "control" E-Cat (that was not having nuclear reactions take place), and the "real" E-Cat (that was producing excess heat).
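As a sketch of how a control run would sharpen the analysis, the comparison reduces to a simple subtraction: at matched electrical input and water flow, whatever thermal output the active unit shows beyond the hydrogen-free control is excess heat. The numbers below are purely illustrative placeholders, not measurements from any test.

```python
# Illustrative only: comparing an active E-Cat run against a hydrogen-free
# control run sampled at the same times, with identical input power and flow.
def excess_power_w(active_w, control_w):
    return [a - c for a, c in zip(active_w, control_w)]

active  = [2500.0, 2600.0, 2700.0, 2650.0]   # thermal output of the hydrogen-loaded unit (W)
control = [ 800.0,  820.0,  810.0,  805.0]   # thermal output of the control, no hydrogen (W)
print(excess_power_w(active, control))        # anything persistently above zero is excess heat
```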
If a control had been used in the experiment, the excess heat would be even more obvious. It would have been so obvious, that it could have made the test go from a major success (with some flaws), to the most spectacular scientific test in the last hundred years.
Yes, a control would have made that much of a difference!
I understand that Rossi may not see the need for a control, when the test that was performed clearly showed excess energy without it. A control might have made the experiment so mind blowingly amazing, it could have attracted too much media attention, too many scientists that would want to get involved, and too many individuals wanting additional information. The result could have been that Rossi would not even have the time to finish his one megawatt plant.
However, from the view point of an outsider, I think a control would have greatly benefited the experiment. If it created too much media attention, perhaps someone could volunteer to work for a month as an unpaid intern, filtering through all of the requests from media representatives, and taking care of many non-technical tasks, so Rossi could focus on getting the one megawatt plant ready!
I sincerely hope that during the test of the one megawatt plant, and any tests before then, a control run will be performed, in which no hydrogen is placed in the reactors.
Rossi's Statement about the Test Results
Andrea Rossi responded to an email we sent him that had questions about the test. Here is the email, and his responses.
THANK YOU FOR YOUR CONTINUOUS ATTENTION. PLEASE FIND THE ANSWERS IN BLOCK LETTERS ALONG YOUR TEXT:
Dear Andrea Rossi,
In regards to the latest test of the Energy Catalyzer, I have a number of questions I hope you can answer.
1) My understanding is that if a reactor core is not adjusted to be under-powered (below its maximum potential) in self-sustain mode, it can have a tendency to become unstable and climb in output. If the reactor is left in an unstable self-sustaining mode for too long, the output can climb to potentially dangerous levels. Can you provide some information about how the reactor core in the test was adjusted to self-sustain in a safe manner?
NO, VERY SORRY
a) For example, there was only one active reactor core in the module tested. How was the single reactor core adjusted to be under-powered?
b) Is adjusting the reactor core as simple as lowering the hydrogen pressure?
2) What is the power consumption of the device that "produces frequencies" that was mentioned in the NyTeknik article? Although the power consumption of this device is probably insignificant, providing a figure could help put to rest the idea (that some are suggesting) that a large amount of power was being consumed by the frequency-generating device, and transmitted into the reactor.
THE ENERGY CONSUMED FROM THE FREQUENCY GENERATOR IS 50 WH/H AND IT HAS BEEN CALCULATED, BECAUSE THIS APPARATUS WAS PLUGGED IN THE SAME LINE WHERE THE ENERGY-CONSUME MEASUREMENT HAS BEEN DONE
a) Can you tell us anything more about this frequency generating device and its function?
NO, SORRY, THIS IS A CONFIDENTIAL ISSUE
b) Is the frequency-generating device turned on at all times when a module is in operation, or only when a module is in self-sustain mode?
c) Some are suggesting that this device is "the" catalyst that drives the reactions in the reactor core. However, you have stated in the past that the catalyst is actually one or more physical elements (in addition to nickel and hydrogen) that are placed in the reactor core. Can you confirm that physical catalysts are used in the reactor?
YES, I CONFIRM THIS
3) Does the reaction have to be quenched with additional water flow though the reactor, or is reducing the hydrogen pressure enough to end the reactions on its own?
NEEDS ADDITIONAL QUENCHING
a) If reducing the hydrogen pressure (or venting it completely) is not enough to turn off the module, could it be due to the fact some hydrogen atoms are still bonded to nickel atoms, and undergoing nuclear reactions?
b) If there is some other reason why reducing hydrogen pressure is not enough to quickly turn off the module, could you please specify?
Thank you for taking the time to answer these questions, and for allowing a test to be performed that clearly shows anomalous and excess energy being produced. Hopefully, the world will notice the significance of this test.
THANK YOU VERY MUCH, AND, SINCE I HAVE ABSOLUTELY NOT TIME TO ANSWER (I MADE AN EXCEPTION FOR YOU) PLEASE EXPLAIN THAT BEFORE THE SELF SUSTAINING MODE THE REACTOR WAS ALREADY PRODUCING ENERGY MORE THAN IT CONSUMED, SO THAT THE ENERGY CONSUMED IS NOT LOST, BUT TURNED INTO ENERGY ITSELF, THEREFORE IS NOT PASSIVE. ANOTHER IMPORTANT INFORMATION: IF YOU LOOK CAREFULLY AT THE REPORT, YOU WILL SEE THAT THE SPOTS OF DRIVE WITH THE RESISTANCE HAVE A DURATION OF ABOUT 10 MINUTES, WHILE THE DURATION OF THE SELF SUSTAINING MODES IS PROGRESSIVELY LONGER, UNTIL IT ARRIVES TO BE UP TO HOURS. BESIDES, WE PRODUCED AT LEAST 4.3 kWh/h FOR ABOUT 6 HOURS AND CONSUMED AN AVERAGE OF 1.3 kWh/h FOR ABOUT 3 HOURS, SO THAT WE MADE IN TOTAL DURING THE TEST 25.8 kWh AND CONSUMED IN TOTAL DURING THE TEST 3.9 kWh. IN THE WORST POSSIBLE SCENARIO, WHICH MEANS NOT CONSIDERING THAT THE CONSUME IS MAINLY MADE DURING THE HEATING OF THE REACTOR DURING THE FIRST 2 HOURS, WE CAN CONSIDER THAT THE WORST POSSIBLE RATIO IS 25.8 : 3.9 AND THIS IS THE COP 6 WHICH WE ALWAYS SAID. OF COURSE, THE COP IS BETTER, BECAUSE, OBVIOUSLY, THE REACTOR, ONCE IN TEMPERATURE, NEEDS NOT TO BE HEATED AGAIN FROM ROOM TEMPERATURE TO OPERATIONAL TEMPERATURE.
WARMEST REGARDS TO ALL, ANDREA ROSSI
He claims that the test produced 25.8 kilowatt-hours of energy and consumed only 3.9 kilowatt-hours, not considering the losses from using two circuits of water and a heat exchanger. This would be very impressive for a system that is only using one reactor core (out of three), adjusted to produce only a fraction of its maximum potential power.
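Taking Rossi's two totals at face value, the claimed coefficient of performance is just their ratio, which is a sanity check anyone can run for themselves:

```python
# Quick check of the COP implied by Rossi's stated totals (energy out / energy in).
energy_out_kwh = 25.8
energy_in_kwh = 3.9
print(round(energy_out_kwh / energy_in_kwh, 1))  # 6.6, consistent with "COP 6" as a worst case
```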
However, from my analysis of the data so far (still trying to wrap my head around it), I have not been able to confirm his claim of a COP of 6. I am not saying it is not the case, or not in the data. I simply have yet to fully examine the data, and I am waiting for more data to be released.
Actually, I hope that someone will release all the data in one file and/or graph that will be easier to interpret. Perhaps NyTeknik, if they have not done so already, could contact Rossi or someone else who attended and recorded the data, and ask for any test data they are missing.
Bottom Line - Cold Fusion Is Here
The fact of the matter is that the October 6th test was a success in many ways.
- It documented a gain of energy.
- It documented a gain of energy in self-sustain mode.
- It documented massive "heat after death."
Most importantly, it proved beyond a doubt, that cold fusion is a reality.
Italian scientific journalist Maurizio Melis of Il Sole 24 Ore, who witnessed the test in Bologna, wrote:
"In the coming weeks Rossi aims to activate a 1MW plant, which is now almost ready, and we had the opportunity to inspect it during the demonstration of yesterday. If the plant starts up then it will be very difficult to affirm that it is a hoax. Instead, we will be projected suddenly into a new energetic era."
The test could have been made better in many ways. It had flaws. However, it was the most significant test of the E-Cat so far, for one reason in particular....
This graph shows that the E-Cat is a device producing excess energy, because the red line does not go down until after the hydrogen is vented.
- Some may legitimately argue about how much energy was produced, because we don't yet have all the test data in one easy to interpret graph or file.
- Some may point out the flaws in the test, such as the lack of a control, the lack of another several hours of operation in self sustain mode.
- Some may point out ways the test could be improved.
However, that graph by NyTeknik makes it clear the test was a success -- not a failure.
Mainstream media, your alarm clock is buzzing, it's time to wake up!
# # #
This story is also published at BeforeItsNews.
What You Can Do
- Pass this on to your friends and favorite news sources.
- Join the H-Ni_Fusion technical discussion group to explore the details of the technology.
- Once available, purchase a unit and/or encourage others who are able to do so.
- Let professionals in the renewable energy sector know about the promise of this technology.
- Subscribe to our newsletter to stay abreast of the latest, greatest developments in the free energy sector.
- Consider investing in Rossi's group once they open to that in October.
- Help us manage the PESWiki feature page on Rossi's technology.
Other PES Coverage
PESN Coverage of E-Cat
For a more exhaustive listing, see News:Rossi_Cold_Fusion
LENR-to-Market Weekly -- June 6, 2013 - EU Parliament gives LENR thumbs up
LENR-to-Market Weekly -- May 30, 2013 - additional info on the E-Cat 3rd party test report (PESN)
LENR-to-Market Weekly -- May 23, 2013 - E-Cat 3rd Party Results Posted (PESN)
E-Cat Validation Creates More Questions (PESN; May 21, 2013)
Third-Party E-Cat Test Results Posted - posted on ArXiv.org (May 20, 2013)
Interview with E-Cat Distributor License Broker, Roger Green (PESN; May 17, 2013)
LENR-to-Market Weekly -- May 9, 2013 - Interview with Rossi about recent 1 MW plant delivery
Interview with Andrea Rossi About 1 MW E-Cat Plant Delivery (PESN; May 7, 2013)
LENR-to-Market MONTHLY -- April 29, 2013 - E-Cat teases with April 30 delivery date (PESN)
LENR-to-Market Weekly -- March 28, 2013 - E-Cat 3rd-Party testing
LENR-to-Market Weekly -- March 7, 2013 - more on NASA (PESN)
LENR-to-Market Weekly -- February 21, 2013 - NASA on nuclear reactor
LENR-to-Market Weekly -- February 14, 2013 - Piantelli self-sustains
LENR-to-Market Weekly -- February 7, 2013 - CF 101 week 2 (PESN)
LENR-to-Market Weekly -- January 31, 2013 - CF 101 week 1 at MIT (PESN)
LENR-to-Market Weekly -- January 24, 2013 - Piantelli gets CF patent, Rossi rebuffs (PESN)
LENR-to-Market Weekly -- January 17, 2013 - Defkalion joint venture
LENR-to-Market Weekly -- January 10, 2013 - Rossi having problems finding certification for home application (PESN)
LENR-to-Market Weekly -- January 3, 2013 - Hot-Cat creating electromotive force? (PESN)
|
<urn:uuid:e2d43690-786e-4e65-ab06-86238be64fae>
|
CC-MAIN-2013-20
|
http://pesn.com/2011/10/08/9501929_E-Cat_Test_Validates_Cold_Fusion_Despite_Challenges/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.971945
| 5,799
| 2.71875
| 3
|
[
"climate"
] |
{
"climate": [
"renewable energy"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Photo: James Duncan Davidson
Drawn into controversy
Wearing his wide-brimmed hat, climate scientist James Hansen starts his TEDTalk by asking, ”What do I know that would cause me, a reticent midwestern scientist, to get arrested in front of the White House, protesting?”
Hansen studied under Professor James Van Allen, who told him about observations of Venus: the planet emitted intense microwave radiation because it is hot, kept that way by a thick CO2 atmosphere. Hansen was fortunate enough to join NASA and send an instrument to Venus. But while it was in transit, he became involved in calculating what the effect of the greenhouse effect would be here on Earth.
It turns out the atmosphere was changing before our eyes and, “A planet changing before our eyes is more important, it affects and changes our lives.” The greenhouse effect has been understood for a century. Infrared radiation is absorbed by a layer of gas, working like a blanket to keep heat in.
He worked with other scientists and eventually published an article in Science in 1981. They made several predictions in that paper: There would be shifting climate zones, rising sea levels, an opening of the northwest passage, and other effects. All of these have happened or are underway.
That paper was reported on the front page of the NY Times, and led to him testifying to Congress. He told them it would produce varied effects: heat waves and droughts, but also (because a warmer atmosphere holds more water vapor) more extreme rainfall, stronger storms and greater flooding.
All the global warming ‘hoopla’ became too much, and was distracting him from doing science. In addition, he was upset that the White House had altered his testimony, so he decided to leave communication to others.
The future draws him back in
The problem with not speaking was that he had two grandchildren. He realized he did not want them to say, “Opah understood what was happening, but he didn’t make it clear.”
So he was drawn more and more into the urgency.
Adding carbon to the air is like throwing a blanket on the bed. "More energy is coming in than is going out, until Earth is warm enough to radiate to space as much energy as it receives from the Sun." The key quantity is the imbalance, so they did the measurements. It turns out that the continents, to depths of tens of meters, were getting warmer, and the Earth is gaining energy as heat. That amount of energy is equivalent to dropping 400,000 Hiroshima bombs every day, 365 days a year, and there is as much warming still in the pipeline as has already occurred.
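The bomb comparison can be checked with back-of-the-envelope arithmetic. The sketch below assumes an imbalance of roughly 0.6 watts per square meter and a Hiroshima yield of about 15 kilotons of TNT; both figures are my assumptions for illustration, not numbers quoted in the talk.

```python
# Back-of-envelope check of the "400,000 Hiroshima bombs per day" comparison.
# Assumed inputs: ~0.6 W/m^2 planetary energy imbalance, ~15 kt TNT per bomb.
import math

imbalance_w_per_m2 = 0.6
earth_radius_m = 6.371e6
surface_area_m2 = 4 * math.pi * earth_radius_m ** 2     # ~5.1e14 m^2
excess_power_w = imbalance_w_per_m2 * surface_area_m2   # ~3e14 W

joules_per_day = excess_power_w * 86_400                # seconds in a day
hiroshima_joules = 15_000 * 4.184e9                     # 15 kt TNT, ~6.3e13 J
print(round(joules_per_day / hiroshima_joules))         # roughly 420,000 bombs per day
```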
If we want to restore energy balance and prevent further warming, we need to reduce the carbon levels from 391 parts per million to 350.
The arguments against
Deniers contend that it’s the sun driving this change. But Hansen notes the biggest change occurred during the low point of the solar cycle — meaning that the effect from the sun is dwarfed by the warming effect.
There are remarkable records in the Earth of what has come before, and we've studied them extensively. There is a high correlation between the overall temperature, carbon levels, and sea level. The temperature slightly leads carbon changes, by a couple of centuries. Deniers like to use that to trick the public. But these are amplifying feedbacks: even though the cycle is instigated by a small effect, it feeds on itself. More sun in the summer means that ice sheets melt, which means a darker planet, which means more warming. These amplifying feedbacks account for almost all of the paleoclimate changes.
The same amplifying feedbacks must occur today. Ice sheets will melt, and carbon and methane will be released. "We can't say exactly how fast these effects will happen, but it is certain they will occur. Unless we stop the warming."
The view of the future
Hansen presents data showing that Greenland and Antarctica are both losing mass, and that methane is bubbling from the permafrost. That does not bode well. Historically, even at today's level of carbon, the sea level was 15 meters higher than it is now. We will get at least one meter of that this century.
We will have started a process that is out of humanity's control. There will be no stable shoreline, and the economic implications of that are devastating — not to mention the spectacular loss of species. It's possible that 20-50% of all species could be extinct by the end of the century if we stay on fossil fuels.
Changes have already started. The Texas, Moscow, Oklahoma and other heat waves in recent memory were all exceptional events. There is clear evidence that these were caused by global warming.
Hansen's grandson Jake is super-enthusiastic: "He thinks he can protect his two-and-a-half-day-old little sister. It would be immoral to leave these people with a climate system spiraling out of control."
The tragedy is that we can solve this. It could be addressed by collecting a fee for carbon emissions, distributed to all residents. That would stimulate the economy and innovation, and would not enlarge the government. Instead of doing this, we are subsidizing fossil fuels by $400-500 billion per year worldwide.
This, says Hansen, is a planetary emergency, just as important as an asteroid on its way. “But we dither, taking no action to divert the asteroid, even though the longer we wait, the more difficult and expensive it becomes.”
"Now you know some of what I know that is moving me to sound this alarm. Clearly I haven't gotten this message across. I need your help. We owe it to our children and grandchildren."
|
<urn:uuid:d44ce976-2ef0-4a6c-aa1f-7b3db6a74681>
|
CC-MAIN-2013-20
|
http://blog.ted.com/2012/02/29/why-i-must-speak-out-on-climate-change-james-hansen-at-ted2012/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.976114
| 1,222
| 3.265625
| 3
|
[
"climate"
] |
{
"climate": [
"climate system",
"energy balance",
"global warming",
"methane",
"paleoclimate",
"permafrost"
],
"nature": []
}
|
{
"strong": 3,
"weak": 3,
"total": 6,
"decision": "accepted_strong"
}
|
Tata teams up with Aussies for India’s first floating solar plant
The pilot project, which is due to start operations by the end of the year, is based on Sunengy's patented Liquid Solar Array (LSA) technology, which uses traditional concentrated photovoltaic technology - a lens and a small area of solar cells that tracks the sun throughout the day, like a sunflower.
LSA inventor and Sunengy executive director and chief technology officer, Phil Connor, said that when located on and combined with hydroelectric dams, LSA provides the breakthroughs of reduced cost and "on demand" 24/7 availability that are necessary for solar power to become widely used.
Floating the LSA on water reduces the need for expensive supporting structures to protect it from high winds. The lenses submerge in bad weather and the water also cools the cells which increases their efficiency and life-span.
According to Connor, hydro power supplies 87 percent of the world's renewable energy and 16 percent of the world's power but is limited by its water resource. He said an LSA installation could match the power output of a typical hydro dam using less than 10 percent of its surface area and supply an additional six to eight hours of power per day. Modeling by Sunengy shows that a 240 MW LSA system could increase annual energy generation at the Portuguese hydro plant, Alqueva, by 230 percent.
"LSA effectively turns a dam into a very large battery, offering free solar storage and opportunity for improved water resource management," said Connor. "If India uses just one percent of its 30,000 square kilometers of captured water with our system, we can generate power equivalent to 15 large coal-fired power stations."
Construction of the pilot plant in India will commence in August 2011. Sunengy also plans to establish a larger LSA system in Australia's Hunter Valley by mid-2012 before going into full production.
|
<urn:uuid:a9635421-6f3c-47fb-b1b6-0bd49aad4d63>
|
CC-MAIN-2013-20
|
http://www.cleanbiz.asia/story/tata-teams-aussies-india%E2%80%99s-first-floating-solar-plant
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.962086
| 395
| 2.90625
| 3
|
[
"climate"
] |
{
"climate": [
"renewable energy"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Fuel cells and the electric motor are examples of highly efficient electric drive trains. Electric vehicles are expected to one day outstrip sales of combustion-engine vehicles. Innovative technologies such as fuel cells, electric motors and electric vehicles will influence our future mobility. The market for electric vehicles boasts the most potential.
Fuel cells, electric motors and electric vehicles are currently experiencing a breakthrough. Fuel cells are being used in new applications such as automobiles or laptop computers. Like electric vehicles, however, fuel cells are still in the development phase, and their potential is far from being exploited. Because a genuine fuel cell boom is anticipated, mass production is already underway. Like fuel cells, the application potential for electric motors and electric vehicles is still in its infancy. The discovery of the relationship between magnetic fields and electricity laid the foundation for the electric motor, and thus the electric vehicle. The electric motor that eventually resulted from this discovery is driven by the Lorentz force, which is the force on an electric charge as it moves through a magnetic field. The development of traditional technologies such as fuel cells and the electric motor has led to a rise in environmentally friendly electric vehicles. Hybrid vehicles, however, still dominate the market segment for environmentally friendly automobiles. Utilizing a combination of combustion and electric motors, hybrid vehicles are slimmed-down versions of the electric vehicle.
Fuel cells are based on the principle of a galvanic process. The composition of a fuel cell is influenced by both electrodes. The fuel cell energy stems from the electrode potential, which is created by the charging of the anode and cathode. The charging results in a potential difference in the fuel cell, which is eventually transformed into electric energy. From its discovery, to today's high-technology status, the fuel cell has experienced an astounding development. Fuel cells are already being used in a variety of applications today. But its impressive career is far from over. Because of their simple operation, the use of fuel cells in electric vehicles represents the market of the future.
The electric motor began as an electromechanical transformer. As the description implies, the electric motor is capable of transforming electricity into mechanical energy, which it then turns into motion. Like fuel cell technology, the electric motor is a popular drive train alternative in electric vehicles. The development of the electric motor as a drive train for electric vehicles is still a work in progress, however. The first genuine electric motor was produced as early as 1834. Today, state-of-the-art, innovative technologies are still based on discoveries made by researchers nearly 200 years ago, as illustrated by the examples of the fuel cell, electric motor and electric vehicle.
While electric motors and fuel cells were originally used in industrial machine applications, electric vehicles are the technology of the future. At the beginning of their development, electric motors were used in locomotives. Today, the focus is on the development of roadworthy electric vehicles. The key drivers of modern research into the electric vehicle are the electric motor's high degree of efficiency and low CO2 output, two factors behind current efforts to address energy resource and climate change issues. The major issue is energy storage, which is why researchers are focused primarily on this aspect. For this reason, hybrid-model electric vehicles - the combination of electric and combustion motors - are still in their infancy.
Automotive Engineering highlights issues related to automobile manufacturing - including vehicle parts and accessories - and the environmental impact and safety of automotive products, production facilities and manufacturing processes.
innovations-report offers stimulating reports and articles on a variety of topics ranging from automobile fuel cells, hybrid technologies, energy saving vehicles and carbon particle filters to engine and brake technologies, driving safety and assistance systems.
|
<urn:uuid:b0d7c4e8-3cb3-4f9b-bfa3-37fbf01e78f3>
|
CC-MAIN-2013-20
|
http://www.innovations-report.com/reports/reports_list.php?show=19&page=5
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.921333
| 1,622
| 3.25
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"co2"
],
"nature": [
"conservation"
]
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
Office of Environment and Energy
The Energy web site describes HUD energy initiatives, policies and how federal government wide energy policies affect HUD programs and assistance.
HUD faces many challenges when it comes to energy policy. For an overview, see Implementing HUD's Energy Strategy, a report to Congress dated December 2008, that includes a summary of progress toward implementing planned actions .
First, utility bills burden the poor and can cause homelessness. There is a Home Energy Affordability Gap Index based on energy bills for persons below 185 percent of the Federal Poverty Level. The gap was $34.1 billion at 2007/2008 winter heating fuel prices. The burden on the poor is more than four times the average 4 percent others pay. Twenty-six percent of evictions were due to utility cut-offs in St. Paul, MN.
Second, HUD programs are affected by energy costs. HUD's own "energy bill" - the amount that HUD spends annually on heating, lighting, and cooling its portfolio of public and assisted housing and section 8 vouchers - reached the $5 billion mark in 2007. Public Housing utilities cost more than $1 billion per year.
Third, energy costs affect economic development. Importing fuel drains millions of dollars from local economies.
Database of State Incentives for Renewables & Efficiency (DSIRE)
Information on state, local, utility and federal incentives and policies that promote renewable energy and energy efficiency from a database funded by the U.S. Department of Energy.
Edison Electric Institute’s Electric Company Programs
Information on energy efficiency and low-income assistance programs offered by various utilities across the nation.
HUD’s Public and Indian Housing Environmental and Conservation Clearinghouse
Sources of funding for energy conservation and utility cost reduction activities from HUD’s Public and Indian Housing Environmental Clearinghouse.
Promoting Energy Star through HUD’s HOME Investment Partnerships Program
Resources for promoting Energy Star through the HOME program.
Additional HUD Resources
Useful documents, publications, and information related to resource conservation in public housing from HUD’s Public and Indian Housing Environmental Clearinghouse.
Energy Star For New Construction Assisted By The Home Program
HUD has worked with EPA to promote the use of ENERGY STAR standards in construction of houses. Here are the results of that production by the HOME Program for Fiscal Year 2009.
Energy Star for Grantees
Energy Efficiency with CDBG HOME
Energy Star Awards for Affordable Housing
Regional Energy Coordinators reviewed applications for Energy Star Awards for Affordable Housing in 2008, 2009, and 2010.
HUD CHP Screening Tools
HUD's 2002 Energy Action Plan committed HUD to promote the use of combined heat and power (CHP, or cogeneration) in housing and community development. HUD developed a Q Guide explaining CHP to building owners and managers. HUD and DOE's Oak Ridge National Laboratory then developed a Level 1 feasibility screening software tool that enables them to quickly get a rough estimate of the cost, savings and payback for installing CHP. The Level 1 screening tool requires only monthly utility bills and a little information about the building and its occupants.
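As a rough illustration of the kind of arithmetic a screening estimate involves (this is not the HUD/ORNL tool, and every number below is an assumed placeholder), a simple payback figure can be built from annual electricity use and a handful of CHP parameters:

```python
# Toy CHP screening sketch (NOT the HUD/ORNL Level 1 tool): estimates simple
# payback from annual electricity use and assumed CHP parameters. The waste-heat
# credit for offsetting boiler fuel is omitted for brevity, so savings are understated.
def simple_payback_years(annual_elec_kwh, elec_price_per_kwh, gas_price_per_therm,
                         chp_kw, capital_cost_per_kw=2500.0,
                         heat_rate_btu_per_kwh=9000.0, run_hours=8000,
                         o_and_m_per_kwh=0.02):
    chp_kwh = chp_kw * run_hours
    avoided_purchases = min(chp_kwh, annual_elec_kwh) * elec_price_per_kwh
    fuel_cost = chp_kwh * heat_rate_btu_per_kwh / 100_000 * gas_price_per_therm  # therms of gas
    net_savings = avoided_purchases - fuel_cost - chp_kwh * o_and_m_per_kwh
    return None if net_savings <= 0 else (chp_kw * capital_cost_per_kw) / net_savings

# Example: a 100 kW unit in a building using 1.2 million kWh per year.
print(simple_payback_years(annual_elec_kwh=1_200_000, elec_price_per_kwh=0.14,
                           gas_price_per_therm=1.10, chp_kw=100))
```

The actual Level 1 tool works from twelve monthly bills rather than a single annual total, but the basic trade-off it screens for is the same: avoided electricity purchases against added fuel, maintenance, and capital cost.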
HUD and ORNL have now produced a Level 2 Combined Heat and Power (CHP) analysis tool for more detailed analysis of the potential for installing combined heat and power (cogeneration) in multifamily buildings. Level 2 works from hourly utility consumption and detailed information about the building and its equipment.
ORNL Level 2 Tool
"HUD CHP Guide #3 Introduction to the Level 2 Analysis for Combined Heat and Power in Multifamily Housing" explains how it was developed and provides links to ORNL for downloading the tool, its Users' Manual and training material. It also provides an exercise to demonstrate how it works. The tool is complex and calls for analysis by those with advanced ability to understand building energy use and simulation.
Green Homes and Communities
This website has very good energy information, including references to Sustainable Communities, DOE EECBG funding etc.
Energy Efficiency in CPD Programs
- See Table on page 13 for planned actions developed by the Energy Task Force.
- See Fisher, Sheehan and Colton, On the Brink 2008; The Home Energy Affordability Gap
- See Table B-1, in "Implementing HUD's Energy Strategy"
- See Energy and Economic Development Phase I | Phase II.
|
<urn:uuid:a2370072-c634-4186-b06e-2fd1ba0e0385>
|
CC-MAIN-2013-20
|
http://portal.hud.gov/hudportal/HUD?src=/program_offices/comm_planning/library/energy
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.915712
| 901
| 2.65625
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"renewable energy"
],
"nature": [
"conservation"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
How will the technology and policy changes now sweeping through the industry affect the architecture of the utility grid? Will America build an increasingly robust transmission infrastructure, or...
Unforeseen consequences of dedicated renewable energy transmission.
Growth in renewable electricity (RE) generation will require major expansion of electricity transmission grids, and in the U.S. this could require building an additional 20,000 miles of transmission over the next decade—double what’s currently planned. To facilitate this, government policymakers are planning to build what are sometimes called “green” transmission lines that are restricted to carrying electricity generated by renewable sources, primarily wind and solar.
However, state and local jurisdictions are resisting siting of transmission unless it serves local constituents and existing power plants. If such transmission is built and local access is allowed, then the major beneficiaries of the added transmission might be existing power generation facilities, especially coal plants. Many of these facilities have very low electricity generating costs and their capacity factors are transmission-constrained. Their access to added transmission lines could enable them to sell electric power at rates against which RE can’t compete.
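To see why transmission access matters so much to an existing coal plant, consider a rough sketch of the extra generation unlocked when a transmission constraint is relaxed. The capacity factors and cost figures below are placeholders chosen for illustration, not values from the JP Morgan study.

```python
# Illustrative only: extra annual output for a transmission-constrained coal plant
# whose capacity factor rises once new lines are available, and the cost gap
# it would enjoy over new renewable generation needing the same lines.
HOURS_PER_YEAR = 8760

def extra_generation_mwh(capacity_mw, cf_before, cf_after):
    return capacity_mw * HOURS_PER_YEAR * (cf_after - cf_before)

plant_mw = 800
extra_mwh = extra_generation_mwh(plant_mw, cf_before=0.60, cf_after=0.75)
coal_marginal_cost = 25.0   # $/MWh, assumed low variable cost for a paid-off coal plant
wind_lcoe = 70.0            # $/MWh, assumed levelized cost for new wind on the same lines
print(f"extra output: {extra_mwh:,.0f} MWh/yr; "
      f"coal can undercut new wind by ${wind_lcoe - coal_marginal_cost:.0f}/MWh")
```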
20,000 Miles of Wire
JP Morgan studied a possible federal renewable energy standard (RES) and its impact on the growth rate of RE.[1] We used JP Morgan data to estimate the potential impact of an RES and the transmission required to facilitate it on the existing fleet of power plants. The analysis focused primarily on coal plants because they can increase their capacity factors, whereas U.S. nuclear plants already have capacity factors above 90 percent. Given the location of the coal plants throughout the U.S. and their current capacity factors, we estimated the impact of expanded electricity transmission lines on RE generation and costs and on conventional electricity generation and costs.
The locations of the RE central station technologies and their distances from major load centers largely determine the new transmission that will be required. Geothermal will be installed in a small number of Western states,[2] while biomass will be installed primarily in the northern Great Plains, the Pacific Northwest, and perhaps parts of the South. Solar thermal (ST) and photovoltaics (PV) will be installed in some Western and Southwestern states, and wind will be installed primarily in the northern Great Plains.
The major load centers are primarily metropolitan areas in the coastal states, the Boston-Washington corridor, the West Coast corridor, and major Midwestern cities. In general, increased transmission capability is desirable, because a robust interstate electric transmission system is in everyone’s interest—consumers, power producers, and governments. An expanded transmission network will allow for power system growth, provide greater flexibility in expanding generation at existing plant sites, and facilitate construction of new generating plants at optimal locations.
However, there’s a mismatch between RE resources and load centers: Most of the best RE sites are west of the Mississippi river, but most of the load centers are east of the river or on the West Coast. Even West Coast load centers are far from the best RE sites. We estimated how much new transmission needs to be built to
|
<urn:uuid:be40a391-ba3c-4564-a4eb-3d42885d4685>
|
CC-MAIN-2013-20
|
http://www.fortnightly.com/fortnightly/2012/02/not-so-green-superhighway
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.928116
| 625
| 2.953125
| 3
|
[
"climate"
] |
{
"climate": [
"renewable energy"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
A new carbon cycle model developed by researchers in Europe indicates that global carbon emissions must start dropping by no later than 2015 to prevent the planet from tipping into dangerous climate instability.
The finding is likely to put new pressure on the world’s top two carbon emitters — China and the US — both of which were widely blamed for failure to reach a binding global accord on carbon reductions in Copenhagen last December. Furthermore, the non-binding outcome of Copenhagen has global carbon emissions peaking in 2020 — five years too late, according to the latest model.
The model, developed by researchers at Germany’s Max Planck Institute for Meteorology, suggests the world’s annual carbon emissions can reach no more than 10 billion tonnes in five years’ time before they must be put on a steady downward path. After that, the researchers say, emissions must drop by 56 per cent by mid-century and need to approach zero by 2100.
Those targets are necessary to prevent average global temperatures from rising by more than 2 degrees C by 2100. Under that scenario, though, further warming can still be expected for years to come afterward.
“It will take centuries for the global climate system to stabilise,” says Erich Roeckner, a researcher at the Max Planck Institute.
The new model is the first to pinpoint the extent to which global carbon emissions must be cut to prevent dangerous climate change. Since the beginning of the Industrial Revolution, atmospheric concentrations of carbon dioxide have risen by 35 per cent, to around 390 parts per million today. Stabilising the climate will require concentrations to climb to no higher than 450 parts per million.
“What’s new about this research is that we have integrated the carbon cycle into our model to obtain the emissions data,” Roeckner says.
|
<urn:uuid:3c48599d-00d0-4bef-8ac0-94d34a0f8472>
|
CC-MAIN-2013-20
|
http://www.greenbang.com/clocks-ticking-carbon-emissions-must-peak-by-2015_14888.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.934462
| 371
| 3.9375
| 4
|
[
"climate"
] |
{
"climate": [
"carbon dioxide",
"climate change",
"climate system"
],
"nature": []
}
|
{
"strong": 2,
"weak": 1,
"total": 3,
"decision": "accepted_strong"
}
|
Pacific ecosystem: An ochre sea star makes a kelp forest home in Monterey Bay, California.
Image courtesy of Kip F. Evans
Ocean Literacy - Essential Principle 5
The ocean supports a great diversity of life and ecosystems.
Fundamental Concept 5a.
Ocean life ranges in size from the smallest virus to the largest animal that has lived on Earth, the blue whale.
Fundamental Concept 5b.
Most life in the ocean exists as microbes. Microbes are the most important primary producers in the ocean. Not only are they the most abundant life form in the ocean, they have extremely fast growth rates and life cycles.
Fundamental Concept 5c.
Some major groups are found exclusively in the ocean. The diversity of major groups of organisms is much greater in the ocean than on land.
Fundamental Concept 5d.
Ocean biology provides many unique examples of life cycles, adaptations and important relationships among organisms (symbiosis, predator-prey dynamics and energy transfer) that do not occur on land.
Fundamental Concept 5e.
The ocean is three-dimensional, offering vast living space and diverse habitats from the surface through the water column to the seafloor. Most of the living space on Earth is in the ocean.
Fundamental Concept 5f.
Ocean habitats are defined by environmental factors. Due to interactions of abiotic factors such as salinity, temperature, oxygen, pH, light, nutrients, pressure, substrate and circulation, ocean life is not evenly distributed temporally or spatially, i.e., it is “patchy”. Some regions of the ocean support more diverse and abundant life than anywhere on Earth, while much of the ocean is considered a desert.
Fundamental Concept 5g.
There are deep ocean ecosystems that are independent of energy from sunlight and photosynthetic organisms. Hydrothermal vents, submarine hot springs, methane cold seeps, and whale falls rely only on chemical energy and chemosynthetic organisms to support life.
Fundamental Concept 5h.
Tides, waves and predation cause vertical zonation patterns along the shore, influencing the distribution and diversity of organisms.
Fundamental Concept 5i.
Estuaries provide important and productive nursery areas for many marine and aquatic species.
|
<urn:uuid:449397e8-9b26-4af0-be64-67eab4f80091>
|
CC-MAIN-2013-20
|
http://www.windows2universe.org/teacher_resources/main/frameworks/ol_ep5.html&lang=sp
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.89608
| 855
| 3.859375
| 4
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"greenhouse gas",
"methane"
],
"nature": [
"ecosystem",
"ecosystems"
]
}
|
{
"strong": 5,
"weak": 0,
"total": 5,
"decision": "accepted_strong"
}
|
From Sioux Falls to Rapid City, fire weather meteorologists are watching conditions closely. And until we receive widespread, heavy moisture they'll be monitoring what is known as the Keetch-Byram Drought Index or KBDI.
It measures the amount of precipitation needed to return the soil to full saturation. The index runs from zero to 800, representing a soil moisture deficit of zero to eight inches of water; a reading of zero means the soil is fully saturated.
Much of KELOLAND is at 500 or above. The KBDI of 400 to 600 is typical of late summer and early fall.
When it gets to 600 and above, intense, deep-burning fires can be expected, with new fires likely to ignite downwind.
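For readers who want to see how weather drives the index, below is a sketch of the standard daily Keetch-Byram update, written from memory of the published 1968 formulation; treat the constants as assumptions to verify against an authoritative source before relying on them.

```python
# Sketch of the daily Keetch-Byram Drought Index (KBDI) update. Q is the index,
# 0-800, in hundredths of an inch of soil moisture deficit. Constants follow the
# standard published formulation as recalled here -- verify before operational use.
import math

def kbdi_step(q, max_temp_f, net_rain_in, mean_annual_rain_in):
    """One day's update. net_rain_in is rainfall after the standard 0.20 inch
    per-wet-spell deduction (that bookkeeping is omitted here for brevity)."""
    q = max(0.0, q - 100.0 * net_rain_in)          # rain pays down the deficit
    drought_factor = ((800.0 - q)
                      * (0.968 * math.exp(0.0486 * max_temp_f) - 8.30)
                      * 1e-3
                      / (1.0 + 10.88 * math.exp(-0.0441 * mean_annual_rain_in)))
    return min(800.0, q + max(0.0, drought_factor))

# A hot, dry day in a 24-inch annual rainfall climate nudges a 500 reading upward.
print(round(kbdi_step(500.0, max_temp_f=92, net_rain_in=0.0, mean_annual_rain_in=24)))
```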
The highest spots are in south central, north central and northeast South Dakota. Just this week, the area near Lake Andes is also considered at 800.
It's an important number to know this time of year, whether you're harvesting or off road for hunting.
© 2012 KELOLAND TV. All Rights Reserved.
|
<urn:uuid:18ab9e3d-e911-44bd-a8d5-0c3c3e67c967>
|
CC-MAIN-2013-20
|
http://www.keloland.com/newsdetail.cfm/fire-weather-index/?id=137711
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.950935
| 229
| 2.546875
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
U.S. Power Plant Carbon Emissions Zoom in 2007
WASHINGTON, DC, March 18, 2008 (ENS) – The biggest single year increase in greenhouse gas emissions from U.S. power plants in nine years occurred in 2007, finds a new analysis by the nonprofit, nonpartisan Environmental Integrity Project. The finding of a 2.9 percent rise in carbon dioxide emissions over 2006 is based on an analysis of data from the U.S. Environmental Protection Agency.
Now the largest factor in the U.S. contribution to climate change, the electric power industry’s emissions of carbon dioxide, CO2, have risen 5.9 percent since 2002 and 11.7 percent since 1997, the analysis shows.
Texas tops the list of the 10 states with the biggest one-year increases in CO2 emissions, with Georgia, Arizona, California, Pennsylvania, Michigan, Iowa, Illinois, Virginia and North Carolina close behind.
The top three states – Texas, Georgia and Arizona – had the greatest increases in CO2 emissions on a one, five and 10 year basis.
TXU's coal-fired Martin Lake power plant in east Texas (photo)
Director of the Environmental Integrity Project Eric Schaeffer said, “The current debate over global warming policy tends to focus on long-term goals, like how to reduce greenhouse gas emissions by 80 percent over the next 50 years. But while we debate, CO2 emissions from power plants keep rising, making an already dire situation worse.”
“Because CO2 has an atmospheric lifetime of between 50 and 200 years, today’s emissions could cause global warming for up to two centuries to come,” he warned.
Data from 2006 show that the 10 states with the least efficient power production relative to resulting greenhouse gas emissions were North Dakota, Wyoming, Kentucky, Indiana, Utah, West Virginia, New Mexico, Colorado, Missouri, and Iowa.
The report explains why national environmental groups are fighting to stop the construction of new conventional coal-fired power plants, which they say would make a bad situation worse.
“For example” the report points out, “the eight planned coal-fired plants that TXU withdrew in the face of determined opposition in Texas would have added an estimated 64 million tons of CO2 to the atmosphere, increasing emissions from power plants in that state by 24 percent.”
Some of the rise in CO2 emissions comes from existing coal fired power plants, the analysis found, either because these plants are operating at increasingly higher capacities, or because these aging plants require more heat to generate electricity. “For example, all of the top 10 highest emitting plants in the nation, either held steady or increased CO2 output from 2006 to 2007.”
Robert W Scherer Power Plant is a coal-fired plant just north of Macon, Georgia. It emits more carbon dioxide than any other point in the United States. (Photo credit unknown)
Georgia Power’s Scherer power plant near Macon, Georgia is the highest emitting plant in the nation. It pumped out 27.2 million tons of CO2 in 2007, up roughly two million tons from the year before.
In view of these facts, the Environmental Integrity Project recommends that the nation’s oldest and dirtiest power plants should be retired, and replaced with cleaner sources of energy. That will require accelerating the development of wind power and other renewable sources of energy.
Another good solution is cutting greenhouse gases quickly by reducing the demand for electricity, the authors advise. Smarter building codes, and funding low-cost conservation efforts, such as weatherization of low-income homes, purchase and installation of more efficient home and business appliances will reduce demand and yield greenhouse gas benefits.
Texas tops every state measurement in the report from the most carbon dioxide measured in total tons to the largest increases in CO2 emissions over the last five years between 2002 and 2007.
Ken Kramer, director of the Lone Star chapter of the Sierra Club based in Austin, Texas, says his state not only has more emissions than any other state – it has solutions to offer, such as a recent boom in wind power installations.
“The bad news is that Texas is #1 in carbon emissions among the 50 states, and our emissions have grown in recent years,” Kramer said. “The good news is that Texas has the potential to play a major role in addressing global warming if we embrace smart energy solutions such as energy efficiency and renewable energy, solutions which pose tremendous economic as well as environmental benefits.”
In Des Moines, Mark Kresowik, Iowa organizer of the Sierra Club’s National Coal Campaign, said, “Energy efficiency and renewable energy are powering a renaissance in rural Iowa and creating thousands of new manufacturing jobs for our state. By rejecting coal plants and reducing pollution through energy efficiency and renewable energy our states will prosper and attract new businesses and young workers for the future.”
The consumption of electricity accounted for more than 2.3 billion tons of CO2 in 2006, or more than 39.5 percent of total emissions from human sources, according to the U.S. Department of Energy. Coal-fired power plants alone released more than 1.9 billion tons, or nearly one third of the U.S. total.
The Department of Energy projects that carbon dioxide emissions from power generation will increase 19 percent between 2007 and 2030, due to new or expanded coal plants.
An additional 4,115 megawatts of new coal-fired generating capacity was added between 2000 and 2007, with another 5,000 megawatts expected by 2012.
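As a rough arithmetic check on the figures above, the snippet below applies the DOE's projected 19 percent increase between 2007 and 2030 to the roughly 2.3 billion tons attributed to electricity consumption, and converts it into an implied average annual growth rate. This is a back-of-the-envelope sketch using only numbers quoted in this article (and treating the 2006 consumption figure as the starting point for illustration), not an official DOE calculation.

```python
# Back-of-the-envelope sketch using figures quoted in the article.
baseline_tons = 2.3e9        # CO2 from U.S. electricity consumption, 2006 (per DOE, as cited above)
projected_increase = 0.19    # DOE projection: +19% between 2007 and 2030
years = 2030 - 2007

projected_tons = baseline_tons * (1 + projected_increase)
annual_growth = (1 + projected_increase) ** (1 / years) - 1

print(f"Implied 2030 level: {projected_tons / 1e9:.2f} billion tons")
print(f"Implied average annual growth: {annual_growth * 100:.2f}% per year")
```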
|
<urn:uuid:dc1a8aec-de58-4413-8512-51862619d9e3>
|
CC-MAIN-2013-20
|
http://www.sundancechannel.com/blog/2008/03/us-power-plant-carbon-emissions-zoom-in-2007/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.936904
| 1,150
| 3.25
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"climate change",
"co2",
"global warming",
"greenhouse gas",
"renewable energy"
],
"nature": [
"conservation"
]
}
|
{
"strong": 7,
"weak": 0,
"total": 7,
"decision": "accepted_strong"
}
|
Humans are intimately connected with the physical environment, even though we have only been present for a fraction of the vast history of the Earth. We strive to understand how the Earth has evolved since its formation over 4 billion years ago and what types of processes have fostered these changes. Our knowledge of the Earth is critical, not only for piecing together its history, but also to aid in the understanding of issues relevant to our present-day lives, such as: availability of natural resources, pollution, climate change, and natural hazards.
During this course, we will perform a general survey of the physical Earth. We will examine the minerals and rocks of which the solid Earth is composed, the processes that generate Earth's landforms, natural hazards associated with geologic processes, geologic time, and surface processes (e.g., glaciers, streams, groundwater).
Final Exam (May 17: 10:30-12:30)
The final exam will be on materials presented during the final quarter of the course (lectures and materials in chapters 16, 15, 18, 19, 12). The exam will be comprehensive and will include material from the entire course. To get the most recent copies of the study guides click at left.
The field trip to Great Falls (MD) took place on Saturday April 22. About 100 students attended on a morning that set a record for rainfall at Dulles Airport (over 3 inches of rain). The field trip was teh only opportunity for extra credit in the course. To see some photos from previous trips, click here.
|
<urn:uuid:a91b04b9-4328-4883-b634-7987811bb953>
|
CC-MAIN-2013-20
|
http://www.geol.umd.edu/~piccoli/geol100-2006/100.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.943719
| 316
| 3.515625
| 4
|
[
"climate"
] |
{
"climate": [
"climate change"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Original URL: http://www.theregister.co.uk/2006/08/21/mars_geysers/
Martian pole freckled with geysers
Unlike anything on Earth
Every spring, the southern polar cap on Mars almost fizzes with carbon dioxide, as the surface is broken by hundreds of geysers throwing sand and dust hundreds of feet into the Martian "air".
The discovery was announced in the journal Nature by researchers at the Arizona State University, based on data from the Thermal Emission Imaging System on the Mars Odyssey orbiter.
Images sent back by the probe showed that as the sun began to warm the pole, the polar cap began to break out in dark spots. Over the days and weeks that followed, these spots formed fan-like markings, and spidery patterns. As the sun rose higher in the Martian sky, the spots and fans became more numerous.
"Originally, scientists thought the spots were patches of warm, bare ground exposed as the ice disappeared," said lead scientist Phil Christensen. "But observations made with THEMIS on NASA's Mars Odyssey orbiter told us the spots were nearly as cold as the carbon dioxide ice, which is at minus 198 degrees Fahrenheit."
The team concluded that the dark spots were in fact geysers, and the fans that appeared were caused by the debris from the eruptions.
Christensen said: "If you were there, you'd be standing on a slab of carbon-dioxide ice. Looking down, you would see dark ground below the three foot thick ice layer.
"The ice slab you're standing on is levitated above the ground by the pressure of gas at the base of the ice."
He explains that as the sunlight hits the region in the spring, it warms the dark ground enough that the ice touching the ground is vaporised. The gas builds up under the ice until it is highly pressurised and finally breaks through the surface layer.
As the gas escapes, it carries the smaller, finer particles of the soil along with it, forming grooves under the ice. This "spider" effect indicates a spot where a geyser is established, and will form again the following year. ®
|
<urn:uuid:4bfc6c68-3f68-4e10-b04d-51c5dc0dab2e>
|
CC-MAIN-2013-20
|
http://www.theregister.co.uk/2006/08/21/mars_geysers/print.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.971399
| 454
| 3.140625
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
No flood, no money. What a trade!
When asked about his future, Nguyen Van Nghe, a fisherman in Dong Thap Province answered: “No flood, no money. What a trade!” He was referring to the fact that over the last 10 years the construction of high dykes in Dong Thap and the Long Xuyen Quadrangle has blocked the annual replenishment of freshwater, nutrients, and sediment on which fish depend. In turn, this has destroyed the wild fisheries on which the landless, including Nghe, who make up 20% of the Mekong Delta’s population, depend. Wild fish production has declined by 40% over the last 10 years and species that were once used as fertilizer now sell for VND180,000/kg. The predatory snakehead, which once occupied the top of the aquatic food chain, has disappeared. The high dykes have also greatly reduced the annual flushing. This has resulted in the accumulation of pathogens and toxins in the surface water and growing public health problems.
High dykes were built to allow a third, or autumn-winter, rice crop to be grown because of the high prices this off-season crop can fetch on the international market. In 2011, 560,000 hectares of autumn-winter rice were planted, up from 520,000 hectares in 2010. But because of the loss of sediment, rice productivity can only be maintained through the heavy use of fertilizer. Nguyen Huu Thien, a wetland specialist based in Can Tho, questions whether the third rice crop is profitable once you take into account the increased use of fertilizer and pesticide, the cost of dyke maintenance, the loss of wild fish, and, inevitably, the cost of dyke failure: when dykes failed in 2011, 50 people were killed and tens of millions of dollars of houses, roads, and other infrastructure was destroyed. The intensification of rice production has also resulted in the virtual extinction of the traditional long-stem floating rice varieties, which in Brazil sell for $3,500/ton, almost ten times the price of autumn-winter rice.
Dr. Ngo Van Be of the Dong Thap Muoi Institute of Research and Development says that the floods that used to be “mild” are now “fierce” and unpredictable. In hydrological terms, what the high dykes have done is to separate the Mekong River from its 1.5 million-hectare floodplain. According to Dr. Le Anh Tuan of Can Tho University, these dykes have narrowed the floodplain during the peak October-November flood from 150 kilometers to a few tens of kilometers. This has accelerated the water flow and displaced flooding to residential areas downstream. Reduction of the flooded area has also reduced groundwater recharge, reduced river base flows, and increased dry season saline intrusion, which increases the cost of drinking water supplies. The violent floods of 2011 call into question the value of the third rice crop and instead argue for a more natural hydrology that provides multiple benefits, including greater resilience to climate change, which is likely to result in more intense rainfalls and flash flooding.
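The hydraulic intuition behind the "narrower floodplain, faster flow" point can be made explicit with the simple continuity relation below. This is an illustrative simplification, not a result from the studies cited above:

$$Q = v \cdot A \quad\Rightarrow\quad \frac{v_{\text{new}}}{v_{\text{old}}} = \frac{A_{\text{old}}}{A_{\text{new}}}$$

For a roughly constant flood discharge $Q$, squeezing the flow from an effective width of about 150 kilometers into a few tens of kilometers (with comparable depth) implies several-fold faster velocities, pushing floodwater harder toward downstream residential areas.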
To learn more about these issues, watch this 30-minute film produced by the Center for Water Resources Conservation and Development (WARECOD) and VTC16.
|
<urn:uuid:b81f265f-7682-429a-b24e-792b78f2ff29>
|
CC-MAIN-2013-20
|
http://cms.iucn.org/about/union/secretariat/offices/asia/asia_where_work/vietnam/?9560/No-flood-no-money--What-a-trade&add_comment
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.95582
| 679
| 2.71875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change"
],
"nature": [
"conservation",
"wetland"
]
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
Do you cringe at the word "xeriscape"? Does that mean boring, thin-leaved, un-colorful plants to you? Well, think again. Xeriscape gardening can look lush, colorful and be a snap to maintain. To help people learn how to have a beautiful garden while being water conscious, Colorado Springs Utilities has created two xeriscaped demonstration gardens for denizens to learn about and apply water thriftiness in their own gardens. The display at these gardens provides a lesson for everyone, even if you don't live in a drought prone area. Can you imagine not having to water much *at all* between rains?
(Editor's Note: This article was originally published on September 1, 2008. Your comments are welcome, but please be aware that authors of previously published articles may not be able to promptly respond to new questions or comments.)
Even if you don't live in Colorado Springs, the Rocky Mountains, or even in a dry climate, the suggestions from these demonstration gardens will help you save time watering and money on your next water bill. Colorado and areas like it have wonderfully dry climates due to low humidity and unfortunately sometimes high winds. For most gardeners, this means both dry skin and dry soil. Besides a good moisturizer, here's how to cope. Plant water-wise plants and irrigate intelligently!
Once the plants on this list get established (one-two weeks) they don't need much water, if any, at all. With the growing water shortage in Colorado and many other states, this is an important feature for sustainability in the coming years. The Mesa Road Xeriscape Garden (one of CSU's featured demonstration gardens) is actually a lush, colorful and peaceful place that requires very little water. Come along for a tour and a xeriscaping lesson, Colorado style.
Why should you xeriscape?
The biggest and most widely misunderstood lesson of xeriscaping is planting in water-usage zones, or hydrozones. That is, putting plants that need lots of water next to ones that need lots of water, and grouping plants that don't need much water with their like. Seems logical, right? If you put an iris next to a nasturtium, you are bound to do one of three things: overwater the iris, underwater the nasturtium, or kill both. It just makes sense to put your irises with your Blanketflowers and your Nasturtium next to your cannas. That doesn't sound all that boring, right?
When you really get down to it though, xeriscaping does mean planting flora that does not require much additional water than the average rainfall of your area. Once established, they should practically grow themselves.
CSU (the utilities company, not the university) has created two demonstration gardens to showcase how good xeriscaping can look: the Mesa Xeriscape Demonstration Garden and the Cottonwood Creek Park Xeriscape Garden in Colorado Springs. Along with some planting tips and the right plants, xeriscaping never has to be humdrum.
Bee Balm - Monarda
Blanketflower - Gaillardia
Texas Red Yucca surrounded by other colorful xerics
Lavender and Creeping Thyme
Plants you'll find at the Mesa Xeriscape Demonstration Garden (and you should try!)
Silver Blade Evening Primrose
Dwarf Garden Phlox
Rocky Mountain Sumac
Mullein 'Southern Charm'
Mexican Feather Grass
Dwarf Goldenrod 'Goldenbaby'
Blue Mist Spirea
Pale Purple Coneflower
Gray Creeping Germander
California Fuchsia 'Orange Carpet'
Some more hardy xeric plants to check out:
Red Hot Poker
Autumn Joy Sedum
The Mesa Garden also has a demonstration rock garden, which usually are made up of small xeric plants. Some great xeric rock garden plants are sedums, Penstemon, Dwarf Barberry, Edelweiss, Lamb's Ear, Oregano, Hens and Chicks, Rupturewort, and Cattail Iris. The list of attractive rock garden plants is just about endless.
For more information on fantastic, hardy xeric plants, here is a great link:
Xeric gardening does not have to be colorless or boring as you can see from all of these beautiful flowers showcased at the Mesa Xeriscape Demonstration Garden. Even if you just put in a few xerics, you can benefit from the decreased water usage.
All photos taken at Mesa Xeriscape Demonstration Garden, Colorado Springs, Colorado. Copyrighted to Susanne and Kyle Talbert
About Susanne Talbert
I garden in beautiful Colorado Springs, half a mile from Garden of the Gods. Since we bought our first house two years ago, I have been busy revamping my 1/4 acre of ignored decomposed granite.
My garden passions include water gardening, vines, super-hardy perennials, and native xerics. By day, I am a high school ceramics teacher as well as a ceramicist and painter.
|
<urn:uuid:54856022-11e3-4c4c-ac3d-420d9b2f0f8e>
|
CC-MAIN-2013-20
|
http://davesgarden.com/guides/articles/view/1438/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.902786
| 1,078
| 2.71875
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Guest blogger Jon Bowermaster is a writer and filmmaker. His most recent documentary is "SoLa, Louisiana Water Stories" and his most recent book is OCEANS, The Threats to the Sea and What You Can Do To Turn the Tide.
Typically at this time of year a certain breed of shopper purposefully wanders the fish stalls of their favorite grocer taking stock of the piles of fresh oysters carefully arranged on crushed ice or to pick up and judge the heft in their hands of tightly packed tins of caviar, which sell for anywhere from $50 to $2,000.
But maybe this is the year to lay off those two favored treats and replace them with something slightly less traditional: squid.
I know, a big bowl of calamari hardly compares to one of caviar… but, man, there’s a lot of squid out there these days. I’m sure some of those very popular sustainable fish chefs have already dreamed up some special calamari entrée to take advantage of the boom.
How much squid is out there? It's estimated that the total mass of squid worldwide outweighs the human population. And that's with sperm whales alone munching down more than 100 million tons of squid each year. Squid are one of the most important foods in the ocean, along with other prey like sardines and pollock. Whales and seabirds depend on abundant squid to raise their young, particularly during breeding season.
Along the coast of California, the market squid season has been so abundant the state Department of Fish and Game reports its annual limit of 118,000 tons has already been taken and the squid season is now closed until March 31. Marine biologists credit a rush of colder-than-normal water for the banner year; usually February is prime time.
At the same time, certain squid are booming thanks to a slight warming of sea temperatures, in places like Alaska and Siberia. Many squid, octopuses and other sucker-bearing members of the cephalopod family don't appear to be too troubled by the minor increase. In fact, when it's a little warmer, some thrive. The populations are thought to be exploding because of the overfishing of other fish that used to dine on young squid.
Warmer waters can help squid “balloon” in size because their enzymes work faster when warm. A young giant squid can grow from 2 millimeters to a meter in a single year, the equivalent of a human baby growing to the size of a whale in twelve months.
There’s also been a boom in Humboldt squid along the Pacific coastline ranging from Peru to California, now expanding northward to places they’ve never been seen before. The big tentacled variety can grow more than seven feet long and weigh more than one hundred pounds. A feisty fish, once on the line, the big squids can be slightly dangerous to haul into your boat. They have a nasty, pecking beak, like to spray black ink and have the ability to expel up to two gallons of water into the faces of unsuspecting fishermen (“like a giant squirt gun”).
A downside to the boom in giant squid is that they also have giant appetites, which means they are making a big hit on salmon, for example, thus reducing the amount of the pink fleshy fish for human tables.
The giant squid are also proving to be a menace to divers, being both aggressive and carnivorous, a mean combo when the tentacles of one of the rust-colored, six-foot long creatures latches onto your air tank, or leg.
Editor’s note: It’s typical for squid and other prey species to boom and bust in response to changing environmental conditions. Oceana is working to establish regulations that adapt fishing to this rollercoaster, fishing more during the boom and cutting back during lean times to protect food for whales. As an added complexity, cephalopods are seeing their numbers expand as the fishing industry captures more and more of the big predatory fish that eat squid, such as tuna.
- Stocks Show Signs of Recovery, But Still Work to Do Posted Fri, May 17, 2013
- What Do Historic CO2 Levels Mean for the Oceans? Posted Tue, May 14, 2013
- U.S. Coast Guard Captures Illegal Fishermen in Texas Posted Tue, May 14, 2013
- Victory! Delaware Becomes Seventh State in U.S. to Ban Shark Fin Trade! Posted Thu, May 16, 2013
- It's Endangered Species Day! Posted Fri, May 17, 2013
|
<urn:uuid:e36547a6-f5b7-4076-9f86-3bd9dd865431>
|
CC-MAIN-2013-20
|
http://oceana.org/en/blog/2011/01/guest-post-boom-times-for-squid
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.954545
| 964
| 2.8125
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"co2"
],
"nature": [
"endangered species"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Climate change is the defining human development challenge of the 21st Century. Failure to respond to that challenge will stall and then reverse international efforts to reduce poverty. The poorest countries and most vulnerable citizens will suffer the earliest and most damaging setbacks, even though they have contributed least to the problem. Looking to the future, no country—however wealthy or powerful—will be immune to the impact of global warming.
Climate change is not just a future scenario. Increased exposure to droughts, floods and storms is already destroying opportunity and reinforcing inequality. Meanwhile, there is now overwhelming scientific evidence that the world is moving towards the point at which irreversible ecological catastrophe becomes unavoidable. Business-as-usual climate change points in a clear direction: unprecedented reversal in human development in our lifetime, and acute risks for our children and their grandchildren.
In a divided but ecologically interdependent world, it challenges all people to reflect upon how we manage the environment of the one thing that we share in common: planet Earth. It challenges us to reflect on social justice and human rights across countries and generations. It challenges political leaders and people in rich nations to acknowledge their historic responsibility for the problem, and to initiate deep and early cuts in greenhouse gas emissions. Above all, it challenges the entire human community to undertake prompt and strong collective action based on shared values and a shared vision.
The IPCC 2000 report, based on the work of some 2,500 scientists in more than 130 countries, concluded that humans have caused all or most of the current planetary warming. Human-caused global warming is often called anthropogenic climate change.
Industrialization, deforestation, and pollution have greatly increased atmospheric concentrations of water vapor, carbon dioxide, methane, and nitrous oxide, all greenhouse gases that help trap heat near Earth’s surface. Humans are pouring carbon dioxide into the atmosphere much faster than can be absorbed by plants and oceans.
It is expected that environmentally friendly methods of electricity generation will help to lower the CO2 produced during power generation. Some of these methods include solar and wind energy power generators.
THE SUN IS NOT TO BLAME
It’s global warming, it’s climate change, it’s our fault, no it’s volcanoes… wait, actually it’s the sun! Or maybe not.
The arguments about climate change have been raging for years. Some say that the warming is a natural occurrence, some blame humanity and then others simply say ‘what warming?’ One idea now at least can be put to rest – the sun is not to blame.
It has been very popular to cite that the planet goes through many warming and cooling periods as a direct result of changes in solar activity. It sounds fair enough, if the sun produces more heat we get warmer. And this was very much the idea put forward by Britain’s Channel 4 in their documentary called ‘The great global warming swindle.’ There is one main problem with blaming the sun – it’s just not true.
It is true that up until about 1980 all the information showed that solar activity was increasing. This is exactly what they showed on all of their graphs, and it looks both convincing and logical. The sun was outputting more energy, the planet was getting warmer, bingo, we have our culprit! But this is just not the case, the trend in solar activity changed.
“This paper is the final nail in the coffin for people who would like to make the sun responsible for present global warming,” Stefan Rahmstorf, a climate scientist at the Potsdam Institute for Climate Impact Research in Germany, told the journal Nature.
GLOBAL MEAN TEMPERATURES
Mike Lockwood, from Oxford’s Rutherford-Appleton Laboratory and Claus Fröhlich, from the World Radiation Centre in Switzerland recently published their paper in the UK’s Royal Society’s journal, stating clear evidence that the warming of the past two decades is not due to the sun – we need to look elsewhere.
Not only does the sun fail to explain any rise in temperature but it would actually suggest a cooling, as clearly stated in the abstract of the paper, “There is considerable evidence for solar influence on the Earth’s pre-industrial climate and the Sun may well have been a factor in post-industrial climate change in the first half of the last century. Here we show that over the past 20 years, all the trends in the Sun that could have had an influence on the Earth’s climate have been in the opposite direction to that required to explain the observed rise in global mean temperatures.”
COUNTERING GLOBAL WARMING
This study has compiled data about solar activity spanning the last one hundred years and have shown that solar activity peaked between 1985 and 1987.
“This is an important contribution to the scientific debate on climate change. At present there is a small minority which is seeking to deliberately confuse the public on the causes of climate change. They are often misrepresenting the science, when the reality is that the evidence is getting stronger every day. We have reached a point where a failure to take action to reduce carbon dioxide and other greenhouse gas emissions would be irresponsible and dangerous,” said a representative of the Royal Society.
Even though one theory countering global warming has been overturned new ones will come up every day. People will continue to say that the whole idea of climate change is nothing but government propaganda and a dozen or so other conspiracy theories. Now more than ever it is important for the public to be properly educated about what is going on. There are far too many places citing ‘scientific’ reasons as to why global warming is nothing but a lie. And sadly, with the way the media and the internet works, this is unlikely to change. Let’s just hope the politicians listen to the scientists and not to members of the public who are being confused and misled by far too many sources.
SHEEP BLAME HUMANS
Two years after scientists concluded that a breed of wild sheep on a remote Scottish island was shrinking over time, a study released Thursday revealed why: milder winters tied to global warming.
Due to milder winters, lambs on the island of Hirta do not need to put on as much weight in the first months of life to survive to their first year, according to the study in the peer-reviewed journal Science. As a result, even the slower-growing ones now have a chance of surviving.
“In the past, only the big, healthy sheep and large lambs that had piled on weight in their first summer could survive the harsh winters on Hirta,” lead author Tim Coulson, a researcher at Imperial College London, said in a statement.
“But now, due to climate change, grass for food is available for more months of the year, and survival conditions are not so challenging — even the slower growing sheep have a chance of making it, and this means smaller individuals are becoming increasingly prevalent in the population.”
EVOLUTIONARY THEORY UPENDED
The study upends the belief that natural selection is a dominant feature of evolution, noting that climate can trump that card.
“According to classic evolutionary theory,” Coulson added, the sheep “should have been getting bigger, because larger sheep tend to be more likely to survive and reproduce than smaller ones, and offspring tend to resemble their parents.”
The sheep on Hirta have been examined closely since 1985 and experts concluded in 2007 that average body size was shrinking. By this year, it had decreased by 5 percent since 1985.
Coulson’s team analyzed body-weight measurements and key life milestones for a selected group of female sheep. They then plugged the data into a computer model that predicts how body size will change over time due to natural selection and other factors.
The results suggest that the decrease in average size is primarily an ecological response to warming, the authors said, and that natural selection has contributed relatively little.
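The article does not describe the model's internals, but a minimal caricature of this kind of projection separates the genetic (selection) contribution from the ecological (environment-driven) contribution to the change in mean body weight. The sketch below uses the standard breeder's equation for the selection term and a simple linear environmental term; every parameter value is a made-up placeholder for illustration, not an estimate from the Coulson study.

```python
def project_mean_weight(mean_kg, years, h2=0.2, selection_diff=0.05, env_trend=-0.08):
    """Toy projection of mean body weight over time.

    h2             -- narrow-sense heritability (fraction of the selection differential inherited)
    selection_diff -- selection differential per generation, kg (survivors minus population mean)
    env_trend      -- ecological effect per year, kg (e.g. milder winters letting small lambs survive)
    All values here are illustrative placeholders, not estimates from the study.
    """
    trajectory = [mean_kg]
    for _ in range(years):
        genetic_change = h2 * selection_diff      # breeder's equation: R = h^2 * S
        mean_kg = mean_kg + genetic_change + env_trend
        trajectory.append(mean_kg)
    return trajectory

# With a weak positive selection response and a stronger negative environmental
# effect, mean size drifts downward even though selection favors larger sheep.
print(project_mean_weight(25.0, years=24))
```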
|
<urn:uuid:d1a04e75-07aa-41a3-a7ee-a6c19c39639e>
|
CC-MAIN-2013-20
|
http://ziarra.net/light-wind/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.950002
| 1,657
| 3.359375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"climate change",
"co2",
"global warming",
"greenhouse gas",
"methane",
"nitrous oxide"
],
"nature": [
"deforestation",
"ecological"
]
}
|
{
"strong": 8,
"weak": 1,
"total": 9,
"decision": "accepted_strong"
}
|
The Working Group III Special Report on Renewable Energy Sources and Climate Change Mitigation (SRREN) presents an assessment of the literature on the scientific, technological, environmental, economic and social aspects of the contribution of six renewable energy (RE) sources to the mitigation of climate change. It is intended to provide policy relevant information to governments, intergovernmental processes and other interested parties. This Summary for Policymakers provides an overview of the SRREN, summarizing the essential findings. The SRREN consists of 11 chapters. Chapter 1 sets the context for RE and climate change; Chapters 2 through 7 provide information on six RE technologies, and Chapters 8 through 11 address integrative issues. References to chapters and sections are indicated with corresponding chapter and section numbers in square brackets. An explanation of terms, acronyms and chemical symbols used in this SPM can be found in the glossary of the SRREN (Annex I).Conventions and methodologies for determining costs, primary energy and other topics of analysis can be found in Annex II and Annex III. This report communicates uncertainty where relevant.
Residential Photovoltaic Energy Systems in California: The Effect on Home Sales Prices (2011)
|
<urn:uuid:85828e38-8149-4c95-a497-d73d92671dea>
|
CC-MAIN-2013-20
|
http://www.seia.org/research-resources/residential-photovoltaic-energy-systems-california-effect-home-sales-prices-2011
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.860681
| 245
| 2.65625
| 3
|
[
"climate"
] |
{
"climate": [
"climate change",
"renewable energy"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Family: Corvidae, Crows, Magpies, Jays view all from this family
Description ADULT Has a pale gray-brown back, but otherwise mostly dark blue upperparts; note, however, the dark cheeks and eyeline and pale forecrown, extending back as faint supercilium. Throat is whitish and streaked, with discrete demarcation from otherwise grubby pinkish gray underparts. JUVENILE Similar, but dull gray on head, back, and wing coverts, with blue flight feathers and tail.
Dimensions Length: 11" (28 cm)
Endangered Status The Florida Scrub-Jay is on the U.S. Endangered Species List. It is classified as threatened throughout its range in Florida. Like many other Florida wildlife species, the scrub-jay has declined as its habitat has succumbed to development. Remaining habitat has been fragmented and degraded, and existing populations are small and isolated. This makes them vulnerable to any change to their environment, as an entire population can be wiped out at once. In some areas the rate of mortality appears to exceed the rate at which the populations is reproducing. A long-term problem for this species could be rising sea levels caused by global warming, as their remaining habitat could easily become inundated.
Habitat Common and widespread resident of scrubby woodland and overgrown suburban lots. Has declined markedly due to habitat loss and degradation
Observation Tips Easy to see and often indifferent to people.
Voice Utters a harsh, nasal cheerp, cheerp, cheerp… and other chattering calls.
Similar Species Western Scrub-jay A. californica (L 11-12 in) is bluer on head but otherwise similar; a mainly western species, resident in Texas.
Discussion Florida endemic with slim body, long tail, and stout, but rather slender bill. An opportunistic feeder with an omnivorous diet that includes berries, fruits, insects, and the eggs and young of songbirds. Usually seen in family groups. Sexes are similar.
|
<urn:uuid:007f6280-3b67-492c-aae2-0c291fb254c8>
|
CC-MAIN-2013-20
|
http://www.enature.com/fieldguides/detail.asp?sortBy=has+audio&curFamilyID=246&curGroupID=1&lgfromWhere=&viewType=&curPageNum=2
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.949335
| 426
| 3.171875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"global warming"
],
"nature": [
"endangered species",
"habitat"
]
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
The continuing Texas drought has taken an enormous and growing toll on trees, killing as many as half a billion – 10 percent of the state’s 4.9 billion trees – this year alone, the Texas Forest Service estimates.
That calculation did not include trees claimed by this year’s deadly and extensive wildfires, even if they were drought-related, Burl Carraway, who heads the agency’s Sustainable Forestry Department, told Texas Climate News.
(Previously, the Forest Service estimated that about 1.5 million trees were lost on 34,000 charred acres in the Bastrop County fire, most destructive in Texas history. In another damage assessment, the agency said more than 2,000 fires in East Texas had charred more than 200,000 acres. Texas has about 63 million acres of forestlands.)
The estimate that up to half a billion trees have been lost to drought in 2011 was issued Monday by the Forest Service. It was based on statistics tabulated by agency foresters after they canvassed local forestry professionals in their regions, developed estimated percentages of drought-killed trees, and applied them to regional tree inventories.
The resulting estimate was that 100 million to 500 million trees with a diameter of at least five inches had perished because of the drought – two to 10 percent of the nearly five billion trees of that size in the state.
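The arithmetic behind that range is simply the estimated mortality percentages applied to the statewide inventory of trees five inches and larger. The short sketch below reproduces it using only the figures quoted in this article (the variable names are mine).

```python
# Figures quoted in the article
trees_in_inventory = 4.9e9                    # trees at least 5 inches in diameter, statewide
mortality_low, mortality_high = 0.02, 0.10    # 2% to 10% estimated drought mortality

low_estimate = trees_in_inventory * mortality_low
high_estimate = trees_in_inventory * mortality_high
print(f"Estimated drought-killed trees: {low_estimate/1e6:.0f} million to {high_estimate/1e6:.0f} million")
```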
In 2011, Texas experienced an exceptional drought, prolonged high winds and record-setting temperatures. Together, those conditions took a severe toll on trees across the state. Large numbers of trees in both urban communities and rural forests have died or are struggling to survive. The impacts are numerous and widespread.
The agency found that trees in three areas appeared to be hurt the most by the drought:
- An area in West Texas including Sutton, Crockett, western Kimble and eastern Pecos counties, with extensive death of Ashe junipers.
- An area in Southeast Texas including Harris, Montgomery, Grimes, Madison and Leon counties, where many loblolly pines succumbed.
- An area southeast of Austin, including western Bastrop and eastern Caldwell counties as well as neighboring areas, which had widespread mortality among cedars and post oaks.
Also, the agency said, “localized pockets of heavy mortality were reported for many other areas.”
The Forest Service plans to use aerial imagery in a more detailed analysis next spring, when trees that entered early dormancy because of the drought may start to recover. In addition, the agency said, “a more scientific, long-term study” of tree losses will be carried out through its Forest Inventory and Analysis program’s census of the state’s trees. Carraway said Forest Service officials “fully expect mortality percentages to increase if the drought continues.”
Texas state climatologist John Nielsen-Gammon has said that a second year of drought in 2012 is “likely,” perhaps with more dry conditions following that.
Nielsen-Gammon has estimated that about a tenth of the excess heat this past summer was attributable to manmade climate change. He and other climate experts have said hotter, drier conditions are expected to increase in Texas in decades ahead as concentrations of human-created greenhouse gases accumulate in the atmosphere.
What the warming average temperature of the planet could mean for forests and other ecosystems was the focus of research findings announced last week by NASA.
The study, carried out by researchers from NASA’s Jet Propulsion Laboratory and the California Institute of Technology, used a computer model that projected massive changes in plant communities across nearly half of the earth’s land surface, with “the conversion of nearly 40 percent of land-based ecosystems from one major ecological community type – such as forest, grassland or tundra – toward another.”
The NASA announcement added:
The model projections paint a portrait of increasing ecological change and stress in Earth’s biosphere, with many plant and animal species facing increasing competition for survival, as well as significant species turnover, as some species invade areas occupied by other species. Most of Earth’s land that is not covered by ice or desert is projected to undergo at least a 30 percent change in plant cover – changes that will require humans and animals to adapt and often relocate.
In addition to altering plant communities, the study predicts climate change will disrupt the ecological balance between interdependent and often endangered plant and animal species, reduce biodiversity and adversely affect Earth’s water, energy, carbon and other element cycles.
“For more than 25 years, scientists have warned of the dangers of human-induced climate change,” said Jon Bergengren, a scientist who led the study while a postdoctoral scholar at Caltech. “Our study introduces a new view of climate change, exploring the ecological implications of a few degrees of global warming. While warnings of melting glaciers, rising sea levels and other environmental changes are illustrative and important, ultimately, it’s the ecological consequences that matter most.”
When faced with climate change, plant species often must “migrate” over multiple generations, as they can only survive, compete and reproduce within the range of climates to which they are evolutionarily and physiologically adapted. While Earth’s plants and animals have evolved to migrate in response to seasonal environmental changes and to even larger transitions, such as the end of the last ice age, they often are not equipped to keep up with the rapidity of modern climate changes that are currently taking place. Human activities, such as agriculture and urbanization, are increasingly destroying Earth’s natural habitats, and frequently block plants and animals from successfully migrating.
– Bill Dawson
Image credits: Photos – Texas Forest Service; Map – NASA
|
<urn:uuid:c21fe654-c05d-4eee-a89f-56fcb6eb1626>
|
CC-MAIN-2013-20
|
http://texasclimatenews.org/wp/?p=3779
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.961364
| 1,178
| 3.203125
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"drought",
"global warming"
],
"nature": [
"biodiversity",
"ecological",
"ecosystems"
]
}
|
{
"strong": 5,
"weak": 1,
"total": 6,
"decision": "accepted_strong"
}
|
Biology 107 Lab Exam Review
Science B01 with Magor at University of Alberta
About this deck
Size: 132 flashcards
When is a plant cell plasmolyzed?
A plant cell is plasmolyzed when it is placed in a hypertonic solution: water leaves the cell by osmosis and the plasma membrane pulls away from the cell wall
What is a hypertonic solution?
A hypertonic solution is a solution that has a higher concentration of solutes than found inside the cell
What are some of the functions of cell membranes?
Separation of cell contents from external environment
Organization of chemicals and reactions into specific organelles within the cell
Regulation of the transport of certain molecules into and out of the cell and its organelles
Beet cells' red pigment, located in the cell's large central vacuole and surrounded by the tonoplast membrane
Molecule or part of a molecule that absorbs radiant energy (light)
Graph that shows the amount of light absorbed at a number of wavelengths
Four parts of a spectrophotometer
Device that isolates a photoelectric tube
Standard curve limitations
Standard curve is specific to the pigment and its buffer
Standard curve cannot be used for absorbances beyond the range of the standard curve
Cu = Cd/D
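One reading of the Cu = Cd/D relation on this card is that Cu is the concentration of the undiluted sample, Cd the concentration read from the standard curve for the diluted sample, and D the dilution expressed as a fraction of the original (for example 0.1 for a 1-in-10 dilution). The sketch below applies it together with a linear standard curve; the slope and intercept values are placeholders, and this interpretation of D is an assumption, since the card itself does not define the symbols.

```python
def concentration_from_standard_curve(absorbance, slope, intercept):
    """Convert an absorbance reading to concentration using a linear
    standard curve A = slope * C + intercept (valid only within the
    range of absorbances used to build the curve)."""
    return (absorbance - intercept) / slope

def undiluted_concentration(c_diluted, dilution_fraction):
    """Cu = Cd / D, with D expressed as the fraction of original sample
    in the dilution (assumption: e.g. D = 0.1 for a 1:10 dilution)."""
    return c_diluted / dilution_fraction

# Placeholder standard-curve parameters for a pigment such as betacyanin
c_d = concentration_from_standard_curve(absorbance=0.45, slope=0.02, intercept=0.01)
print("Diluted sample:", c_d, "units; undiluted:", undiluted_concentration(c_d, 0.1), "units")
```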
Deposits single bacterial cells from a liquid culture over the surface of an agar medium
Prevents bacteria in the environment from contaminating work, and prevents bacteria in work from contaminating the environment
Aseptic techniques that can be used
Sterilize surfaces and working surfaces
Washing hands before and after
Flaming inoculating loop and lip of culture tubes
Reduce time sterile medium, cultures, or bacteria are exposed to air
Work in area with low resident population of bacteria
What does the blank used with a spectrophotometer consist of?
How is a spectrophotometer zeroed?
Define each term in the "fluid mosaic model"
Fluid: membranes are able to move, things may pass through them (selective permeability)
Mosaic: membranes are composed of a variety of different things - phospholipid bilayer, enzymes, proteins that act as channels
Explain phospholipid bilayers
Fatty acid bilayers, where the hydrophobic ends of the fatty acids attract each other to the inside of the layer and the hydrophilic ends are on the outside of the membrane in the water, creating a double layer of fatty acids
How are bacterial species identified?
Cell and colony morphology, chemical composition of cell walls, biochemical activities, and nutritional requirements
What is the best way to isolate individual cells?
Streaking them onto an agar plate, so that each individual cell will produce a colony
Cell wall is composed of a thick layer of peptidoglycan surrounding the cell membrane
Differential stain to divide bacteria into Gram-positive and Gram-negative
Steps of a gram stain
A basic dye (crystal violet) is used to stain the peptidoglycan in both cells, then iodine is used to increase affinity of the dye to peptidoglycan. Ethanol is then used to dissolve lipids in the outer membrane of Gram-negative bacteria, allowing the iodine-dye complex to leave the cells (the peptidoglycan layer is too thin to retain the dye), while Gram-positive cells retain the dye due to the thick layer of peptidoglycan - a counterstain is then applied that dyes the Gram-negative cells pink
Commonly recognized cell morphologies
Cocci: spherical shape
Bacilli: shaped like rods or cylinders (long and slender, or so short they resemble cocci)
Spirilla: resemble a corkscrew
What are three other ways to identify bacteria besides morphology?
Presence of flagella (motility)
Formation of endospores
How is motility in bacteria tested?
Bacteria are injected into a tube containing a dye that turns red when oxidized by growing bacteria- distribution of red dye indicates swimming ability
How is formation of endospores determined in bacteria?
Sample can withstand extreme conditions (high temperatures) and will grow at optimal conditions
What are the different enzymatic activities in bacteria that can be tested?
How can resolution be decreased (improved)?
Using an illumination source with a smaller wavelength
Increasing the numerical aperture of the objective lens, as well as using immersion oil with the 100x objective lens
How is the resolution value calculated?
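The deck leaves this card without an answer; the standard relation consistent with the previous card (shorter wavelength and larger numerical aperture both improve resolution) is the Rayleigh-type formula below. Whether the course used the 0.61 coefficient or a slightly different constant is an assumption on my part:

$$d = \frac{0.61\,\lambda}{\mathrm{NA}}$$

where $d$ is the smallest resolvable distance, $\lambda$ the wavelength of the illumination, and NA the numerical aperture of the objective (increased in practice by immersion oil). A smaller $d$ means better resolution.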
Why do Gram-positive bacteria stain purple?
They contain a thick layer of peptidoglycan which retains the iodine-crystal violet complex CV-I, causing cells to hold the dye and thus retain the purple colour even after the ethanol wash
Why do Gram-negative bacteria stain pink?
Gram negative cells have only a thin layer of peptidoglycan, so the CV-I complex easily washes out, and thus the cells are able to be counterstained pink because they no longer contain any crystal violet dye
What is the effect of high temperatures on bacteria that do not form endospores?
High temperatures will kill cells that do not produce endospores as it can damage cell membranes and denature proteins, resulting in cells that are unable to function
Microtubules, microfilaments, and intermediate filaments which function in cell structure, cell motility (flagella and cilia - microtubules as part of their ultrastructure), and various biological processes
Mitochondria, chloroplasts, Golgi apparatus, endoplasmic reticulum, nucleus, and vacuoles/vesicles
Function in cell motility (flagella and cilia) and one organism also uses cilia to propel food towards its oral groove
Used in amoeboid movement and organelle movement (intermediate filaments give nucleus its shape - nuclear lamina)
How do microfilaments act in amoeboid movement?
Phase contrast microscope
Cells and medium have different refractive index, and therefore light traveling through a material with different refractive indexes show a change in phase of light waves, which the microscope then translates to a change in light intensity (areas of higher refractive index appear darker)
Some cell structures autofluoresce, some require staining - by exposing cells to several stains at once, different structures will fluoresce different colours
How can contrast be improved?
Staining compounds (vital stain - living cells/tissue, or dead cells and tissues), using special types of microscopes to manipulate light, and by reducing the amount of light
What effect would cytochalasin (inhibits microfilaments) and colchicine (inhibits microtubules) have on Pelomyxa?
Cytochalasin would cause the amoeba to become sessile, as microfilaments are responsible for the movement of amoebas - colchicine would have no effect
What effect would cytochalasin (inhibits microfilaments) and colchicine (inhibits microtubules) have on Euglena?
What effect would cytochalasin (inhibits microfilaments) and colchicine (inhibits microtubules) have on motile prokaryotes?
What is phagocytosis, and how does it differ from receptor-mediated endocytosis?
Phagocytosis is the process that Paramecium use to take in food. It differs from receptor-mediated endocytosis in that receptor-mediated endocytosis is very specific and allows the cell to acquire bulk quantities of specific substances, whereas phagocytosis is more general and can take in different substances
Organelles (enzymes) that digest or break down waste materials and cellular debris, such as worn out organelles, food particles, and engulfed bacteria and viruses
Contains cell's genetic information, and is surrounded by the nuclear lamina (made up of intermediate filaments) to protect the DNA
Engulfs food via phagocytosis and uses lysosomes to break the food down
Organelles found in plant cells that are used in photosynthesis - they capture light and convert it to usable energy
Proteins that catalyze metabolic reactions without being consumed or destroyed by the molecule - lower a reaction's activation energy (substrate specific)
Molecule to be reacted, that fits into a uniquely shaped pocket of the enzyme called the active site and binds with the enzyme as it is converted into the end product
Allows plants to use starch it has stored after photosynthesis - takes amylose and breaks it down into smaller molecules by hydrolysis (glucose molecules, maltose, and shorter chains of amylose)
Polymeric macromolecule composed of glucose monomers that is too large to pass through a cell membrane
What are enzymes made up of?
Enzymes are proteins, which are made up of amino acids
What is an active site?
An active site is a site that uniquely fits the substrate specific to the enzyme, and will activate the enzyme once the substrate binds to the site
Energy transfer from light to chemical bonds through series of light reactions
What series of reactions occurs in photosynthesis?
Light energy from the sun strikes pigments in the thylakoid membrane of the chloroplast and is transformed into excited electrons (electrical energy), then into chemical energy in the form of bonds in ATP and NADPH molecules; the ATP and NADPH molecules are then used to power the fixation of carbon dioxide into sugar molecules (Calvin cycle, occurring in the stroma)
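The sequence described on this card sums to the familiar overall equation for photosynthesis, added here as a summary (the card itself does not state it):

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$

with the light reactions in the thylakoid membranes supplying the ATP and NADPH that drive carbon fixation in the stroma.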
Determines which wavelengths of light the chloroplasts maximally absorb (these wavelengths also produce the highest rates of photosynthetic activity)
Chloroplast suspension is mixed with indicator dye DCPIP - as DCPIP accepts electrons from the electron transport chain of Photosystem II it becomes reduced and therefore colourless, allowing absorptions to be measured to determine concentrations
What were the controls in the photosynthesis in spinach chloroplasts experiment?
Controls must show that DCPIP is stable and there is no other source of electrons to reduce DCPIP (colour does not change spontaneously) and that the colour of chloroplast suspension is stable and does not change colour spontaneously
What are the independent and dependent variables for the absorbance spectrum of photosynthesis in spinach chloroplasts?
The independent variable is the wavelength of light, and the dependent variable is the absorbance of the chloroplast suspension at varying wavelengths of light
What are the independent and dependent variables for the action spectrum of photosynthesis in spinach chloroplasts?
The independent variable is the colour of light, and the dependent variable is the absorbance of the solution
Why are spinach leaves green?
Spinach leaves are green because they maximally absorb blue and red wavelengths, and green wavelengths are absorbed the least and are therefore reflected back the most, resulting in the green colour
At what wavelength is the action spectrum measured at?
The wavelength that photosynthesis occurs at maximally
Where do light reactions take place in the chloroplast? Reactions of the Calvin cycle?
Light reactions take place in the thylakoid membranes of the chloroplast, and reactions of the Calvin cycle take place in the stroma
Measuring oxygen levels, sugar produced, or carbon dioxide levels
How is ATP produced?
Through the catabolism of carbohydrates, proteins, and fats
Glycolysis in eukaryotes
Cytosolic reactions to convert glucose to pyruvate (one molecule of glucose results in the net production of 2 molecules of ATP via substrate-level phosphorylation)
After glycolysis, what occurs in the presence of oxygen?
Eukaryotes convert pyruvate into acetyl CoA, which is transported into the mitochondria for the Krebs cycle (producing 2 more molecules of ATP), and then oxidative phosphorylation (the transfer of electrons from food to oxygen) produces the rest of the ATP molecules (carbon dioxide is also formed as a by-product)
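For comparison with the per-stage ATP counts on this card, the overall reaction for aerobic respiration is essentially the reverse of the photosynthesis summary. Total ATP yields vary by textbook (roughly 30-32 ATP per glucose in current accounts, 36-38 in older ones), so the total shown is approximate:

$$\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \rightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{ATP (approx. 30--38 per glucose)}$$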
After glycolysis, what occurs in the absence of oxygen?
Pyruvate is degraded via a series of cytosolic pathways - lactic acid fermentation and alcohol fermentation (produces ethanol and carbon dioxide, regenerates NAD+ - required for glycolytic pathway)
What sort of feedback system occurs in alcohol fermentation?
Fermentation of glucose produces ethanol, but high concentrations of ethanol are toxic to yeast
Physiological response curve
Why is fermentation necessary?
Why was the yeast flask swirled prior to adding yeast to each tube?
To re-suspend the yeast and therefore ensure that similar concentrations of yeast were present in each tube (constant)
What would happen if the metabolism in yeast experiment were done without the 10 minute pre-incubation period?
The lag phase of the physiological response curve would be significantly longer as the pre-incubation period brings the tube to a temperature at which yeast metabolizes glucose most effectively, and therefore without the incubation period the yeast would not metabolize glucose as well
What process are the yeast in the Durham tube undergoing?
The yeast are undergoing fermentation - other eukaryotes undergo aerobic respiration, and only prokaryotes undergo anaerobic respiration
What metabolic processes occur in the cytoplasm?
Alcohol fermentation, ATP production, glycolysis, and NADH production
What metabolic processes occur in the mitochondria?
ATP production, Krebs cycle, electron transport chain, and NADH production
Bacterial genomic DNA
Consists of a double stranded DNA helix arranged in a circle that is anchored to the bacterial plasma membrane - 4000 genes that encode all the functions of the bacterial cell
Bacterial plasmid DNA
Floats freely in cytoplasm of bacterial cell
Circular and can assume supercoiled conformation in which circular double helix molecule twists on itself
Much smaller than genomic DNA (2- 25 genes)
Can sometimes confer extra properties on the cell that allow the cell to survive in conditions it could not survive without the plasmid DNA (only when there is selective pressure)
Arranged in linear strands (chromosomes - 23 pairs) in nucleus of cell (30 000 - 35 000 genes - high molecular weight DNA)
Can be used to analyze small amount of plasmid DNA - DNA is not very pure and maxi prep must be used for further analysis as it is a larger quantity of very pure DNA - separates plasmid DNA from bacterial genomic DNA based on size and conformation
How can high molecular weight (HMW) DNA be extracted?
HMW DNA has a high affinity for glass - the buffer solution must contain Tris and EDTA, as EDTA binds magnesium ions which are required by DNase, preventing DNase from functioning and degrading the DNA into nucleotides
What does centrifuging do?
Creates a centrifugal force that causes bacterial cells to collect in a pellet at the bottom of the tube - liquid above is referred to as the supernatant
What does vortexing do?
Vortexing disrupts the pellet of cells so that they may be re-suspended
Why is STE added to the DNA treatment?
Washes the medium away from the cells
What is Solution I in the DNA treatment?
A buffered, isotonic solution that is used to re-suspend bacterial cells
What is Solution II in the DNA treatment?
Contains sodium dodecylsulfate (SDS) and sodium hydroxide (an alkali) - SDS denatures proteins and disrupts the plasma membrane, causing the cell to lyse and releasing cell components into the solution, and NaOH raises the pH of the lysate to denature the hydrogen bonds between the base pairs of DNA, separating the helix
What is Solution III in the DNA treatment?
Acidic potassium acetate solution that neutralizes the pH in the lysate so that some hydrogen bonds in the DNA will re-form in random base pairs, resulting in a tangled, insoluble mass of DNA - hydrogen bonds in the plasmid DNA reform between the original complementary base pairs (when solution is placed on ice potassium forms white, insoluble mass with SDS that precipitates out along with many of the proteins, cell wall, debris, and genomic DNA)
What does centrifuging do to the genomic DNA-potassium-SDS-protein-cell wall complex?
Causes the complex to pellet in the bottom of the tube and the plasmid to remain in the supernatant solution
What does the 95% ethanol wash do in the DNA treatment?
Removes water molecules from macromolecules by decreasing hydrogen bonding between water molecules and macromolecules (plasmid DNA and RNA come out of solution and precipitate, so that they may be centrifuged into a pellet)
What does the 70% ethanol wash do in the DNA treatment?
Removes the salts which were not removed with the 95% ethanol, and hydrates the pellets slightly so that it may dissolve in the aqueous solution
Why is the 30 minute incubation period necessary in the DNA extraction?
Why is sodium acetate used to precipitate HMW DNA?
The salt ions compete with macromolecules (DNA) for the water molecules
What are the 2 properties of DNA that allow you to separate genomic DNA from plasmid DNA
Size - big genomic DNA precipitates faster with centrifugation
Conformation (shape of molecule) - supercoiled plasmid DNA maintains its shape even when hydrogen bonds in backbone are broken
How can HMW DNA be extracted from solution?
Its affinity for glass and the fact that it forms very long "threads" of DNA
What would happen if the tube were vortexed after the addition of Solution II?
Genomic DNA would break and would not all be centrifuged out, therefore contaminating plasmid DNA
What is the difference between genomic DNA, plasmid DNA, and eukaryotic DNA?
Genomic DNA contains the majority of genes needed for the bacterial cell to function
Plasmid DNA is a small, circular structure of DNA in the cell cytoplasm that contains genes that can allow the bacteria to survive in conditions where it could otherwise not survive
Eukaryotic DNA is much larger and is contained within the nucleus, in 23 pairs of chromosomes, encoding all the genes necessary for the survival of the eukaryote
What is the purpose of the 95% ethanol and 70% ethanol wash?
95% ethanol dehydrates the cell
70% ethanol treatment removes the salts and rehydrates the plasmid DNA, allowing it to dissolve faster
Why must plasmid DNA be kept on ice following incubation?
DNAse will break down DNA at room temperature - T solution has EDTA to inactivate DNAse
What was the experiment performed by Avery, MacLeod, and McCarty?
Tested various cellular macromolecules for their ability to transform non-virulent Streptococcus pneumoniae into virulent bacteria - discovered DNA was the only macromolecule capable of transforming non-virulent bacteria into virulent bacteria
How did the experiment performed in Biol 107 differ from Avery et al?
Escherichia coli was used
E. coli was examined for transformation by a gene on the plasmid DNA instead of the genomic DNA
E. coli cells needed to be made competent to uptake DNA using a calcium chloride solution
Mice were not used (medium containing kanamycin was used instead)
Only DNA was focused on (as opposed to various parts of the cell)
How were kanamycin sensitive E. coli cells made competent to take up DNA?
They were treated with a calcium chloride solution, which creates openings in the cellular membranes (competent cells)
How is a competent cell transformed?
If plasmid DNA entering competent cells is capable of replicating, the competent cells will be genetically altered or transformed (kanamycin resistant) - all descendants of transformed cells should be genetically altered
What is kanamycin?
Antibiotic belonging to a family of antibiotics characterized by their ability to inhibit protein synthesis in prokaryotic cells - it is transported into the cell by an oxygen-dependent active transport system and irreversibly inhibits protein synthesis by binding to the small subunit of the bacterial ribosome, so cells are unable to synthesize proteins, leading to cell death
What occurs in kanamycin resistant cells?
A phosphotransferase enzyme is encoded and expressed in the presence of kanamycin, which phosphorylates (adds a phosphate group to) kanamycin and renders the antibiotic inactive
Transcription
Use of a DNA template to synthesize RNA
Translation
Reading of mRNA to produce protein
Plate Count Method
Viable cell count (living cells only) in which original cell suspension is diluted into suspensions of decreasing cell concentration, which are spread onto the surface of an agar medium and allowed to incubate so that single cells may grow into a colony - following incubation colonies may be counted, and each is representative of a single cell originally deposited on the plate
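The arithmetic behind the plate count is simple: divide the colony count by the dilution factor and the volume plated. A minimal Python sketch, with made-up counts and dilutions as example values:

```python
# Estimate viable cells per mL of the original suspension from a dilution plate count.
def cfu_per_ml(colonies_counted, total_dilution, volume_plated_ml):
    """CFU/mL = colonies / (total dilution factor x volume plated)."""
    return colonies_counted / (total_dilution * volume_plated_ml)

# Example: 87 colonies after plating 0.1 mL of a 1:10,000 dilution
print(cfu_per_ml(colonies_counted=87, total_dilution=1e-4, volume_plated_ml=0.1))
# -> 8,700,000 CFU/mL in the original culture
```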
Petroff-Hausser Counting Chamber
Total cell count (living and dead) using a specially designed microscope slide with a depressed surface and etched grid, where a thin layer of cell suspension of known volume is spread and the number of cells in the volume is directly counted with the aid of a microscope
Optical Density (OD)
Indirect method of total cell count, measuring turbidity (cloudiness of a solution due to the presence of particles such as cells), measured using a spectrophotometer, and developing a standard curve
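The standard curve is simply a fit of OD readings against suspensions of known cell density, which is then used to convert the OD of an unknown sample. A minimal sketch with invented calibration points:

```python
import numpy as np

# Hypothetical calibration data: OD600 readings for suspensions of known cell density
od_readings  = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
cells_per_ml = np.array([4e7, 8e7, 1.6e8, 3.2e8, 6.4e8])

# Fit a straight line (OD is roughly proportional to cell density in the dilute range)
slope, intercept = np.polyfit(od_readings, cells_per_ml, deg=1)

# Estimate the cell density of an unknown sample from its OD reading
unknown_od = 0.30
print(slope * unknown_od + intercept)   # ~2.4e8 cells/mL for this invented calibration
```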
Why is Solution T (Tris) buffer used in the Transformation of Bacterial Cells lab?
Maintains the pH at 8.0 and is the solvent for plasmid DNA
Solution B is the solution used to dissolve DNAse but does not contain DNAse
What occurs during the first incubation period of the Transformation of Bacterial Cells lab?
What occurs during the heat shock incubation period in the Transformation of Bacterial Cells lab?
Helps the plasmid DNA enter the competent cells and induces the expression of survival genes necessary to repair damage to the plasma membrane
What does the third incubation period in the Transformation of Bacterial Cells lab do?
Allows time for the kanamycin resistance gene to be expressed - it must be transcribed into mRNA, and the mRNA must then be translated into a polypeptide chain (phosphotransferase)
The plate which does not contain plasmid DNA, and instead contains solution B, Tris buffer, and competent cells
The plate that contains plasmid DNA that has been broken down into nucleotides by DNAse, as well as solution B and competent cells
There would be colony growth on plate 5+K, as the DNAse cannot enter the competent cells and the plasmid DNA would not have been broken down - the kanamycin resistant gene would have been expressed in the competent cells
What would occur if the environment in which the E. coli was grown was anaerobic?
The kanamycin would not affect the growth of the cells, as kanamycin enters the cell in an oxygen dependent manner
About this deck
Size: 132 flashcards
|
<urn:uuid:3670f77b-0b38-4bc1-bee6-21473a8445d7>
|
CC-MAIN-2013-20
|
http://www.studyblue.com/notes/note/n/biology-107-lab-exam-review/deck/809265
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.902229
| 4,854
| 3.375
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
A long time ago I wrote the article The Dull Case of Emissivity and Average Temperatures and expected that would be the end of the interest in emissivity. But it is a gift that keeps on giving, with various people concerned that no one has really been interested in measuring surface emissivity properly.
All solid and liquid surfaces emit thermal radiation according to the Stefan-Boltzmann formula:
E = εσT⁴
where ε = emissivity, a material property; σ = 5.67×10⁻⁸ W/m²K⁴ (the Stefan-Boltzmann constant); T = temperature in kelvin (absolute temperature)
and E is the flux in W/m²
More about this formula and background on the material properties in Planck, Stefan-Boltzmann, Kirchhoff and LTE.
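As a quick numerical illustration, here is a minimal Python sketch of the formula; the 15°C surface temperature and the broadband emissivity of 0.96 are just example values:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2K^4

def flux(emissivity, temp_celsius):
    """Total emitted flux E = epsilon * sigma * T^4, with T in kelvin."""
    t_kelvin = temp_celsius + 273.15
    return emissivity * SIGMA * t_kelvin ** 4

print(flux(1.00, 15))   # blackbody at 15°C: ~391 W/m^2
print(flux(0.96, 15))   # water surface with broadband emissivity 0.96: ~375 W/m^2
```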
The parameter called emissivity is the focus of this article. It is of special interest because to calculate the radiation from the earth’s surface we need to know only temperature and emissivity.
Emissivity is a value between 0 and 1. It also depends on the wavelength of radiation (and for some surfaces, like metals, on the direction as well). Because the wavelengths of the emitted radiation depend on temperature, emissivity also depends on temperature.
When emissivity = 1, the body is called a “blackbody”. It’s just the theoretical maximum that can be radiated. Some surfaces are very close to a blackbody and others are a long way off.
Note: I have seen many articles by keen budding science writers who have some strange ideas about “blackbodies”. The only difference between a blackbody and a non-blackbody is that the emissivity of a blackbody = 1, and the emissivity of a non-blackbody is less than 1. That’s it. Nothing else.
The wavelength dependence of emissivity is very important. If we take snow for example, it is highly reflective to solar (shortwave) radiation with as much as 80% of solar radiation being reflected. Solar radiation is centered around a wavelength of 0.5μm.
Yet snow is highly absorbing to terrestrial (longwave) radiation, which is centered around a wavelength of 10μm. Its absorptivity and emissivity around freezing point are about 0.99 – meaning that only 1% of incident longwave radiation would be reflected.
Let’s take a look at the Planck curve – the blackbody radiation curve – for surfaces at a few slightly different temperatures:
The emissivity (as a function of wavelength) simply modifies these curves.
Suppose, for example, that the emissivity of a surface was 0.99 across this entire wavelength range. In that case, a surface at 30°C would radiate like the light blue curve but at 99% of the values shown. If the emissivity varies across the wavelength range then you simply multiply the emissivity by the intensity at each wavelength to get the expected radiation.
Sometimes emissivity is quoted as an average for a given temperature – this takes into account the shape of the Planck curve shown in the graphs above.
Often, when emissivity is quoted as an overall value, the total flux has been measured for a given temperature and the emissivity is simply:
ε = actual radiation measured / blackbody theoretical radiation at that temperature
[Fixed, thanks to DeWitt Payne for pointing out the mistake]
In practice the calculation is slightly more involved, see note 1.
It turns out that the emissivity of water and of the ocean surface is an involved subject.
And because of the importance of calculating the sea surface temperature from satellite measurements, the emissivity of the ocean in the “atmospheric window” (8-14 μm) has been the subject of many 100's of papers (perhaps 1000's). These somewhat overwhelm the papers on the less important subject of “general ocean emissivity”.
Aside from climate, water itself is an obvious subject of study for spectroscopy.
For example, 29 years ago Miriam Sidran, writing Broadband reflectance and emissivity of specular and rough water surfaces, begins:
The optical constants of water have been extensively studied because of their importance in science and technology. Applications include a) remote sensing of natural water surfaces, b) radiant energy transfer by atmospheric water droplets, and c) optical properties of diverse materials containing water, such as soils, leaves and aqueous solutions.
In this study, values of the complex index of refraction from six recent articles were averaged by visual inspection of the graphs, and the most representative values in the wavelength range of 0.200 μm to 5 cm were determined. These were used to find the directional polarized reflectance and emissivity of a specular surface and the Brewster or pseudo-Brewster angle as functions of wavelength.
The directional polarized reflectance and emissivity of wind-generated water waves were studied using the facet slope distribution function for a rough sea due to Cox and Munk .
Applications to remote sensing of sea surface temperature and wave state are discussed, including effects of salinity.
Emphasis added. She also comments in her paper:
For any wavelength, the total emissivity, ε, is constant for all θ [angles] < 45° [from vertical]; this follows from Fig. 8 and Eq. (6a). It is important in remote sensing of thermal radiation from space, as discussed later..
The polarized emissivities are independent of surface roughness for θ < 25°, while for θ > 25°, the thermal radiation is partly depolarized by the roughness.
This means that when you look at the emission radiation from directly above (and close to directly above) the sea surface roughness doesn’t have an effect.
I thought some other comments might also be interesting:
The 8-14-μm spectral band is chosen for discussion here because (a) it is used in remote sensing and (b) the atmospheric transmittance, τ, in this band is a fairly well-known function of atmospheric moisture content. Water vapor is the chief radiation absorber in this band.
In Eqs. (2)-(4), n and k (and therefore A and B) are functions of salinity. However, the emissivity value, ε, computed for pure water differs from that of seawater by <0.5%.
When used in Eqs. (10), it causes an error of <0.20°C in retrieved Ts [surface temperature]. Since ε in this band lies between 0.96 and 0.995, approximation ε= 1 is routinely used in sea surface temperature retrieval. However, this has been shown to cause an error of -0.5 to -1.0°C for very dry atmospheres. For very moist atmospheres, the error is only ≈0.2°C.
One of the important graphs from her paper:
Emissivity = 1 – Reflectance. The graph shows Reflectance vs Wavelength vs Angle of measurement.
I took the graph (coarse as it is) and extracted the emissivity vs wavelength function (using numerical techniques). I then calculated the blackbody radiation for a 15°C surface and the radiation from a water surface using the emissivity from the graph above for the same 15°C surface. Both were calculated from 1 μm to 100 μm:
The “unofficial” result, calculating the average emissivity from the ratio: ε = 0.96.
This result is valid for 0-30°C. But I suspect the actual value will be modified slightly by the solid angle calculations. That is, the total flux from the surface (the Stefan-Boltzmann equation) is the spectral intensity integrated over all wavelengths, and integrated over all solid angles. So the reduced emissivity closer to the horizon will affect this measurement.
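A minimal sketch of that kind of calculation: weight the Planck spectral exitance by an emissivity function, integrate from 1 μm to 100 μm, and divide by σT⁴. The flat ε(λ) = 0.96 used here is only a placeholder for the wavelength-dependent values read off the reflectance graph:

```python
import numpy as np

H, C, KB, SIGMA = 6.626e-34, 2.998e8, 1.381e-23, 5.67e-8

def planck(lam, temp_k):
    """Blackbody spectral exitance, W/m^2 per metre of wavelength."""
    return (2 * np.pi * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * temp_k)) - 1)

def broadband_emissivity(eps_of_lam, temp_k, lam_min=1e-6, lam_max=100e-6, n=20000):
    lam = np.linspace(lam_min, lam_max, n)
    dlam = lam[1] - lam[0]
    weighted = np.sum(eps_of_lam(lam) * planck(lam, temp_k)) * dlam
    return weighted / (SIGMA * temp_k**4)

# Placeholder: a flat emissivity of 0.96 across the band, for a 15°C (288 K) surface
print(broadband_emissivity(lambda lam: np.full_like(lam, 0.96), temp_k=288.15))
# ~0.955 - slightly below 0.96 because the 1-100 μm band misses a little of the far-infrared tail
```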
Niclòs et al – 2005
One of the most interesting recent papers is In situ angular measurements of thermal infrared sea surface emissivity—validation of models, Niclòs et al (2005). Here is the abstract:
In this paper, sea surface emissivity (SSE) measurements obtained from thermal infrared radiance data are presented. These measurements were carried out from a fixed oilrig under open sea conditions in the Mediterranean Sea during the WInd and Salinity Experiment 2000 (WISE 2000).
The SSE retrieval methodology uses quasi-simultaneous measurements of the radiance coming from the sea surface and the downwelling sky radiance, in addition to the sea surface temperature (SST). The radiometric data were acquired by a CIMEL ELECTRONIQUE CE 312 radiometer, with four channels placed in the 8–14 μm region. The sea temperature was measured with high-precision thermal probes located on oceanographic buoys, which is not exactly equal to the required SST. A study of the skin effect during the radiometric measurements used in this work showed that a constant bulk–skin temperature difference of 0.05±0.06 K was present for wind speeds larger than 5 m/s. Our study is limited to these conditions.
Thus, SST used as a reference for SSE retrieval was obtained as the temperature measured by the contact thermometers placed on the buoys at 20-cm depth minus this bulk–skin temperature difference.
SSE was obtained under several observation angles and surface wind speed conditions, allowing us to study both the angular and the sea surface roughness dependence. Our results were compared with SSE models..
The introduction explains why specifically they are studying the dependence of emissivity on the angle of measurement – for reasons of accurate calculation of sea surface temperature:
The requirement of a maximum uncertainty of ±0.3 K in sea surface temperature (SST) as input to climate models and the use of high observation angles in the current space missions, such as the 55° for the forward view of the Advanced Along Track Scanning Radiometer (AATSR) (Llewellyn-Jones et al., 2001) on board ENVISAT, need a precise and reliable determination of sea surface emissivity (SSE) in the thermal infrared region (TIR), as well as analyses of its angular and spectral dependences.
The emission of a rough sea surface has been studied over the last years due to the importance of the SSE for accurate SST retrieval. A reference work for many subsequent studies has been the paper written by Cox and Munk (1954)..
The experimental setup:
From Niclos (2004)
The results (compared with one important model from Masuda et al 1988):
From Niclos (2004)
This paper also goes on to compare the results with the model of Wu & Smith (1997) and indicates the Wu & Smith’s model is a little better.
The tabulated results:
Note that the emissivities are in the 8-14μm range.
You can see that the emissivity when measured from close to vertical is 0.98 – 0.99 at two different wind speeds.
Konda et al – 1994
A slightly older paper which is not concerned with angular dependence of sea surface emissivity is by Konda, Imasato, Nishi and Toda (1994).
They comment on a few older papers:
Buettner and Kern (1965) estimated the sea surface emissivity to be 0.993 from an experiment using an emissivity box, but they disregarded the temperature difference across the cool skin.
Saunders (1967b, 1968) observed the plane sea surface irradiance from an airplane and determined the reflectance. By determining the reflectance as the ratio of the differences in energy between the clear and the cloudy sky at different places, he calculated the emissivity to be 0.986. The process of separating the reflection from the surface irradiance, however, is not precise.
Mikhaylov and Zolotarev (1970) calculated the emissivity from the optical constant of the water and found the average in the infrared region was 0.9875.
The observation of Davies et al. (1971) was performed on Lake Ontario with a wave height less than 25 cm. They measured the surface emission isolated from sky radiation by an aluminum cone, and estimated the emissivity to be 0.972. The aluminum was assumed to act as a mirror in the infrared region. In fact, aluminum does not work as a perfect mirror.
Masuda et al. (1988) computed the surface emissivity as a function of the zenith angle of observed radiation and wind speed. They computed the emissivity from the reflectance of a model sea surface consisting of many facets, and changed their slopes according to Gaussian distribution with respect to surface wind. The computed emissivity in 11 μm was 0.992 under no wind.
Each of these studies in trying to determine the value of emissivity, failed to distinguish surface emission from reflection and to evaluate the temperature difference across the cool skin. The summary of these studies are tabulated in Table 1.
The table summarizing some earlier work:
Konda and his co-workers took measurements over a one year period from a tower in Tanabe Bay, Japan.
They calculated from their results that the ocean emissivity was 0.984±0.004.
One of the challenges for Konda’s research and for Niclòs is the issue of sea surface temperature measurement itself. Here is a temperature profile which was shown in the comments of Does Back Radiation “Heat” the Ocean? – Part Three:
Kawai & Wada (2007)
The point is the actual surface from which the radiation is emitted will usually be at a slightly different temperature from the bulk temperature (note the logarithmic scale of depth). This is the “cool skin” effect. This surface temperature effect is also moderated by winds and is very difficult to measure accurately in field conditions.
Smith et al – 1996
Another excellent paper which measured the emissivity of the ocean is by Smith et al (1996):
An important objective in satellite remote sensing is the global determination of sea surface temperature (SST). For such measurements to be useful to global climate research, an accuracy of ±0.3K or better over a length of 100km and a timescale of days to weeks must be attained. This criterion is determined by the size of the SST anomalies (≈1K) that can cause significant disturbance to the global atmospheric circulation patterns and the anticipated size of SST perturbations resulting from global climate change. This level of uncertainty is close to the theoretical limits of the atmospheric corrections..
It is also a challenge to demonstrate that such accuracies are being achieved, and conventional approaches, which compare the SST derived from drifting or moored buoys, generally produce results with a scatter of ±0.5 to 0.7K. This scatter cannot be explained solely by uncertainties in the buoy thermometers or the noise equivalent temperature difference of the AVHRR, as these are both on the order of 0.2K or less but are likely to be surface emissivity/reflectivity uncertainties, residual atmospheric effects, or result from the methods of comparison
Note that the primary focus of this research was to have accurate SST measurements from satellites.
From Smith et al (1996)
The experimental work on the research vessel Pelican included a high spectral resolution Atmospheric Emitted Radiance Interferometer (AERI) which was configured to make spectral observations of the sea surface radiance at several view angles. Any measurement from the surface of course, is the sum of the emitted radiance from the surface as well as the reflected sky radiance.
Measurements taken on the research vessel also included:
- ocean salinity
- intake water temperature
- surface air temperature
- wind velocity
- SST within the top 15cm of depth
There was also independent measurement of the radiative temperature of the sea surface at 10μm with a Heimann broadband radiation thermometer “window” radiometer. And radiosondes were launched from the ship roughly every 3 hours.
Additionally, various other instruments took measurements from a flight altitude of 20km. Satellite readings were also compared.
The AERI measured the spectral distribution of radiance from 3.3μm to 20μm at 4 angles. Upwards at 11.5° from zenith, and downwards at 36.5°, 56.5° and 73.5°.
There’s a lot of interesting discussion of the calculations in their paper. Remember that the primary aim is to enable satellite measurements to have the most accurate measurements of SST and satellites can only really “see” the surface through the “atmospheric window” from 8-12μm.
Here are the wavelength dependent emissivity results shown for the 3 viewing angles. You can see that at the lowest viewing angle of 36.5° the emissivity is 0.98 – 0.99 in the 8-12μm range.
From Smith et al (1996)
Note that the wind speed doesn’t have any effect on emissivity at the more direct angle, but as the viewing angle moves to 73.5° the emissivity has dropped and high wind speeds change the emissivity considerably.
Henderson et al – 2003
Henderson et al (2003) is one of the many papers which consider the theoretical basis of how viewing angles change the emissivity and derive a model.
Just as an introduction, here is the theoretical variation in emissivity with measurement angle, versus “refractive index” as computed by the Fresnel equations:
The legend is refractive index from 1.20 to 1.35. Water, at visible wavelengths, has a refractive index of 1.33. This shows how the emissivity reduces once the viewing angle increases above 50° from the vertical.
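A minimal sketch of that Fresnel calculation, treating the water as a smooth dielectric with a single real refractive index (1.33, the visible-light value, used here purely as a placeholder) and averaging the two polarizations:

```python
import numpy as np

def fresnel_emissivity(theta_deg, n=1.33):
    """Unpolarized emissivity of a smooth dielectric: eps = 1 - R(theta)."""
    ti = np.radians(theta_deg)
    tt = np.arcsin(np.sin(ti) / n)                 # Snell's law, air -> water
    rs = (np.cos(ti) - n * np.cos(tt)) / (np.cos(ti) + n * np.cos(tt))
    rp = (n * np.cos(ti) - np.cos(tt)) / (n * np.cos(ti) + np.cos(tt))
    return 1 - 0.5 * (rs**2 + rp**2)               # average of the two polarizations

for angle in (0, 30, 50, 70, 85):
    print(angle, round(fresnel_emissivity(angle), 3))
# ~0.98 near vertical, still ~0.97 at 50°, then falling away sharply towards the horizon
```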
The essence of the problem of sea surface roughness for large viewing angles is shown in the diagram below, where multiple reflections take place:
Henderson and his co-workers compare their results with the measured results of Smith et al (1996) and also comment that at zenith viewing angles the emissivity does not depend on the wind speed, but at larger angles from vertical it does.
A quick summary of their model:
We have developed a Monte Carlo ray-tracing model to compute the emissivity of computer-rendered, wind-roughened sea surfaces. The use of a ray-tracing method allows us to include both the reflected emission and shadowing and, furthermore, permits us to examine more closely how these processes control the radiative properties of the surface. The intensity of the radiation along a given ray path is quantified using Stokes vectors, and thus, polarization is explicitly included in the calculations as well.
Their model results compare well with the experimental results. Note that the approach of generating a mathematical model to calculate how emissivity changes with wind speed and, therefore, wave shape is not at all new.
Water retains its inherent properties of emissivity regardless of how it is moving or what shape it is. The theoretical challenge is handling the multiple reflections, absorptions, re-emissions that take place when the radiance from the water is measured at some angle from the vertical.
The best up to date measurements of ocean emissivity in the 8-14 μm range are 0.98 – 0.99. The 8-14 μm range is well-known because of the intense focus on sea surface temperature measurements from satellite.
From quite ancient data, the average emissivity of water across a very wide broadband range (1-100 μm) is 0.96 for water temperatures from 0-30°C.
The values from the ocean when measured close to the vertical are independent of wind speed and sea surface roughness. As the angle of measurement moves from the vertical around to the horizon the measured emissivity drops and the wind speed affects the measurement significantly.
These values have been extensively researched because the calculation of sea surface temperature from satellite measurements in the 8-14μm “atmospheric window” relies on the accurate knowledge of emissivity and any factors which affect it.
For climate models – I haven’t checked what values they use. I assume they use the best experimental values from the field. That’s an assumption. I’ve already read enough on ocean emissivity.
For energy balance models, like the Trenberth and Kiehl update, an emissivity of 1 doesn’t really affect their calculations. The reason, stated simply, is that the upwards surface radiation and the downward atmospheric radiation are quite close in magnitude. For example, the globally annually averaged values of both are 396 W/m² (upward surface) vs 340 W/m² (downward atmospheric).
Suppose the emissivity drops from 0.98 to 0.97 – what is the effect on upwards radiation through the atmosphere?
The upwards radiation has dropped by 4W/m², but the reflected atmospheric radiation has increased by 3.4W/m². The net upwards radiation through the atmosphere has reduced by only 0.6 W/m².
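The arithmetic behind those numbers can be checked with a rough sketch, using round values close to the ones quoted above (an assumed σT⁴ of about 400 W/m² for the surface and 340 W/m² of downward atmospheric radiation):

```python
BLACKBODY_UP = 400.0   # W/m^2, assumed sigma*T^4 for the surface (~290 K)
DOWNWARD     = 340.0   # W/m^2, globally averaged downward atmospheric radiation

def net_up(eps):
    emitted   = eps * BLACKBODY_UP        # surface emission
    reflected = (1 - eps) * DOWNWARD      # reflected portion of the downward flux
    return emitted + reflected

print(net_up(0.98) - net_up(0.97))
# ~0.6 W/m^2: the 4 W/m^2 drop in emission is mostly offset by 3.4 W/m^2 of extra reflection
```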
One of our commenters asked what value the IPCC uses. The answer is they don’t use a value at all because they summarize research from papers in the field.
Whether they do it well or badly is a subject of much controversy, but what is most important to understand is that the IPCC does not write papers, or perform GCM model runs, or do experiments – and that is why you see almost no equations in their many 1000's of pages of discussion on climate science.
For those who don’t believe the “greenhouse” effect exists, take a look at Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part One in the light of all the measured results for ocean emissivity.
On Another Note
It’s common to find claims on various blogs and in comments on blogs that climate science doesn’t do much actual research.
I haven’t found that to be true. I have found the opposite.
Whenever I have gone digging for a particular subject, whether it is the diurnal temperature variation in the sea surface, diapycnal & isopycnal eddy diffusivity, ocean emissivity, or the possible direction and magnitude of water vapor feedback, I have found a huge swathe of original research, of research building on other research, of research challenging other research, and detailed accounts of experimental methods, results and comparison with theory and models.
Just as an example, in the case of emissivity of sea surface, at the end of the article you can see the first 30 or so results pulled up from one journal – Remote Sensing of the Environment for the search phrase “emissivity sea surface”. The journal search engine found 348 articles (of course, not every one of them is actually about ocean emissivity measurements).
Perhaps it might turn out to be the best journal for this subject, but it’s still just one journal.
Broadband reflectance and emissivity of specular and rough water surfaces, Sidran, Applied Optics (1981)
In situ angular measurements of thermal infrared sea surface emissivity—validation of models, Niclòs, Valor, Caselles, Coll & Sànchez, Remote Sensing of Environment (2005)
Measurement of the Sea Surface Emissivity, Konda, Imasato, Nishi and Toda, Journal of Oceanography (1994)
Observations of the Infrared Radiative Properties of the Ocean—Implications for the Measurement of Sea Surface Temperature via Satellite Remote Sensing, Smith, Knuteson, Revercomb, Feltz, Nalli, Howell, Menzel, Brown, Brown, Minnett & McKeown, Bulletin of the American Meteorological Society (1996)
The polarized emissivity of a wind-roughened sea surface: A Monte Carlo model, Henderson, Theiler & Villeneuve, Remote Sensing of Environment (2003)
Note 1: The upward radiation from the surface is the sum of three contributions: (i) direct emission of the sea surface, which is attenuated by the absorption of the atmospheric layer between the sea surface and the instrument; (ii) reflection of the downwelling sky radiance on the sea, attenuated by the atmosphere; and (iii) the upwelling atmospheric radiance emitted in the observing direction.
So the measured radiance can be expressed (in the usual notation) as:

L_measured = τ·ε·B(T_s) + τ·(1 − ε)·L_sky↓ + L_atm↑

where τ is the atmospheric transmittance, B(T_s) the blackbody radiance at the surface temperature, L_sky↓ the downwelling sky radiance, L_atm↑ the upwelling atmospheric radiance, and the three terms on the right are each of the three contributions noted in the same order.
Note 2: 1/10th of the search results returned from one journal for the search term “emissivity sea surface”:
Remote Sensing of Environment - search results
|
<urn:uuid:2853d08f-5639-42a5-8a44-704d5ccd4cce>
|
CC-MAIN-2013-20
|
http://scienceofdoom.com/2010/12/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.926786
| 5,304
| 3.578125
| 4
|
[
"climate"
] |
{
"climate": [
"2°c",
"atmospheric circulation",
"climate change",
"energy balance",
"ipcc"
],
"nature": []
}
|
{
"strong": 3,
"weak": 2,
"total": 5,
"decision": "accepted_strong"
}
|
In the late 1980s, a study by NASA and the Associated Landscape Contractors of America (ALCA) resulted in excellent news for homeowners and office workers everywhere. The study concluded that common houseplants such as bamboo palms and spider plants not only make indoor spaces more attractive, they also help to purify the air!
The study was conducted by Dr. B.C. Wolverton, Anne Johnson, and Keith Bounds in 1989. While it was originally intended to find ways to purify the air for extended stays in orbiting space stations, the study proved to have implications on Earth as well.
Newer homes and buildings, designed for energy efficiency, are often tightly sealed to avoid energy loss from heating and air conditioning systems. Moreover, synthetic building materials used in modern construction have been found to produce potential pollutants that remain trapped in these unventilated buildings.
The trapped pollutants result in what is often called the Sick Building Syndrome. With our ultra modern homes and offices that are virtually sealed off from the outside environment, this study is just as important now as when it was first published.
While it's a well known fact that plants convert carbon dioxide into oxygen through photosynthesis, the NASA/ALCA study showed that many houseplants also remove harmful elements such as trichloroethylene, benzene, and formaldehyde from the air.
NASA and ALCA spent two years testing 19 different common houseplants for their ability to remove these common pollutants from the air. Of the 19 plants they studied, 17 are considered true houseplants, and two, gerbera daisies and chrysanthemums, are more commonly used indoors as seasonal decorations.
The advantage that houseplants have over other plants is that they are adapted to tropical areas where they grow beneath dense tropical canopies and must survive in areas of low light. These plants are thus ultra-efficient at capturing light, which also means that they must be very efficient in processing the gasses necessary for photosynthesis. Because of this fact, they have greater potential to absorb other gases, including potentially harmful ones.
In the study NASA and ALCA tested primarily for three chemicals: Formaldehyde, Benzene, and Trichloroethylene. Formaldehyde is used in many building materials including particle board and foam insulations. Additionally, many cleaning products contain this chemical. Benzene is a common solvent found in oils and paints. Trichloroethylene is used in paints, adhesives, inks, and varnishes.
While NASA found that some of the plants were better than others for absorbing these common pollutants, all of the plants had properties that were useful in improving overall indoor air quality.
NASA also noted that some plants are better than others in treating certain chemicals.
For example, English ivy, gerbera daisies, pot mums, peace lily, bamboo palm, and Mother-in-law's Tongue were found to be the best plants for treating air contaminated with Benzene. The peace lily, gerbera daisy, and bamboo palm were very effective in treating Trichloroethylene.
Additionally, NASA found that the bamboo palm, Mother-in-law's tongue, dracaena warneckei, peace lily, dracaena marginata, golden pathos, and green spider plant worked well for filtering Formaldehyde.
After conducting the study, NASA and ALCA came up with a list of the most effective plants for treating indoor air pollution.
The recommended plants can be found below. Note that all the plants in the list are easily available from your local nursery.
For an average home of under 2,000 square feet, the study recommends using at least fifteen samples of a good variety of these common houseplants to help improve air quality. They also recommend that the plants be grown in six inch containers or larger.
Here is a list of resources for more information on this important study:
PDF files of the NASA studies related to plants and air quality:
|
<urn:uuid:39285448-3be3-4fb0-8cc6-d3a6c701e3cf>
|
CC-MAIN-2013-20
|
http://www.cleanairgardening.com/houseplants.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.953078
| 864
| 3.609375
| 4
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
There's No Such Thing as Ethical Oil (or Nuclear Power)
Canada is digging itself a dirty energy destiny in the Athabasca oil sands.
By Evan O'Neil | March 22, 2011
After the BP oil spill in the Gulf of Mexico and now the nuclear meltdown at the Fukushima reactors in Japan, it should be clear that oil and nuclear power are not benign forces in our world. Both are toxic, dirty, and insecure forms of energy. It is thus astonishing that the Canadian energy industry proposes combining the two.
The boreal forest of northern Alberta sits atop one of the largest fossil fuel deposits in the world: the Athabasca bituminous sands. Energy insiders call it oil sands, while environmentalists prefer tar sands—each side seeing what it wants. At room temperature, raw bitumen has the consistency of asphalt and won't flow through a pipeline without being diluted or upgraded into synthetic crude oil.
Underground, the bitumen exists in a mixture with sand and clay, and there are two techniques for extracting it. Surface mines have been the predominant method since commercial production began in the 1960s. At the Suncor Energy mine, for example, the native forests, topsoil, and muskeg bog were cleared, and 50 meters of "overburden" earth was removed to expose a tar sand deposit itself about 50 meters thick. The bitumen is mined 24 hours per day with massive electric shovels that fill dump trucks three stories tall.
The dump trucks haul the tar sands out of the mine to a separation unit where it is mixed with hot water. The bitumen floats to the top and is skimmed off, while the wastewater slurry—containing sand, clay, salts, polycyclic aromatic hydrocarbons, arsenic, naphthenic acid, and other substances—is pumped into large, open-air tailings ponds where it is left to evaporate. The problem with tailings ponds has been that the finest clay particles take decades to settle into sediment. To accelerate reclamation of the land, some companies are now experimenting with adding polyacrylamide flocculant, in a process similar to municipal waste treatment, to help separate the solids from the water.
Mining for deeper deposits is uneconomical, so the industry also employs in situ drilling. In a typical setup, two horizontal wells are drilled, one above the other. The top well injects steam into the sands, melting out the bitumen, which is then pumped out through the lower well in a process called Steam-Assisted Gravity Drainage, or SAGD. The well pads of these SAGD installations dot the remote boreal landscape in a network of roads, pipelines, and seismic cutlines.
Mining and in situ operations both consume a lot of energy. The advantage of in situ is that the land is much less disturbed, making it easier to return it to a natural state. SAGD also separates the sand and bitumen below the surface, requiring significantly less infrastructure. Of the total Athabasca deposit, 80 percent is thought to be recoverable through in situ and 20 percent through mining.
SAGD requires a lot of natural gas to make steam. The ratio of steam injected to oil extracted is what determines a project's carbon emissions as well as its profitability. Mining and SAGD together consume hundreds of billions of cubic feet of natural gas per year, a substantial fraction of Canada's entire demand.
That's where nuclear power enters the picture. As bitumen production in Alberta is slated to expand over the next several decades, gas production will be in decline. This means that eventually producers will have to either burn part of their bitumen, thus eating into their profits, or find new power sources to generate heat and electricity.
Nuclear power has been mooted to fill this gap. Japan, of course, turned to nuclear power during the 1970s oil shocks to offset its dependence on foreign oil. Now, in an ironic twist, Canada is considering nuclear power so that it can expand its oil exports. Most of the tar sands oil is sold south of the border through a pipeline network to meet American demand, while Canada still imports foreign oil to its eastern provinces.
One has to wonder why Canada would burn so much of its natural gas, a relatively clean fossil fuel, to extract an even dirtier energy. The answer is, of course, to make money. Most of the world's oil is controlled by national oil companies, making Canada one of the only remaining patches where the energy industry can really play in the sandbox.
And the Athabasca deposit is a big sandbox. The area is roughly the size of New York State. It contains an estimated 1.7 trillion barrels of bitumen, of which about 170 billion barrels are extractable with current technologies. Multiply by $100 per barrel and pretty soon we're talking real money.
But it is capital intensive to slurp these heavy, unconventional dregs of the global oil barrel. Hundreds of billions of dollars have already been invested in the Alberta tar sands, where it takes an oil price of $65 to 85 per barrel to recuperate costs. As recently as 2009, oil was back down in the $40 range, slowing or canceling many projects.
So is tar sands oil dirty oil? Of course it is. All oil is dirty. But is it dirtier than other sources? On average, yes. According to Cambridge Energy Research Associates, oil from Alberta tends to be about 5 to 15 percent more polluting than the average oil consumed in the United States when compared on a well-to-wheels basis. Twenty-five percent of oil's emissions occur during the production phase, while 75 percent comes from combustion in a vehicle.
Industry insiders often repeat the following argument: It's the consumer's fault, whether they mean car owners or America in general. "If you would stop driving so much, we would stop digging up all this oil and pumping it in your direction," goes the typical line. Then whenever the United States wavers in its affection for Canadian energy, the argument becomes a threat: "We'll just sell it to the Chinese instead."
This argument is nonsense on the individual level. American consumers aren't presented with a significant choice at the pump. They get to decide between three octane ratings with maybe a dash of dubiously efficient ethanol in the blend. The only real power a person has to reduce oil consumption is in deciding where to live. Ditching the car and moving to a dense, pedestrian- and bicycle-friendly community with access to mass transit is the most effective solution. For those who cannot or do not wish to move, the alternative is to work through the local political process to redesign your community.
Most families haven't made the carless choice yet. Instead the typical response is to buy a bigger car when gasoline is cheap and a more efficient one when the price goes back up. Without a price floor of some sort, America will never break its addiction to oil, foreign or domestic. A strong gasoline tax could serve as a de facto price floor if it were set high enough. Unfortunately the United States has chosen to set the bar very low: Gas tax is a pittance relative to the price of gasoline, and it isn't indexed to inflation, meaning the value has actually declined over the last several decades.
It is an abdication of political responsibility to argue that an unorganized and reactionary collective such as consumers is at fault for oil consumption. The essence of ethics is whether our political institutions can make choices that are in the interest of all affected stakeholders, local and global, regardless of the political cost.
Seen in this light, can the Canadian and Albertan governments be trusted not to morph into petrostates?
Sadly the outlook is bleak. The federal environment minister recently declared that Canadian oil is "ethical oil." This concept is drawn straight from the title of a book by conservative political activist Ezra Levant, in which he argues that Canada's oil is morally superior to oil from countries with poor human rights records. Even if Canada and the United States were to boycott imports from all countries they consider problematic, an option neither is willing to consider, oil would still remain a globally priced and traded commodity and the benefits of its consumption would continue to flow to unsavory dictators.
On the provincial level, the Alberta government is of the opinion that the tar sands "should" be developed further, despite the fact that a panel recently found that its water and environmental monitoring program has been inadequate. Alberta's Energy Resources Conservation Board, its regulatory agency for energy development, has one of the more Orwellian names one can imagine.
Bullish development of the oil sands has also contributed to Canada's violation of its Kyoto Protocol commitments. The goal was to decrease emissions 6 percent below 1990 levels. Instead, Canadian emissions have increased by a whopping 24 percent, in great measure due to tar sands expansion. Tar sands emissions now account for about 5 percent of Canada's total.
Alberta did manage to enact one innovative policy that few other jurisdictions will even consider: a carbon tax of $15 per ton. This move should be applauded, but it is unfortunately accompanied by billion-dollar investments in the unproven technologies of carbon capture and sequestration—an expensive crutch to help the fossil fuel industry limp into the future—with minimal focus on renewable energy research, development, and deployment.
Another concern for Canada's energy future is that the royalty regime [PDF] for tar sands leases is too weak. The rate is set at 1 percent until a project becomes profitable, and then it jumps to 25 percent, which is still low compared to some countries. Alberta risks squandering an opportunity to build its Heritage sovereign wealth fund while the people's resources disappear into private pockets, leaving the province without financial means to transition to a cleaner economy.
Is America being a good neighbor in this transaction, or merely abetting a fellow oil junkie? The proposed Keystone XL extension of the pipeline network that carries Albertan oil to the United States is currently under consideration, and final approval falls to the U.S. Department of State because of the international border crossing. It was announced on March 15 that a supplemental environmental impact statement will be issued, followed by a new public comment period, to determine whether the project is "in the U.S. national interest."
Buying more energy from a friendly neighbor appears like a good idea on the surface. But while energy security has the ring of a robust and consistent concept, it is actually a relative one. It wears a false halo of military necessity even during peacetime. Supplier countries want security of demand, and consumer countries want security of supply. What the oil industry really worries about is running out of business. "Producers who seek to maximize long-term revenue will want to maintain oil prices stable at the highest price that does not induce substantial investment in substitutes," writes technology and innovation expert Philip Auerswald.
Should we be worried about today's high prices? Auerswald doesn't think so. High prices merely hasten the inevitable transition to a post-oil economy. Estimations vary on the timing of peak oil, but the finitude of the resource is undisputed and so is its eventual depletion to a level where the cost of extraction will equal the value of the product.
Saying that America should be open to more tar sands oil is basically just another version of the "drill here, drill now" argument for tapping the Arctic National Wildlife Reserve in Alaska. The Obama administration, it should be noted, has been a booster of domestic production, and production has gone up in the last five years. Canadian production has also increased significantly in the last decade, mostly from growth in the tar sands. But after two major oil price spikes during the same period, in 2008 and again now, it should be crystal clear that domestic and Canadian production growth doesn't control the global price, and that a much better strategy lies in finding replacement technologies and actively reducing demand.
But this is a tough sell when you think of things like how many Caterpillar 797B dump trucks are needed to mine the tar sands, and how the parts are manufactured all over the United States, by quite a few workers in quite a few Congressional districts. Add to that the public's resistance to raising gasoline taxes, and it becomes quite easy to see why it's politically difficult to enact bold and necessary energy policy. The hundreds of millions of lobbying dollars the oil industry spends certainly don't help our Senators see things clearly.
American politicians have been saying we need to get off foreign oil for half a century. Canadian energy executives and politicians bristle when they hear things like that. They feel that somehow Canadian oil shouldn't be considered foreign because it comes from North America. But the ethical thing is for both countries to pursue energy independence based on clean, renewable sources that don't pollute the environment, harm human health, and risk massive destabilization of the global climate.
As it stands, Canada has become a climate change ostrich with its head in the oil sands.
A version of this article first appeared in the Carnegie Ethics Online column.
|
<urn:uuid:3f3dfde7-636c-4938-b522-42635a6d7bb4>
|
CC-MAIN-2013-20
|
http://www.policyinnovations.org/ideas/commentary/data/000211
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.95866
| 2,767
| 2.90625
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon capture",
"climate change",
"renewable energy"
],
"nature": [
"conservation"
]
}
|
{
"strong": 4,
"weak": 0,
"total": 4,
"decision": "accepted_strong"
}
|
Oct. 6, 2010 - Doppler weather radar will significantly improve forecasting models used to track monsoon systems in and around India, according to a research collaboration including Purdue University, the National Center for Atmospheric Research and the Indian Institute of Technology Delhi.
Dev Niyogi, a Purdue associate professor of agronomy and earth and atmospheric sciences, said modeling of a monsoon depression track can have a margin of error of about 200 kilometers for landfall, which can be significant for storms that produce as much as 20-25 inches of rain as well as inland floods and fatalities.
"When you run a forecast model, how you represent the initial state of the atmosphere is critical. Even if Doppler radar information may seem highly localized, we find that it enhances the regional atmospheric conditions, which, in turn, can significantly improve the dynamic prediction of how the monsoon depression will move as the storm makes landfall," Niyogi said. "It certainly looks like a wise investment made in Doppler radars can help in monsoon forecasting, particularly the heavy rain from monsoon processes."
Niyogi, U.C. Mohanty, a professor in the Centre for Atmospheric Sciences at the Indian Institute of Technology, and Mohanty's doctoral student, Ashish Routray, collaborated with scientists at the National Center for Atmospheric Research and gathered information such as radial velocity and reflectivity from six Doppler weather radars that were in place during storms. Using the Weather Research and Forecasting Model, they found that incorporating the Doppler radar-based information decreased the error of the monsoon depression's landfall path from 200 kilometers to 75 kilometers.
Monsoons account for 80 percent of the rain India receives each year. Mohanty said more accurate predictions could better prepare people for heavy rains that account for a number of deaths in a monsoon season.
"Once a monsoon depression passes through, it can cause catastrophic floods in the coastal areas of India," Mohanty said. "Doppler radar is a very useful tool to help assess these things."
The researchers modeled monsoon depressions and published their findings in the Quarterly Journal of the Royal Meteorological Society. Future studies will incorporate more simulations and more advanced models to test the ability of Doppler radar to track monsoon processes. Niyogi said the techniques and tools being developed also could help predict landfall of tropical storm systems that affect the Caribbean and the United States.
The National Science Foundation CAREER program, U.S. Agency for International Development and the Ministry of Earth Sciences in India funded the study.
- A. Routray, U. C. Mohanty, S. R. H. Rizvi, Dev Niyogi, Krishna K. Osuri, D. Pradhan. Impact of Doppler weather radar data on numerical forecast of Indian monsoon depressions. Quarterly Journal of the Royal Meteorological Society, 2010; DOI: 10.1002/qj.678
|
<urn:uuid:1a3342d5-331c-4427-b687-dd2c4ff35a5c>
|
CC-MAIN-2013-20
|
http://www.sciencedaily.com/releases/2010/10/101005171044.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.920835
| 630
| 3.34375
| 3
|
[
"climate"
] |
{
"climate": [
"monsoon"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
To create a true net-zero building, one that literally generates as much or more energy than it consumes, is no easy task. Still, it’s a task that makes good business sense. After all, buildings consume a huge amount of energy, which cuts into profit margins. This simple equation finally hit home with Hines, the international real estate firm, and equity partner J.P. Morgan Asset Management. The two are partnering to build a new 13-story, 415,000-square-foot building at La Jolla Commons in San Diego that will become the nation’s largest carbon-neutral office building to date.
In order to achieve this rare feat, the building will utilize a combination of high-performance building design, directed biogas and on-site fuel cells that annually will generate more electricity than tenants will use. The fuel cells, made by Bloom Energy, will generate approximately 5.0 million kWh of electricity annually, which is roughly equivalent to the electricity required to power 1,000 San Diego homes. Methane needed to power the fuel cells will be acquired from carbon-neutral sources, such as landfills and wastewater plants, and placed into the national natural gas pipeline system. The building's exterior is predominately a glass curtainwall system incorporating highly efficient, insulated, double-paned glass with a clear, low-emissive coating.
Hines views its newest building, which will also contain a highly efficient under-floor air system, as a sort of ongoing R&D project. “Our net-zero project at La Jolla Commons gives us a great foundation for furthering the use of carbon-neutral technologies and fuels,” said Gary Holtzer, Hines’ global sustainability officer. ”Our next step is to adapt what we have learned and apply it to an existing urban property in a less temperate environment.”
Construction on the carbon neutral building began in April 2012 and completion is scheduled for mid-2014.
|
<urn:uuid:7e91d19f-7374-429a-9ae1-3082ddfa9f52>
|
CC-MAIN-2013-20
|
http://inhabitat.com/hines-real-estate-breaks-ground-on-nations-largest-carbon-neutral-building/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.932631
| 408
| 2.53125
| 3
|
[
"climate"
] |
{
"climate": [
"methane"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Date: April 2003
Why does DNA decompose when heated to too high a temperature?
Most of the hydrogen bonds that join the two DNA strands break from the
heat. With more heat, the actual covalent bonds between the nucleotides can break,
fragmenting a strand.
Very few organic molecules can withstand high temperatures for very
long. Heat provides the activation energy to break covalent bonds and
denature/destroy molecular structure. Proteins are particularly sensitive.
There are exceptions, as in the case of thermophilic bacteria. There are
cases of heat-shock proteins or "chaperone" proteins that can protect
regular proteins from breakdown - to a point.
DNA doesn't actually decompose when heated. It just melts. DNA comes in two mirror image strands that you could visualize as a zipper. The chemical bonds that make up each strand of the zipper are permanent joins, but the teeth that connect the two strands are much weaker and sensitive to heat. So when you expose DNA to heat (for instance, by boiling it), the two strands of the zipper separate. By very slowly cooling that denatured DNA, you could actually get the strands to reanneal or zip up again.
Christine Ticknor, Ph.D.
Ireland Cancer Center
Case Western Reserve University
DNA in its native form is composed of two molecules that have the
characteristic of being complementary such that one strand associates
with the other in a particular fashion, forming a "double helix." The
two molecules are stabilized in this structure due to noncovalent bonds,
mostly hydrogen bonds and van der Waals forces. These bonds are not
strong; accordingly, when the temperature rises sufficiently, the double
helix is said to "melt" into its two component molecules, and will
reassociate upon slow cooling under appropriate salt conditions. So,
the issue is not decomposition so much as disassociation upon exposure
to too high a temperature; and the reason for the disassociation is that
the energy input of the heat overcomes the ability of the noncovalent
bonds to keep the molecules together.
Heat "cooks" organic material, DNA or others, especially in the presence of oxyge. So
DNA can "burn" in the usual sense of the word forming CO2, N2 and other compounds. Even in the
absence of O2, heat can cause DNA, or other molecules, to change its structure either by losing
some degradation product, or just changing structure so that it cannot replicate. The
"technical" term for proteins is "denaturing", which is a catch-all phrase for "losing its
Chemical bonds have a certain stability based on the type of bonding. Some
bonds are quite strong. The following are covalent bond strengths:
Bond     Bond Strength (kJ/mole)        Bond     Bond Strength (kJ/mole)
Cl-Cl    239                            H-Cl     427
H-H      432                            C-H      413
N≡N      941                            N-H      391
Bond strength can be overcome by adding heat... The G-C and A-T pairings in DNA
are hydrogen bonds (non-covalent) and can also be overcome with heat.
These hydrogen bonds are about 71 kilojoules per mole... relatively weak (but
strong in large numbers) and can be broken and reformed by heating and
cooling. This is the secret behind the polymerase chain reaction.
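Because G-C pairs are held by three hydrogen bonds and A-T pairs by only two, short GC-rich strands hold together (and melt) at higher temperatures. A rough illustration of that idea is the Wallace rule, a crude estimate used for short primers; the sequences in the sketch below are hypothetical, not from any of the answers above.

```python
# Rough estimate of DNA melting temperature (Tm) with the Wallace rule:
# Tm ~ 2 C per A/T base + 4 C per G/C base. A crude approximation,
# usually quoted for primers of roughly 14-20 bases; sequences are made up.

def wallace_tm(sequence: str) -> float:
    seq = sequence.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

# GC-rich strands need more heat to separate than AT-rich ones.
for seq in ("ATGCGCATTAGCCGTA", "ATATATATATATATAT", "GCGCGCGCGCGCGCGC"):
    print(f"{seq}: Tm ~ {wallace_tm(seq):.0f} C")
```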
Update: June 2012
|
<urn:uuid:5987cb50-6a4d-4616-95e0-78f67f20e21d>
|
CC-MAIN-2013-20
|
http://newton.dep.anl.gov/askasci/mole00/mole00390.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.909621
| 746
| 3.203125
| 3
|
[
"climate"
] |
{
"climate": [
"co2"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Tim the Plumber wrote:
The idea that you can predict the climate based on its temperature behaviour between 1970 and 1998 is silly. Just as the absence of warming between 1998 and 2011 cannot utterly disprove AGW, the rise between 1970 and 1998 cannot 100% prove the theory that CO2 is a significant greenhouse gas at the levels we have today.
Nobody is trying to predict temperatures based on historical temperatures over the last 40 or so years. The predictions are based on our understanding of earth's climate over hundreds of millions of years, and particularly the last 4 million years of recurring ice ages. The climate, while complicated, has to obey a very simple basic physical rule: the energy coming in has, over time, to equal the energy going out. Change that simple relationship in some way and the temperature will change until such time as the equation is back in balance. It is certain that greenhouse gases reduce the amount of energy that leaves the earth.
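For what it's worth, the "energy in equals energy out" point can be made concrete with a standard back-of-the-envelope calculation. The sketch below is textbook physics, not anyone's climate model; the solar constant and albedo are the commonly cited round values.

```python
# Back-of-envelope planetary energy balance: absorbed sunlight must equal
# radiation back to space, S(1 - albedo)/4 = sigma * T_eff**4.

SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's orbit
ALBEDO = 0.30             # fraction of sunlight reflected
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4     # averaged over the sphere
t_eff = (absorbed / SIGMA) ** 0.25               # effective radiating temperature

print(f"Absorbed flux: {absorbed:.0f} W/m^2")
print(f"Effective temperature: {t_eff:.0f} K ({t_eff - 273.15:.0f} C)")
# About 255 K (-18 C); the observed surface average of ~288 K (+15 C) is
# higher because greenhouse gases slow the escape of infrared radiation.
```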
Northern Europe is having a wet and cool summer; it's just America that is having a long, hot and dry one.
No, my original statement is correct.
According to NOAA: http://www.ncdc.noaa.gov/sotc/global/2012/6
The Northern Hemisphere land and ocean average surface temperature for June 2012 was the all-time warmest June on record, at 1.30°C (2.34°F) above average.
The Northern Hemisphere average land temperature, where the majority of Earth's land is located, was record warmest for June. This makes three months in a row — April, May, and June — in which record-high monthly land temperature records were set. Most areas experienced much higher-than-average monthly temperatures, including most of North America and Eurasia, and northern Africa. Only northern and western Europe, and the northwestern United States were notably cooler than average.
Tim the Plumber wrote:
When thinking about such climatic events it is vital to have a sense of proportion and not see a tiny change over 3 decades as a reason to think that there will be a drastic "exponential" continuation of this.
The temperature changes over the last 3 decades simply confirm our basic understanding of the climate.
It is akin to having a graph of the speed of your car traveling along a highway. When the speed is 55mph your passenger is happy; when the graph plots up to 57 mph the passenger panics because the car is about to accelerate until the machine disintegrates at the sound barrier. When the graph shows a slowing to 53mph the panic is of the sudden stopping of the car and the traffic behind slamming into the back of the car.
No, it is more like being in a car where the cruise control is stuck and the speed just keeps increasing.
Climate varies quite a lot.
Because we live fairly short lives we do not remember the droughts of the dust bowl. We do not remember the medieval warm period. We do not remember the frost fairs on the frozen Thames.
This is why we maintain weather data, which shows that the current conditions are both worse and different.
We should take these dire warnings with a big pinch of salt.
Dire warnings should be assessed on their merits and action taken if necessary, but never ignored.
The sea level rose by 18cm last century; how many cities flooded because of this? This century looks like it could be twice as bad, maybe.
So as long as we split the sea level rises into 18 cm chunks it will be no problem?
I am reminded of camels transporting straw.
|
<urn:uuid:19f0ac3e-e73a-454a-9cba-d004cc56a732>
|
CC-MAIN-2013-20
|
http://www.envirolink.org/forum/viewtopic.php?f=3&t=19521&start=60
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.954595
| 746
| 2.703125
| 3
|
[
"climate"
] |
{
"climate": [
"co2",
"greenhouse gas"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
OAKLAND OUTDOORS: Oakland County residents can visit the ‘Arctic Circle’ right in Royal Oak
The date of the winter solstice on Dec. 21 draws near.
What better time than now to plan an adventuresome trek into the Arctic — a journey to the cryosphere, a land of ice, snow and frozen sea water. The Arctic is a landscape of mountains, fjords, tundra and beautiful glaciers that spawn crystal-colored icebergs. It is the land of Inuit hunters, polar bears and seals and is rich with mysteries that science is still working to unravel. Those wishing to reach the Arctic must travel north of the Arctic Circle.
And just where is the Arctic Circle?
The climatologists at the National Snow and Ice Data Center define the Arctic Circle as the imaginary line that marks the latitude at which the sun does not set on the day of the summer solstice and fails to rise on the day of the winter solstice, a day that is just around the corner. Arctic researchers describe the circle as the northern limit of tree growth. The circle is also defined as the 10 Degree Celsius Isotherm, the zone at which the average daily summer temperature fails to rise above 50 degrees Fahrenheit.
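As a small aside, the latitude of that imaginary line falls straight out of Earth's axial tilt; a one-line calculation (using the commonly cited present-day tilt, which drifts slowly over millennia) gives it:

```python
# Poleward of 90 degrees minus the axial tilt, the sun stays up all day at
# the summer solstice and never rises at the winter solstice.
AXIAL_TILT_DEG = 23.44
arctic_circle_lat = 90.0 - AXIAL_TILT_DEG
print(f"Arctic Circle: about {arctic_circle_lat:.2f} degrees N")   # ~66.56 N
```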
Polar bears are hungry in this foreboding landscape where the mercury can plunge quickly to 60 degrees below zero and winds are ferocious.
Perhaps it is time to stop reading my words, bundle up the kids and hike into the Arctic. Why not today! Children will almost certainly see seals just yards away and, if the timing is right, go nose to nose with a mighty polar bear that may swim their way and give a glance that could be interpreted as a one-word question: “Tasty?”
And here is the rest of the story, a special trail tale that takes visitors to a one-of-a-kind place.
I am amazed at how many residents of Oakland County have yet to hike through an acrylic tunnel known as the Polar Passage — a public portal to the watery world beneath the land above the Arctic Circle.
The 70-foot long tunnel is the highlight of a 4.2-acre living exhibit at the Detroit Zoo, the Arctic Ring of Life. And a hike through that exhibit and underwater viewing tunnel brings encounters with three polar bears, a gray seal, two harbor seals, one harp seal and three arctic fox.
A visit to the Arctic Ring of Life is more than a fun hike. It opens our eyes to the life of the Inuit people and introduces visitors to a fragile world in danger from events that can no longer be denied: climate change, global warming and rising sea waters.
Upon entering the park, check a map for the location. It’s easy to find and is rich with historical, cultural and natural information about the Inuit, the Arctic people. Before Europeans arrived, they had never even heard the word “Eskimo.” The arrival of the pale-skinned strangers in the early 1800s was not good news for the Inuit: the Europeans carried foreign diseases, and missionaries followed who sadly enticed the trusting Inuit to give up their own religion and become Christians. The native people were encouraged, and sometimes forced, to abandon their traditional lifestyles and live in the village. The jury may still be out, but some historians have claimed that the Inuit were eager to embrace the new ways to forgo the harsh reality of nomadic life. Today, the Arctic Ring of Life’s Nunavut Gallery gives visitors a glimpse of a disappearing culture torn between the old and the new, but still rich with tradition, folk art, spirituality and creativity.
The Inuit once existed almost exclusively on meat and fat, with only limited availability of seasonal plants. The Inuit hunted for survival and considered it disrespectful to hunt for sport. And as I dug deeper into their history to prepare for my tunnel trek, I discovered that the metal for their early tools was chipped flake by flake from large meteorites and then pounded into tools like harpoon points.
The fascinating relationship between the Inuit, their environment and creatures that dwell above the Arctic Circle is more deeply understood when visitors trek through the highlight of the exhibit, the polar passage tunnel. A polar bear on a tundra hill, built high to afford the bears a wide range of smells, may be sniffing the air for zoo visitors.
The tunnel has acrylic walls four inches thick, is 12 feet wide, eight feet high and offers great views when a seal or polar bear swims. About 294,000 gallons of saltwater surround visitors.
Some wonder why the polar bears do not eat the seals. They can’t. A Lexan wall separates the species.
Be sure to take the time to explore the Exploration Station of the Arctic Ring of Life. It contains many of the accoutrements of a working research station complete with telemetry equipment, computers and displays of snowshoes and parkas from the arctic. Portholes provide views of the seals and bears.
Visitors may want to do what I did — enter through the tunnel a second time and then read all the well placed interpretive signs above ground. Stories of the hunters and the hunted await and include myth-busting facts on mass suicides of lemmings that have been alleged to jump off cliffs into the sea. And be sure to let a child put his or her foot in the imprint of the polar bear track on the pathway. And, of course, save time to hike the rest of the zoo. Cold weather is a perfect time to explore minus the summer crowds.
Jonathan Schechter’s column appears on Sundays. Look for his Earth’s Almanac blog at www.earthsalmanac.blogspot.com Twitter: OaklandNature E-mail:[email protected].
For more information about the Detroit Zoo’s hours, exhibits and special events, visit detroitzoo.org. The zoo is located at 8450 West 10 Mile Road in Royal Oak. No additional fee to visit Arctic Ring of Life.
|
<urn:uuid:0282f05d-a297-4d07-8241-fec95a39233b>
|
CC-MAIN-2013-20
|
http://www.theoaklandpress.com/articles/2012/12/07/life/doc50c240ecdd482448084195.txt?viewmode=default
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.925567
| 1,668
| 2.765625
| 3
|
[
"climate"
] |
{
"climate": [
"climate change",
"cryosphere",
"global warming"
],
"nature": []
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
Name ID 1199
The first professional hunters came in 1913. They found the wildlife plentiful, especially the lions, but saw no elephants. Seven years later, an American arrived in a strange new contraption known as a Ford motor-car and news of the wonders of the Serengeti had reached the outside world. Because the hunting of lions made them so scarce (they were considered 'vermin'), it was decided to make a partial Game Reserve in the area in 1921 and a full one in 1929. With the growing awareness of the need for conservation, it was expanded and upgraded to a National Park in 1951. Eight years later the Ngorongoro Conservation Area was established in the south-east as a separate unit.
Arusha: A Brochure of the Northern Province and its Capital Town
Page Number: 13-15-17
Extract Date: 1929
It is safe to say that Tanganyika holds a front place among our East African Colonies for the number and variety of its game animals. The belt from Tanga through to Lake Victoria is where game is most numerous. There is an abundance of the commoner antelope, and in certain parts the rarer species such as the Greater and Lesser Kudu, Gerenuk, etc., are still fairly plentiful. Big game like the Elephant, Rhinoceros, Lion and Buffalo, all of which hold for the hunter a new thrill and experience, are to be found in this area in such a variety of country and cover that the hunting of no two animals is ever alike.
Here the hunter passes through most interesting country; Kilimanjaro with its snow-capped dome, running streams and dense forests, across the plains to the Natron Lakes and the Great Rift Wall with its volcanic formation and on to the great Crater, Ngorongoro. In his travels he will come into contact with some of the most interesting and picturesque tribes that inhabit Africa such as the Masai, Wambulu, etc., each with their own quaint customs and histories.
The Ngorongoro Crater, the greatest crater in the world, measuring approximately 12 miles in diameter, seen from the Mbulu side, is a delight to the eye with its teeming herds of game; Wildebeest alone running into tens of thousands. This scene conveys to one the idea of a great National Park. Nature has provided the crater with a precipitous rock fence for the most part and with lakes and streams to slake the thirst of the great game herds which inhabit it. The unalienated part of the crater is now a complete game reserve in which a great variety of game is to be found such as Rhinoceros, Hippopotamus, Lion, and all the smaller fry. The Elephant although not in the crater is to be found in the forests nearby.
The Serengetti Plains lying away to the northwest of the crater holds its full share of animal life and here the sportsman has the widest possible choice of trophies. The Lion in this area holds full sway and is still to be seen in troops of from ten to twenty. Recently, Serengetti and Lion pictures have become synonymous. The commoner species of game are here in abundance and the plains are second only to the crater for game concentration. The country lying between the Grumeti River - Orangi River and the Mbalangeti from Lake Victoria to the Mou-Kilimafetha Road has recently been declared a game reserve.
Game animals that inhabit the northern area are well protected and their existence is assured to posterity by the great game sanctuaries and regulations which govern the hunting or photographing of game.
In the Northern area there are six complete reserves and two closed areas. These are as follows:
(2) Mount Meru.
(3) Lake Natron
(4) Northern Railway.
The closed areas are :
Pienaar's Heights, near Babati and Sangessa Steppe in the Kondoa district. The boundaries for these are laid down in the Game Preservation Ordinance No. 41 of 1921. There are, however, vast areas open to the hunter and the abovementioned sanctuaries do not in any way detract from the available sport which the Northern Tanganyika has to offer.
The following game licences are now in force (Shillings)
:Visitor's Full Licence - 1500
Visitor's Temporary Licence (14 days) - 200
Resident's Full Licence - 300
Resident's Temporary Licence (14 days) - 60
Resident's Minor Licence - 80
Giraffe Licence - 150
Elephant Licence 1st. - 400
2nd. - 600
To hunt the Black Rhinoceros in the Northern Province it is now necessary to hold a Governor's Licence, the fee for which is 150/-. This entitles the holder to hunt one male Rhinoceros. Elephant, Giraffe, and Rhinoceros Licences may only be issued to holders of full game licences.
Now that the Railway is through to Arusha it is not too much to hope that with the assistance of a healthy public opinion the Sanya Plains may become restocked with game which would be a great source of interest and an attraction to the traveller visiting these parts.
Extract Date: 1931
Herne, Brian White Hunters: The golden age of African Safaris
Page Number: 375
Extract Date: 1965
Safari hunting in East Africa was forever changed by the masterly blueprint of Brian Nicholson, a former white hunter turned game warden. The disciple and successor of C.I. P. Ionides, the "Father of the Selous game reserve," Nicholson conceived a plan for administering Tanzania's expansive wildlife regions. In 1965 he changed most of the vast former controlled hunting areas, or CHAs, into hunting concessions that could be leased by outfitters from the government for two or more years at a time. Nicholson also demarcated the Selous game reserve's 20,000 square miles of uninhabited country into 47 separate concessions. Concessions were given a limited quota of each game species, and outfitters were expected to utilize quotas as fully as possible, but not exceed them.
Nicholson's plan gave outfitters exclusive rights over hunting lands, providing powerful incentives for concession holders to police their areas, develop tracks, airfields, and camps, and, most importantly, preserve the wild game. When the system was put into effect, it was the larger outfitting organizations - safari outfitters who could muster the resources to bid and who had a clientele sufficient to fulfill the trophy quotas Nicholson had set (done in order to provide government revenue by way of fees for anti-poaching operations, development, and research) - that moved quickly to buy up the leases on the most desirable blocks of land. Smaller safari companies who could not compete on their own banded together and formed alliances so that they, too, could obtain hunting territories.
Herne, Brian White Hunters: The golden age of African Safaris
Page Number: 389
Extract Date: 1973 Sep 7
By the end of 1973 Kenya was the sole remaining tourist destination in East Africa. While the neighboring country of Uganda was still in the throes of military anarchy, Tanzania surprised the world on September 7 by issuing an overnight ban on all hunting and photographic safaris within its territory. Government authorities moved quickly to seize and impound foreign-registered Land Cruisers, supply trucks, minibuses, aircraft, and equipment.
The stunned collection of safari clients as well as sundry mountain climbers, bird-watchers, and beachcombers who had been visiting the country at the time of the inexplicable edict were summarily escorted to Kilimanjaro airport outside of Arusha to await deportation. The residue of tourists stranded without flights were trucked to the northern town of Namanga where they were left on the dusty roadside to cross into Kenya on foot. All tourist businesses, including the government-owned Tanzania Wildlife Safaris, were closed down. No government refunds were ever made to tourists or to foreign or local safari outfitters
Africa News Online
Extract Date: 2000 June 5
Panafrican News Agency
Frequent acrimony, currently depicting the relationship between game hunting companies and rural communities in Tanzania, will be a thing of the past after the government adopts a new wildlife policy.
Designated as wildlife management areas, the communities will benefit from the spoils of game hunting, presently paid to local authorities by companies operating in those areas.
The proposed policy seeks to amend Tanzania's obsolete Wildlife Act of 1975, and, according to the natural resources and tourism minister, Zakia Meghji, 'it is of utmost priority and should be tabled before parliament for debate soon'.
She said the government would repossess all hunting blocks allocated to professional hunters and hand them over to respective local authorities.
In turn local governments, together with the communities, would be empowered to allocate the hunting blocks to whichever company they prefer to do business with.
'Guidelines of the policy are ready and are just being fine-tuned,' she said.
Communities set to benefit from this policy are chiefly those bordering rich game controlled areas and parks. They include the Maasai, Ndorobo, Hadzabe, Bahi, Sianzu and Kimbu in northeastern Tanzania.
Members of these communities are often arrested by game wardens and fined for trespassing on game conservation areas. As a result, they have been extremely bitter about being denied access to wildlife resources, which they believe, naturally, belong to them.
Under the new policy, Meghji said, the government will ensure that people undertake increased wildlife management responsibilities and get benefits to motivate them in the conservation of wildlife resources.
The East African
Extract Author: John Mbaria
Extract Date: February 4, 2002
KENYA COULD end up losing 80 per cent of its wildlife species in protected areas bordering Tanzania to hunters licensed by the Tanzanian government.
The hunters have been operating for about a decade in a section of the migratory route south from Kenya to Tanzania's Serengeti National Park.
They shoot large numbers of animals as they move into the park during the big zebra and wildebeest migration between July and December.
There are fears that the Maasai Mara National Park and most of Kenya's wildlife areas bordering Tanzania could lose much of their wildlife population, threatening the country's Ksh20 billion ($256 million) a year tourism industry.
Kenya banned hunting in 1977 but the sport is legal in Tanzania, where it is sold as "Safari Hunting."
"The product sold is really the experience of tracking and killing animals, the services that go with this and the prestige of taking home the trophies," says a policy document from the Tanzania Wildlife Corporation (Tawico).
Tanzania wildlife officials said wild animals that cross over from Kenya are hunted along their migratory routes in the Loliondo Game Controlled Area in Ngorongoro district of Arusha region, 400 km northwest of Arusha. The area was designated by the British colonial power as a sports hunting region for European royalty.
The officials said the area is now utilised by a top defence official from the United Arab Emirates (UAE), trading as Ortelo Business Company (OBC), through a licence issued in 1992 by former Tanzania President Ali Hassan Mwinyi. The permit allows the company to hunt wild game and trap and take some live animals back to the UAE.
Safari hunting earns big money for the Tanzanian government, which charges each hunter $1,600 a day to enter the controlled area.
A hunter is also required to pay fees for each kill, with an elephant costing $4,000, a lion and a leopard $2,000 each and a buffalo $600. The document has no quotation for rhinos.
The sport is organised in expeditions lasting between one and three weeks in the five hunting blocks of Lake Natron Game Controlled Area, Rungwa Game Reserve, Selous Mai, Selous U3 and Selous LU4.
For the period the hunters stay in each of the hunting blocks, they pay between $7,270 and $13,170 each. Part of this money is shared out among the many Ujamaa villages, the local district councils and the central government.
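Taken together, the quoted fees allow a straightforward back-of-the-envelope tally. The sketch below uses only the figures reported in this article; the length of stay and the "bag" of trophies are hypothetical, and actual quotas and charges would differ.

```python
# Illustrative cost tally for a hypothetical 14-day expedition, using only
# the fees quoted in the article. The chosen "bag" is made up for the example.

DAILY_FEE = 1_600                 # USD per hunter per day in the controlled area
KILL_FEES = {"elephant": 4_000, "lion": 2_000, "leopard": 2_000, "buffalo": 600}
BLOCK_FEE = 13_170                # upper end of the quoted per-block range

days = 14
bag = {"lion": 1, "buffalo": 2}   # hypothetical trophies taken

total = days * DAILY_FEE + BLOCK_FEE + sum(KILL_FEES[sp] * n for sp, n in bag.items())
print(f"Estimated cost of the expedition: USD {total:,}")
# 14 * 1,600 + 13,170 + 2,000 + 2 * 600 = USD 38,770
```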
Although Tawico restricts the number of animals to be culled by species, poor monitoring of the activities has meant indiscriminate killing of game.
"Some of the animals are snared and either exported alive or as meat and skins to the United Arab Emirates and other destinations," local community members told The EastAfrican during a recent trip to the area.
They claimed the hunters were provided with "blank hunting permits," giving them discretion over the number of animals to be hunted down. Kenya wildlife conservation bodies are concerned that big game hunting in the Ngorongoro area is depleting the wildlife that crosses the border from Kenya.
"Kenya is losing much of its wildlife to hunters licensed by the Tanzanian government," the chairman of the Maasai Environmental and Resource Coalition (MERC), Mr Andrew ole Nainguran, said. MERC was set up in 1999 to sensitise members of the Maasai community in Kenya and Tanzania to the benefits of wildlife conservation.
Kenya and Tanzania wildlife authorities have regularly discussed the problem of security and poaching in Arusha. However, the KWS acting director, Mr Joe Kioko, said legalised hunting has never been discussed in any of the meetings.
The hunters are said to fly directly from the UAE to the area using huge cargo and passenger planes which land on an all-weather airstrip inside the OBC camp. The planes are loaded with sophisticated hunting equipment, including four-wheel drive vehicles, weapons and communication gadgets.
On their way back, the planes carry a variety of live animals, game trophies and meat. Employees at the camp said the hunters are sometimes accompanied by young Pakistani and Filipino women.
The International Fund for Animal Welfare regional director, Mr Michael Wamithi, said Kenya and Tanzania should discuss the negative impact of sport hunting on Kenya's conservation efforts.
"The two neighbours have a Cross-Border Law Enforcement Memorandum of Understanding where such issues could be dealt with."
Kenya seems to be alone in adhering to strict protection of wildlife, a policy famously demonstrated by President Daniel arap Moi's torching of ivory worth $760,000 in 1989.
Although the country has made significant progress in securing parks from poachers, it is yet to embrace a policy on "consumptive utilisation" of animals advocated by Kenyan game ranchers and Zimbabwe, which wants the international trade ban on ivory lifted.
The animals in the hunting block have been reduced to such an extent that the OBC camp management has been spreading salt and pumping water at strategic places to attract animals from Serengeti and the outlying areas.
"We will not have any animals left in the vicinity unless the hunting is checked," a local community leader, Mr Oloomo Samantai ole Nairoti, said, arguing that the area's tourism economy was being jeopardised.
Mysterious fires in the area to the south of Serengeti have also forced animals to seek refuge in the hunting blocks.
Locals said the camp is exclusively patronised by Arab visitors. The camp is usually under tight security by Tanzanian police.
The permit granted by Mr Mwinyi has raised controversy in Tanzania and was at one stage the subject of a parliamentary probe committee because members of UAE's royal family were not entitled to the hunting rights in the country.
"Only presidents or monarchs are entitled to hunt in the area," an official said, adding that the UAE royal family had abused their permit by killing animals outside their given quotas or specified species.
The government revoked the licence in 1999 after realising that OBC was airlifting many wild animals to the Middle East, only to renew the permit in 2000. The current permit runs until 2005.
The withdrawal of the permit followed the recommendations of a 1994 parliamentary probe commission set up to "investigate the hunting behaviour" of the UAE company.
Sources said permanent hunting is prohibited in the Loliondo Game Controlled Area for fear of depleting animals from the four parks, which host the bulk of the region's tourist resorts.
The area is in a natural corridor where wild animals cross while roaming between the Ngorongoro Conservation Area and Serengeti National Park in Tanzania and Maasai Mara Game Reserve and Amboseli National Park in Kenya.
The late founding president of Tanzania, Mwalimu Julius Nyerere, took to himself the powers to issue hunting permits for Loliondo when Tanzania became independent in 1961, but he never granted any.
After obtaining the permit, the UAE hunters created hunting blocks in the area covering over 4,000 sq km.
No other hunting companies have been granted permits, the source said.
The UAE royal family has donated passenger aircraft to the Tanzania army and a number of vehicles to the Wildlife Division.
The 1974 Wildlife Act set up five categories of wildlife conservation areas.
These are national parks, game reserves, partial game reserves, open areas and the Ngorongoro Conservation Area. Hunting is prohibited in the national parks and the Ngorongoro Conservation Area, but allowed in other areas during the seasonal hunting period from July to December.
Additional reporting by Apolinari Tairo in Dar es Salaam
Tomlinson, Chris Big game hunting threatening Africa
Extract Author: Chris Tomlinson
Extract Date: 2002 03 20
Loliondo GAME CONTROL AREA, Tanzania - At a dirt airstrip in rural Tanzania, a desert camouflaged cargo plane from the United Arab Emirates air force taxis up to pallets stacked with large coolers full of game meat, the harvest of a successful hunting season.
As Tanzanian immigration and customs officials fill out documents under a thatched shelter, brand-new, four-wheel-drive trucks and dune buggies drive to and from a nearby luxury campsite, the base for one of Tanzania's most expensive - and secretive - game hunting operations, Otterlo Business Corp.
Hundreds of members of Arab royalty and high-flying businessmen spend weeks in the Loliondo Game Control Area each year hunting antelope, lion, leopard and other wild animals. The area is leased under the Otterlo name by a member of an emirate royal family who is a senior officer in the UAE defense ministry.
While neighboring Kenya outlawed big game hunting in 1978, the Tanzanian government says hunting is the best use of the land and wildlife. But villagers and herders say big money has led government officials to break all the hunting rules, resulting in the destruction of most of the area's non-migratory animals and putting East Africa's most famous national parks under threat.
Loliondo is on the main migratory route for wildlife north of Ngorongoro Crater, east of Serengeti National Park and south of Kenya's Masai Mara National Reserve. The summer hunting season coincides with the migration of wildebeest and zebra through the area, where they eventually cross into the Serengeti and the Masai Mara. Predatory animals follow the migration.
During the colonial era, Loliondo was set aside for European royalty as a hunting area. Since independence, Loliondo has remained a hunting reserve, but it is supposed to be managed by area residents for their benefit.
Local leaders, who refuse to speak publicly because they fear retribution, say they have not been consulted about the lease that was granted in 1995 by national officials in Tanzania's political capital, Dodoma. They say government officials have tried to silence criticism.
"The lease was given by the government and the Maasai landowners were not involved," said one Maasai leader. "All the resident animals have been killed ... (now) they carry out Hunting raids in the Serengeti National Park, but the government closes its eyes."
Maasai warriors told The Associated Press that hunters give cash to anyone who can lead them to big game, especially leopards. They also said that Otterlo officials have begun pumping water into some areas to attract more animals and that what the warriors call suspicious fires in the Serengeti have caused animals to move into Loliondo.
An Otterlo manager, who gave his name only as Khamis, initially agreed to an interview with AP but later did not return repeated phone calls.
In an interview with the newspaper, The East African, Otterlo managing director Juma Akida Zodikheri said his company adheres to Tanzanian law, and he denied hunters killed animals indiscriminately. He said the owner of the company is Maj. Gen. Mohammed Abdulrahim al Ali, deputy defense minister of the UAE.
While Tanzania has strict rules on game hunting, Maasai who have worked at the lodge say guests are never told of the limits and hunt as much as they want. Tanzanian officials deny that.
Col. A.G.N. Msangi, district commissioner for Ngorongoro District, said all applicable rules are enforced. He accused the Maasai of rumor-mongering in an effort to discredit Otterlo.
The company "is following the system the government wants," Msangi said. "OBC has invested more money here than any other company in the district."
Msangi said hunting companies request permission to kill a certain number of animals. Once the request is approved by wildlife experts at the Ministry of the Environment, the company pays a fee based on that number whether they actually kill the animals or not, he said.
"We have police and ministry people making sure they don't exceed what they have paid for," Msangi said. The tourists are also required to employ professional hunters to ensure no female or young animals are killed, he added.
Compared to the numbers in Serengeti National Park, very few large animals were seen during a three-hour drive through Loliondo. But without any independent survey of the animal population, it is impossible to know whether Msangi's conservation efforts are working.
Msangi described his main duty as balancing the needs of people, animals and conservation. He said not only does hunting revenue finance wildlife conservation, but Otterlo, like most tourism companies, also makes charitable donations to help pay for schools and development projects, and it provides badly needed jobs.
Also appeared in http://www.washtimes.com/world/20020801-22110374.htm
1 Aug 2002
Internet Web Pages
Extract Author: Lifer
Extract Date: April 16 2002
Posted - April 16 2002 : 20:53:22
The East African Newspaper of 4-10 February 2002 carried an article titled "Game Carnage in Tanzania Alarms Kenya", written by John Mbaria with supplement information from Apolinari Tairo of Dar es Salaam. The article was on The Ortello Business Hunting Company, which started to hunt in the Loliondo Game Controlled Area in 1992.
The following are issues raised in the article:
a) Hunting activities carried out in the Loliondo Game Controlled Area near the Tanzania/Kenya border cause losses of 80% of the Kenyan wildlife.
b) Hunting is conducted in the migratory route in the south between Kenya and Serengeti National Park. The animals are hunted during the migratory period as they move to Kenya and on their way back to Tanzania in July to December.
c) Hunting is threatening the Kenyan tourism industry, which earns the country USD 256.0 million annually.
d) The hunting kills animals haphazardly, without proper guidance and monitoring of the actual number of animals killed and exported outside the country.
e) Airplanes belonging to Ortello Business Corporation (OBC) carry unspecified types of live animals and birds from Loliondo on their way back to the UAE. Furthermore, the airplanes fly directly in and out of Loliondo without stopping at Kilimanjaro International Airport (KIA).
The following are responses to the issues raised:
2.0 Conservation of wildlife in Tanzania
Tanzania is among the top ten countries in the world rich in biodiversity. Tanzania is also leading in wildlife conservation in Africa. It has 12 National Parks, including the famous Serengeti National Park, 34 Game Reserves and 38 Game Controlled Areas. The wildlife-protected areas cover 28% of the land surface area of Tanzania. In recognition of the good conservation works, Tanzania was awarded a conservation medal in 1995 by the Safari Club International, whose headquarters is in the United States of America.
Tanzania has a number of important endangered animal species in the world. Such animal species are: Black Rhino, Wild Dog, Chimpanzee, Elephant and Crocodile (Slender-Snouted Crocodile).
In 1998, the Government of Tanzania adopted a Wildlife Policy, which gives direction on conservation and advocate sustainable use of wildlife resources for the benefit of the present and future generations.
3.0 Tourist Hunting
Regulated tourist hunting, or any other type of hunting that observes conservation ethics, does not negatively affect wild animal populations. This is because hunting ethics is based on selective hunting and not random shooting of animals. Hunting was banned in Tanzania from 1972 to 1978. The resultant effect was increased poaching and reduced government revenue from wildlife conservation. Low revenue caused low budgetary allocations to wildlife conservation activities and the lack of working gear and equipment. When tourist hunting resumed, the elephant population increased from 44,000 (in 1989) to 45,000 (in 1994). The elephant is a keystone species in the hunting industry and is a good indicator of the population status of other animal species in their habitat.
From 1989 to 1993 government revenue from the hunting industry increased from USD 2,422,500.00 to USD 7,377,430.00. The government earned a total of USD 9.3 million from tourist hunting in the year 2002. Increased revenue and keystone species such as the elephant are the results of efficient implementation of good plans and policies in conservation and sustainable use of wildlife resources.
4.0 Response to the issues raised in the article
4.1 Hunting against the law by OBC
OBC is one of the 40 hunting companies operating in Tanzania. The Company belongs to the United Arab Emirates (UAE). Different from other hunting companies, OBC does not conduct a tourist hunting business. The Kingdom of UAE has been the client hunting in the Loliondo Game Controlled Area since 1992.
In conducting hunting in the Loliondo Game Controlled Area, the Company adheres to the law and regulations governing the tourist hunting industry, namely:
4.1.1 Payment of a concession fee amounting to USD 7,500.00 per hunting block per year.
4.1.2 Requesting a hunting quota from the Director of Wildlife before issuance of a hunting permit.
4.1.3 Payment of game fees as stipulated by the Government.
4.1.4 Hunting only those animals shown in the hunting permit.
4.1.5 Contributing to the development of the hunting block, local communities' development projects and anti-poaching activities.
The following is what OBC has done so far:
· Contribution towards the development of the Ngorongoro District of USD 46,000.00
· Construction of Waso Primary and Secondary Schools, six boreholes and cattle dips, and the purchase of two buses to enhance local transportation. Furthermore, OBC contributed TSh. 30.0M to six villages in the hunting area for providing secondary school education to 21 children.
· Purchased a generator and water pump worth TSh. 11.0M for provision of water to six villages. It has also constructed all-weather roads and an airstrip within the Loliondo area.
4.1.6 Different from the rest of the hunting companies, OBC's hunting period is very short. Normally the hunting season lasts for six months, but OBC hunts for a maximum of four months. Few animals are shot from the hunting permit.
4.2 Animals hunted in migratory routes.
The Government of Tanzania has permitted hunting in the Loliondo Game Controlled Area and not in the migratory route between Masai Mara and Serengeti National Park. The Loliondo Game Controlled Area is a plain bordering the Serengeti National Park to the east.
4.3 The right for Tanzania to use wildlife in the Loliondo Game Controlled Area
The wildlife found in Tanzania is the property of the Government of Tanzania. The notion that these animals belong to Kenya is not correct. The wild animals in the Loliondo Game Controlled Area do not have dual citizenship. Since some animal species move back and forth between Tanzania and Kenya, it is better understood that these animals would be recognised as belonging to either party during the time they are in that particular country. Animals in Masai Mara, Serengeti, Loliondo and Ngorongoro belong to one ecosystem, namely the Serengeti ecosystem. However, Tanzania, being a sovereign State with her own policies, has the right by law to implement them. The same applies to Kenya, which has the right to implement its no-hunting policy based on the administration of her laws. Tanzania has therefore not done anything wrong in undertaking hunting on her territory.
4.4 Hunting is threatening Kenyan tourism
Migratory animals move into Kenya during the rainy season. After the rainy season they move back to Tanzania. Animals that are hunted in the Loliondo Game Controlled Area during this time of the year are very few. In the year 2000, only 150 animals were hunted, and in the year 2001 only 139 animals were hunted. It is therefore not true that 80% of the animals in the border area were hunted. Based on this argument, it is also not true that hunting conducted by OBC is threatening the Kenyan tourism industry. Tanzania does not allow hunting of elephants within 10 kilometres of the Tanzania/Kenya international boundary (CITES meeting held at the Secretariat Offices in Geneva in 1993). This is an example of the measures taken to control what was erroneously referred to by the East African paper as "haphazard hunting of animals of Kenya".
Furthermore, it is not true that the Wildlife Division does not know the number of animals that are killed. Control of hunting is done by the Wildlife Division, the District Council and other law enforcement agencies. OBC does not capture and export live animals since it does not possess a valid licence to do so.
4.5 OBC airplanes export an assorted number of live animals from Loliondo to the UAE
Capture and export of live animals and birds is conducted in accordance with the Wildlife Conservation Act No. 12 of 1974 and resolutions of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). The live animal trade is also conducted in accordance with the International Air Transport Association (IATA) regulations, with regard to the size of the boxes/containers allowed to transport specific animal species in order to avoid injuries or death of the same. The principle behind the live animal trade is sustainability. CITES may prohibit exportation of animals whose trade is not sustainable. On these grounds it is obvious that CITES and, therefore, its 150 members recognise that the Tanzanian live animal trade is sustainable.
Live animal traders who export animals, birds and other live specimens are obliged to adhere to the following procedure:
i) Must hold valid licence to trade on live animals.
ii) Must hold a capture permit and thereafter an ownership permit/certificate. The number of animals possessed and the number of animals listed on the ownership permit must be consistent with the number of animals that were listed in the capture permit and actually captured and certified.
iii) Must obtain an export permit for animals listed on the ownership permit/certificate.
iv) The Officer at the point of exit must certify that the animals exported are those listed on the certificate of export. The number of animals to be exported must tally with the number listed on the certificate of export.
Verification of exported animals is conducted in collaboration with the police and customs officials.
v) The plane that will carry live animals is inspected by the Dar es Salaam and Kilimanjaro Handling Companies’ Officials.
vi) For animals listed under CITES, appropriate export and import certificates are used to export the said specimens. If there is any anomaly in exporting CITES species, the importing country notifies CITES Secretariat, which in turn notifies the exporting country, and the animals in question are immediately returned to the country of export.
4.6 Other specific issues
4.6.1 Hunters are given blank permits
Companies are issued hunting quotas before they commence hunting activities. Each hunter is given a permit, which shows the animals that he/she is allowed to hunt depending on the quota issued and the type of safari. There are four types of hunting safari, as follows: 7-, 14-, 16- and 21-day safaris. Each hunting safari indicates the species and numbers of animals to be hunted. When an animal is killed or wounded, the officer in charge of overseeing hunting activities signs to certify that the respective animal has been killed. If the animal has been wounded, the animal is tracked down and killed to ensure that no other animal is killed to replace the wounded animal at large. This procedure is a measure for monitoring the animals killed by hunters.
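The monitoring procedure described above is essentially bookkeeping: a quota per permit, and an officer debiting it for every animal killed or wounded. A minimal sketch of that bookkeeping, with hypothetical species and numbers (not the Wildlife Division's actual system), might look like this:

```python
# Minimal sketch of the quota bookkeeping described in the text: each permit
# carries a species quota, and every certified kill (or wounded animal) is
# debited against it. All names and numbers are hypothetical.

class HuntingQuota:
    def __init__(self, quota):
        self.quota = dict(quota)                  # allowed kills per species
        self.recorded = {sp: 0 for sp in quota}   # certified kills so far

    def record_kill(self, species, wounded=False):
        """Certify a kill; a wounded animal counts against the quota too."""
        if species not in self.quota:
            raise ValueError(f"{species} is not on this permit")
        if self.recorded[species] >= self.quota[species]:
            raise ValueError(f"quota for {species} already exhausted")
        self.recorded[species] += 1

    def remaining(self, species):
        return self.quota[species] - self.recorded[species]

# Hypothetical 14-day permit
permit = HuntingQuota({"buffalo": 2, "lion": 1})
permit.record_kill("buffalo")
permit.record_kill("lion", wounded=True)          # wounded animal still debited
print(permit.remaining("buffalo"), permit.remaining("lion"))   # 1 0
```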
4.6.2 Good Neighbourhood Meetings between Tanzania and Kenya
There are three platforms on which Tanzania and Kenya meet to discuss conservation issues as follows:
a) The Environment and Tourism Committee of the EAC.
b) The Lusaka Agreement. In the Lusaka Agreement Meeting conservation and anti-poaching matters amongst member countries are discussed. The HQ of the Lusaka Agreement is in Nairobi.
c) Neighbourhood meetings. Experts in the contiguous conservation areas meet to discuss areas of cooperation between them, for example in joint anti-poaching operations. Based on the regulations that govern the hunting industry, animals are not threatened with extinction, since the animals that are hunted are old males, for the purpose of obtaining good trophies. Trophies are the attractions in this hunting business. It is on this basis that tourist hunting is not discussed in the said meetings, because it is not an issue for either country.
4.6.3 OBC airplanes fly directly to and from Loliondo without passing through KIA
The Tanzania Air Traffic Law requires that all airplanes land at KIA before they depart to protected areas. When the airplanes are at KIA and DIA the respective authorities conduct their duties accordingly. The same applies when airplanes fly to the UAE. They are required to land at KIA in order to go through immigration and customs checks. The allegation that the OBC airplane does not land at KIA is therefore false. Furthermore, Tanzania Air Traffic Control regulates all airplanes, including the OBC airplane, at entry points.
4.6.4 OBC sprays salt in some parts of the Loliondo Game Controlled Area in order to attract animals from Serengeti National Park.
These allegations are baseless since the Tourist Hunting Regulations (2000) prohibit the distribution of water and salt at the hunting site in order to attract animals for hunting. Besides, the game scouts who supervise hunting have never reported this episode. Furthermore, there are no reports that OBC is responsible for wildfires that gutted the south of the Serengeti National Park.
4.6.5 Cancellation of OBC block permit in 1999 since it was involved in the exportation of live animals.
This allegation is not true. The truth is that hunting blocks are allocated to hunting companies every five years. The allocation that was done in 1995 expired in 1999. The next allocation was done in the year 2000 and the companies will use the allocated blocks until 2004.
4.6.6 The UAE Royal Family contributions to the Wildlife Division
This is true. The Wildlife Division had received support from the UAE, including vehicles, transceivers and field gear, in 1996. This was part of the fulfilment of the obligation by all hunting companies to contribute towards conservation and anti-poaching activities.
Records in the Ministry of Natural Resources and Tourism show that there is no other district in Tanzania with a hunting area, other than Ngorongoro District, that receives such enormous funds from the hunting business for community development. OBC contributes up to TSh. 354,967,000.00 annually for community development in Loliondo.
The Government of Tanzania has no reason to stop the hunting activities in the Loliondo Game Controlled Area. The government sees that local communities and the Ngorongoro District Council benefit from the hunting industry.
Edited by - lifer on 04/16/2002 20:57:41
Extract Author: Yannick Ndoinyo
Extract Date: 17 august 2002
ISSN 0856-9135; No. 00233
A rejoinder to the Ministry’s press release on Loliondo and OBC
We are replying in a critical analysis to the Ministry of Natural Resources and Tourism Press Release in the East African paper of April 1-7 2002 regarding "Game Carnage in Tanzania Alarms Kenya" in the same paper (East African, February 4-10 2002). Special reference was given to hunting activities by OBC, and our analysis is based on the same Company.
As it appears in the Press Release, OBC is the property of the UAE, but in reality some top influential people in Tanzania have some shares in the company. The OBC has been in Loliondo since 1992, even though the whole local community in Loliondo has refused to accept its presence ever since, and this continues to the present day. OBC, to us, is not a normal hunting company. It seems as if the Company has the right of ownership over land and other natural resources like water and wildlife. OBC has constructed expensive and luxurious houses, an airstrip and big godowns on a water source without the local people's authority, while they depend on such water for dry-season grazing. Our surprise is that the government has always denied this fact and defended the Company. Why? The Company may be adhering to the regulations and laws governing the tourist hunting business in the books only and not practically. There are no monitoring schemes to make sure that the Company adheres to the said regulations.
It is true that OBC contributes 30 million to six villages, which is 5 million per village, and it was initially 2.5 million per village. The amount was raised two years ago. The issue here is that the amount was determined by OBC alone and is therefore paid when they feel like doing so; there are no binding mechanisms to ensure payment on a regular basis. The former OBC director was once quoted as saying, "I am paying this money as a goodwill only because the government does not wish me to do so". The amount, however, does not compensate for or match the resources extracted from the land of the six villages. The implication is that OBC has entered into agreement with the government only and not with the villages. The provision of education to 21 children, as indicated in the Press Release, is basically not true or correct.
The 30 million is the annual goodwill contribution from OBC to the six villages and not purposely meant for education only. The plan to utilize this money is upon the villages themselves.
It is also true that OBC has constructed Wasso secondary school, but not Wasso primary school. The secondary school, which was built for the six villages in which OBC operates and for the whole of Ngorongoro district, has been taken over by the government, thus limiting the opportunity for children hailing from these villages and Ngorongoro district to obtain education from the same school.
In regard to boreholes, there are only four known boreholes, and all of these are built in Wasso and Loliondo townships. There are no boreholes in any of the six villages, except that a water pump machine, which currently does not work, was purchased for Mondorosi hamlet (Kitongoji) of Soit-Sambu village. Again, there is virtually no cattle dip that OBC dug or rehabilitated in the six villages, as mentioned in the Press Release from the Ministry of Natural Resources and Tourism. The information that OBC purchased a generator and water pump worth 11 million for water provision to the six villages is false and misleading. Most villages in Loliondo have water problems, and it is impossible for one generator to sustain a single village, let alone six villages. No single village has received such a service from OBC.
In regard to transport, the two buses were either bought or just brought in as second-hand vehicles. These buses are expensive to run and spares are not easily obtained. At present they are simply grounded at the Ngorongoro district workshop/garage. There was a time the councillors debated whether or not to sell them because of difficult management. At present the people of Loliondo use an extremely old SM bus for transport.
Again, on the subject of transport, there are no all-weather roads in Loliondo that OBC constructed, as was said in the Press Release. Roads in most parts of Loliondo are murram roads and were mostly constructed by Ngorongoro district council using money from TANROADS and not OBC.
It is indeed true that OBC's hunting period is very short. There is a lot that can happen in a short period, especially if the team of hunters is composed of professional hunters. Our concern here is the interference and interruptions that OBC causes to the life systems of the people in Loliondo. The Maasai cannot resume their grazing patterns and are often forced to move by OBC. Where should we graze our cattle while our grazing land is occupied by the hunting company and protected by the gun? Of the six villages of Loliondo, five operate non-consumptive tourism that gives them more earnings; the exception is the one dominated by OBC. The villages can now send children to school and construct basic infrastructure like health centres, classrooms, teacher houses, water supply and food security, ultimately eradicating poverty. This is all done using the money from the non-consumptive tourism. The OBC constantly interrupts this system and the agreements within the villages, saying that the villages have no right to operate such tourism on 'his land'. Is it his or our land? The other major problem besides the Arabs is the constant reprimand from the government, as it discourages this kind of tourism business that benefits the local people in the villages more. We favour this kind of tourism because it does not disturb our normal pattern of life. At the same time it does not kill wild animals; the visitors just camp and go on to the Serengeti Park. The allegations that OBC airplanes fly directly from Loliondo to the UAE without passing KIA, and that it exports live animals, have existed, and many people have spoken and written about them. However, we cannot confirm anything without much scrutiny. We do not know much now.
The Press Release also referred to the spraying of salt to attract animals. The distribution of water at a certain site to attract animals was practised some time ago. We are sure of this as it happened some years ago. What we are not certain of is whether the practice continues to the present day.
In the Press Release, the records in the Ministry show that OBC pays 354,967,000/= annually for community development in Loliondo. We have some reservations in regard to such records. First of all, they are just records, and anything can be written. Secondly, how is it that the Ministry has such records while we in Loliondo, the base of OBC's operation, do not?
Thirdly, where is the provision in the agreement that forces OBC to annually pay to the district such amount of money? It may be that the amount is used to be paid annually but to individuals only and not to the district as it said.
In its conclusion the Ministry sees no reason to stop Hunting activities in Loliondo, simply because the local community and Ngorongoro district council supposedly benefit from the Hunting business. We strongly feel that there is every reason to stop the Hunting activities in Loliondo. First, to the present day the local community has never consented to the granting of its land to the Hunting company.
Secondly, the local community and Ngorongoro district council do not benefit from this Hunting business in Loliondo in the way they should.
Thirdly, the presence of OBC has interrupted and interfered with our life systems, including grazing, culture and the local community's alternative means of business.
In conclusion, we feel that even though the government operates under laws made in Dar es Salaam and Dodoma without the involvement of the local people, it is very important to respect the local people.
We firmly conclude that the Press Release was either written by someone in the Ministry who has never been to Loliondo, or the story was simply made up. We suggest that the villagers or the OBC people themselves be contacted for more definite facts. Please feel free to contact us for any queries you might have regarding this article.
Tel: 0744 390 626
Extract Author: Arusha times Reporters
Extract Date: Aug 17 2002
ISSN 0856-9135; No. 00233
The Hunting plot controversy raging within the Longido Game Controlled Area in Monduli district two weeks ago threatened the life of the American ambassador to Tanzania, Robert Royall, who was Hunting in the block.
Riding in a Toyota Land Cruiser Station Wagon with registration number TZP 9016, owned by Bush Buck Safaris Limited, Ambassador Royall found himself confronted by 16 armed men.
The incident took place on Saturday the 27th of July this year, at about 13.00 hours, in the Hunting block which is under the authority of Northern Hunting Enterprises Limited.
It is reported that, while Royall and his family were driving in the area, another vehicle, a Land Cruiser with registration number TZP 3867, drove toward them and blocked their way.
Sixteen men armed with traditional weapons, including spears, machetes, double-edged swords (simis) and clubs, jumped out, ready to attack.
However, the ambassador, his team and driver Carlous Chalamila happened to be fully equipped and likewise drew their weapons.
Seeing modern weapons, the mob took fright and fled. But the ambassador's driver, Chalamila, followed them to find out what they wanted. Contact was subsequently made with the wildlife department of the Ministry of Tourism and Natural Resources. Wildlife officers were dispatched to the scene from Arusha, but by the time they arrived the attackers had already gone.
The Regional Police Commander for Arusha, James Kombe, admitted that the incident did take place but declined to comment on the issue. However, five people have already been arrested in connection with the incident: Omar Mussa, David Bernard, Salimba Lekasaine, Kiruriti Ndaga and the only woman in the group, Nuria Panito Kennedy.
This week, the Arusha Times learned that the five suspects are out on bail.
Speaking by phone from Monduli, the Monduli District Commissioner (DC), Anthony Malle, said there was indeed controversy over the Hunting block of the Longido Game Controlled Area (LGCA), in which two Hunting companies, Kibo Safaris and Northern Hunting Enterprises (T) Limited, were at loggerheads.
Captain Malle added that even the residents of Singa village in the area have been divided into two groups, each supporting one of the companies.
The District Commissioner, however, pointed out that only the Ministry will decide which of the two parties has the right to the 1,500 square kilometre Hunting block.
DC Malle also said that he and other district officials have already held various meetings to address the issue, and have together signed an official letter to the Principal Secretary of the Ministry of Tourism and Natural Resources asking that office to settle the matter once and for all.
Efforts to contact both Kibo Safaris and Northern Hunting Enterprises were in vain.
The East African
Extract Author: John Mbaria
Extract Date: December 2, 2002
The East African (Nairobi) Posted to the web December 4, 2002
.. .. ..
Unlike in Kenya, the law in Tanzania promotes commercial wildlife utilisation activities such as safari Hunting and actually prohibits photographic tourism in areas declared as Hunting zones.
Under the WCA of 1974, the wildlife division can only regulate the capture, Hunting and commercial photography of wildlife.
The report adds that the director of wildlife can issue Hunting licences on village land, but he "does not have the power to give a hunter or Hunting company authority to hunt on village land without the permission of the village government."
On their part, the licensed persons are expected to seek the permission of the village government before engaging in any Hunting. However, reports indicate that the practice of safari Hunting has so far ignored this law. The report says that most Hunting companies put up facilities on village lands without the permission of the village government and the respective village assemblies.
The report gives the example of the Loliondo GCA, in Loliondo division of Ngorongoro district, where a Hunting company associated with a United Arab Emirates minister, "has built an airstrip and several large houses without the permission of the relevant village governments."
"Such actions are contrary to the VLA which, under section 17, requires any non-village organisation that intends to use any portion of the village land to apply for that land to the village council, which will then forward that application and its recommendation for approval or rejection to the Commissioner for Land."
In January, The EastAfrican published an exclusive story on the manner in which the Hunting company conducts Hunting activities in Loliondo.
.. .. ..
Extract Author: Indigenous Rights for Survival International
Page Number: b
Extract Date: 2/1/03
Indigenous Rights for Survival International
P.O. Box 13357
Dar Es Salaam.
Alternative E-mail: [email protected]
The United Republic of Tanzania
P.O. Box 9120
Dar Es Salaam.
If it pleases the Honourable President Benjamin Mkapa
Re: Stop the killing fields of Loliondo
I am a Tanzanian citizen and a strong believer in social justice. In that spirit I serve as the Co-coordinator of an informal group called Indigenous Rights for Survival International (IRSI). IRSI is a loose network of young people with an interest in public policy issues in Africa. We mainly discuss policy issues through email communications and ultimately write articles in the press. IRSI as an entity takes no position on any of the issues discussed; it simply stimulates, steers, and co-ordinates discussions and debates on public policy issues of interest to its members.
Mr. President, I have all along believed that you can stop the crime against humanity being inflicted upon the people of Loliondo, Ngorongoro District of Arusha Region, by no less an authority than the Government of Tanzania.
Mr. President, Loliondo Division is located in Maasai ancestral lands in the northern part of Tanzania along the common border with Kenya. It borders the Ngorongoro highlands to the south, Serengeti National Park to the west, and the Maasai Mara Game Reserve in Kenya to the north. The Loliondo Game Controlled Area (LCGA) encompasses an estimated 4,000 sq km. There is no physical barrier separating the LGCA from other protected areas. It is a continuous ecosystem. LGCA was initially established in 1959 as a Game Reserve by the British colonialists under the then Fauna Conservation Ordinance, Section 302, a legal instrument the colonial authorities used to set aside portions of land for wildlife conservation. The legal status of the reserve was later changed to that of a Game Controlled Area to allow for commercial Hunting, a status that defines LGCA today and haunts its wildlife.
Mr. President, Loliondo forms an important part of the semi-annual migratory route of millions of wildebeests and other ungulates northward into the Maasai Mara Game Reserve and Amboseli National Park in Kenya between April and June, and returning southward later in the year. The survival of the Ngorongoro-Serengeti-Maasai Mara ecosystem and the wildlife it supports is linked to the existence of Loliondo and other surrounding communal Maasai lands in Tanzania and Kenya. Similarly, the survival of the Maasai people is dependent entirely upon the protection of their ancestral land for economic viability and cultural reproduction. Land to the Maasai is the foundation for their spirituality and the base for identity.
Mr. President, the people of Ngorongoro District in general, and Loliondo Division in particular, have long suffered well-documented injuries such as the irrational grabbing of their ancestral land for “development”, tourism (consumptive and non-consumptive) and cultivation. While the people of Loliondo have lost much of their ancestral land to cultivation, the Government is evidently supporting private investors in pushing the Maasai pastoralists of Loliondo into an ever more awkward corner.
In 1992, the administration of the former president Ali Hassan Mwinyi granted the entire Loliondo Game Controlled Area (LGCA) as a Hunting concession to the Otterlo Business Corporation Ltd (OBC), a game-Hunting firm based in the United Arab Emirates (UAE). Under the controversial agreement, the Government issued a 10-year Hunting permit to Brigadier Mohammed Abdulrahim Al-Ali of Abu Dhabi, believed to be a member of the UAE royal family, who owns OBC. The grabbed land is the birthright of thousands of villagers of Arash, Soitsambu, Oloipiri, Ololosokwan, Loosoito and Oloirien villages of Loliondo.
Mr. President, a Parliamentary Committee was formed to probe the Loliondo Gate saga. It revoked the dirty agreement. Strangely, a similar agreement was established.
In January 2000, OBC was granted another 5-year Hunting permit in the said area, as usual without the villagers' consent. OBC constructed an airstrip, and the villagers have witnessed live animals being exported through it. OBC also constructed structures near water sources. Hearing of the new permit, the Maasai sent a 13-man protest delegation to Dar Es Salaam in April 2000. The intention was to sort out the matter with you, Mr. President. Unfortunately, they did not see you.
However, the delegation managed to hold a press conference at MAELEZO, the National Information Corporation Centre. The Maasai contemplated a number of actions against both your Government and the Arab in connection with the plunder of the resources. They said that before any mass exodus of the Maasai to Kenya, the first step would be to eliminate the wild animals. Thereafter, the delegation retreated to Loliondo, as gravely frustrated as before.
The general election was scheduled for 2000, so the saga had to be explained away. The official statement was that power-hungry opposition politicians were pushing the elders and that all the claims by the Maasai were “unfounded” and “baseless.” To its credit, The Guardian went to Loliondo. It reported the following:
Maasai elders in Loliondo, Arusha Region, who recently declared a land dispute against OBC Ltd, a foreign game-Hunting firm, have accused some top Government officials of corrupt practices, saying the conflict is not political. The Arusha Regional Commissioner, Daniel ole Njoolay, recently described the simmering land dispute between the Maasai pastoralists and OBC, as a political issue.
Francis Shomet [the former Chairman for Ngorongoro District Council] claimed that Njoolay had misled Tanzanians to believe that the allegations recently raised by Maasai elders were unfounded and baseless. Fidelis Kashe, Ngorongoro District Council Chairman maintained, “We cannot stand idle to see our land being taken away by Arabs. We will kill all the animals in the area as these are the ones attracting the Arabs into our land” (The Guardian May 30, 2000).
The next morning Government officials were reported to have said the following:
The Minister for Natural Resources and Tourism, Zakia Meghji, yesterday assured Ngorongoro residents that no land has been sold or grabbed by Arabs in Loliondo. Flanked by the Arusha Regional Commissioner, Daniel ole Njoolay and the Director of Wildlife, Emanuel Severre, Meghji commented, “There is no clause on the sale of land in the contract signed between OBC and the six villages of Ololosokwan, Arash, Maaloni, Oloirien, Oloipiri and Soitsambu.”
However, an inquiry conducted by The Guardian in Loliondo last week established that the Maasai elders were not involved in the re-lease of the Hunting block to the company. According to Meghji, her probe established that the building had been constructed about 400 metres from the water source, 200 metres more than the distance recommended by law. But The Guardian investigation shows that the structures are less than 50 metres from a spring. And another spring has dried up (The Guardian May 31, 2000).
Mr. President, underline two points. First, the Minister said the building had been constructed 400 metres from the water source. Second, “The Guardian investigation shows that the structures are less than 50 metres from a spring.” Now, unless one's mathematics teacher at school was daft, there is a huge difference between 50 and 400! When did 50 metres come to mean 400 metres? Can it be claimed that the Maasai were party to this so-called agreement? I am at a loss as to why this well-known Minister has not been made to face the full force of the law.
In the proposal, Brigadier Al Ali outlined the benefits of his operations in Loliondo to the Government, local communities, and wildlife conservation in the Serengeti-Maasai Mara-Ngorongoro ecosystem. Among its important objectives were:
• To conserve an area contiguous to the Serengeti National Park, which is essential to the long-term survival of the ecosystem and its migration.
• To develop a new role and image for the Arab world as regards wildlife conservation, management, and human development.
• To improve locals’ revenue, development facilities, and create employment.
• To generate revenues for the Central and District Governments.
The OBC now stands accused of self-contradiction and violation of legal and moral obligations in virtually all the above areas, resulting instead in environmental destruction; unfulfilled promises and exploitation of the local communities; and direct undermining of the stability of the region’s wildlife and natural habitats.
It has become evident that OBC had a long-term agenda for exploiting the high concentration of wildlife in Loliondo. Its Hunting operations are guaranteed by the continuous flow of wildlife from the Serengeti, Ngorongoro, Maasai Mara, and other areas. According to the International Union for Conservation of Nature, OBC "was taking advantage of migratory patterns of wildlife coming out of Serengeti."
Mr. President, be informed that the villages in and adjacent to protected areas in Tanzania have no Government-supported infrastructure. Take Ngorongoro District, for instance. There is no Government hospital in Ngorongoro. It may take a week to travel from Arusha to Loliondo, a distance of just under 400 km, depending on the weather, for there is no proper road. There is not even a single Government advanced-level secondary school in six (repeat, six) Districts of the Greater Serengeti Region. This situation calls into question the legitimacy of wildlife conservation vis-à-vis the right of rural people to lead a decent life, given the natural endowment of their localities.
Mr. President, the Maasai of Loliondo have for a long time accused OBC of grave human rights abuses. They have described acts of intimidation, harassment, arbitrary arrest and detention, and even torture by OBC staff and by Tanzanian police and military acting in the name of OBC; brazen violations of grazing and land rights; and wanton environmental destruction and the imminent extermination of wildlife. They have seen leaders who once opposed OBC's practices corrupted and bought off.
OBC operates like a separate arm of the Government. Many people in Loliondo believe that OBC is even more powerful than the Government. The Maa word for "the Arab", Olarrabui, is often used to refer to Brigadier Al Ali, and by extension to OBC. The word Olarrabui has become synonymous with power, authority, brutality, fear, and entities larger than life.
Mr. President, one does not need to be a rocket scientist to comprehend that this is the clearest case of abuse of office. It is suggested, for those willing to avert disaster, the Tanzanian Government included, that immediate steps be taken to put an end to the violation of fundamental human rights in Ngorongoro. As for the lands lost in Loliondo, the Government is advised to return them to their owners. Land should not be grabbed senselessly. The Government should at once look into the whole matter again.
Navaya ole Ndaskoi.
- The International Court of Justice
- The United Nations High Commission for Human Rights
- The United Nations Working Group on Indigenous Populations
- Human Rights Groups around the World
- Faculty of Law of the University of Dar Es Salaam
- Local and International Conservation Agencies
- Ministry of Tourism and Natural Resources
- The Attorney General
- The Chief Justice
- The Speaker of the United Republic of Tanzania Parliament
- The Press, print and electronic
- Political parties in Tanzania
- Tanganyika Law Society
- Other interested parties.
Navaya ole Ndaskoi
see also Extract 3734
The Maasai protest delegation holding a press conference in Dar Es Salaam in 2000
|
<urn:uuid:5d77d1f0-76c4-490b-82ef-19a18180c341>
|
CC-MAIN-2013-20
|
http://www.ntz.info/gen/n01199.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.954722
| 13,012
| 2.609375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"food security"
],
"nature": [
"biodiversity",
"conservation",
"ecosystem",
"endangered species",
"habitat"
]
}
|
{
"strong": 5,
"weak": 1,
"total": 6,
"decision": "accepted_strong"
}
|
SAVINGS ACTION 59: Lower the temperature on your water heater
In a typical household, the water heater thermostat is set to around 140 degrees Fahrenheit. But did you know that setting it to 120 is usually fine? Each 10-degree reduction saves 3 to 5 percent on your energy costs and 600 pounds of CO2 per year for an electric water heater, or 440 pounds for a gas heater. Reducing your water temperature to 120 also slows mineral buildup and corrosion in your water heater and pipes. This helps your water heater last longer and operate at its maximum efficiency. Here's another interesting tidbit from the folks at Power ScoreCard: "If every household turned its water heater thermostat down 20 degrees, we could prevent more than 45 million tons of annual CO2 emissions — the same amount emitted by the entire nations of Kuwait or Libya."
Savings: 3-5 percent on your monthly energy costs
Environmental Impact: 440-600 pounds of CO2 emissions reduced annually
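As a rough illustration of the arithmetic in this tip, here is a minimal Python sketch of the savings estimate. It assumes the per-10-degree figures quoted above (a 3 to 5 percent cut in water-heating costs, and 600 or 440 pounds of CO2 per year for electric or gas heaters respectively); the $500 annual water-heating bill in the example is an invented figure, not one from the article.

```python
# Rough, linear estimate of annual savings from lowering a water heater thermostat,
# using the per-10-degree figures quoted in the tip above. The 4% value is the
# midpoint of the quoted 3-5% range; the annual bill is an assumed example figure.

def water_heater_savings(current_temp_f, new_temp_f, annual_heating_cost,
                         heater_type="electric"):
    """Return (dollars saved, pounds of CO2 avoided) per year."""
    if new_temp_f >= current_temp_f:
        return 0.0, 0.0
    steps = (current_temp_f - new_temp_f) / 10.0               # number of 10-degree reductions
    pct_per_step = 0.04                                        # midpoint of the 3-5% range
    co2_per_step = 600 if heater_type == "electric" else 440   # pounds of CO2 per year
    dollars = annual_heating_cost * pct_per_step * steps
    co2_lbs = co2_per_step * steps
    return dollars, co2_lbs

# Example: an assumed $500/year water-heating bill and a single 10-degree reduction.
for heater in ("electric", "gas"):
    dollars, co2 = water_heater_savings(130, 120, 500, heater)
    print(f"{heater}: ~${dollars:.0f} saved and ~{co2:.0f} lb CO2 avoided per year")
```

For a 10-degree reduction this reproduces the 440-600 pound range quoted above; larger reductions scale the estimate linearly.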
TAKE THE CHALLENGE and LEARN MORE about SAVING ENERGY! Visit: www.SouthCoastEnergyChallenge.org
|
<urn:uuid:16c2826a-a9f5-48bb-a8bf-780688d4bd23>
|
CC-MAIN-2013-20
|
http://www.southcoasttoday.com/apps/pbcs.dll/article?AID=/20130107/NEWS/301070325/-1/ARCHIVE
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.909796
| 232
| 3.015625
| 3
|
[
"climate"
] |
{
"climate": [
"co2"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Publisher Council on Foreign Relations
Release Date Last Updated: May 21, 2013
Scope of the Challenge
Oceans are the source of life on earth. They shape the climate, feed the world, and cleanse the air we breathe. They are vital to our economic well-being, ferrying roughly 90 percent of global commerce, housing submarine cables, and providing one-third of traditional hydrocarbon resources (as well as new forms of energy such as wave, wind, and tidal power). But the oceans are increasingly threatened by a dizzying array of dangers, from piracy to climate change. To be good stewards of the oceans, nations around the world need to embrace more effective multilateral governance in the economic, security, and environmental realms.
The world's seas have always been farmed from top to bottom. New technologies, however, are making old practices unsustainable. When commercial trawlers scrape the sea floor, they bulldoze entire ecosystems. Commercial ships keep to the surface but produce carbon-based emissions. And recent developments like offshore drilling and deep seabed mining are helping humans extract resources from unprecedented depths, albeit with questionable environmental impact. And as new transit routes open in the melting Arctic, this once-forgotten pole is emerging as a promising frontier for entrepreneurial businesses and governments.
But oceans are more than just sources of profit—they also serve as settings for transnational crime. Piracy, drug smuggling, and illegal immigration all occur in waters around the world. Even the most sophisticated ports struggle to screen cargo, containers, and crews without creating regulatory friction or choking legitimate commerce. In recent history, the United States has policed the global commons, but growing Indian and Chinese blue-water navies raise new questions about how an established security guarantor should accommodate rising—and increasingly assertive—naval powers.
And the oceans themselves are in danger of environmental catastrophe. They have become the world's garbage dump—if you travel to the heart of the Pacific Ocean, you'll find the North Pacific Gyre, where particles of plastic outweigh plankton six to one. Eighty percent of the world's fish stocks are depleted or on the verge of extinction, and when carbon dioxide is released into the atmosphere, much of it is absorbed by the world's oceans. The water, in response, warms and acidifies, destroying habitats like wetlands and coral reefs. Glacial melting in the polar regions raises global sea levels, which threatens not only marine ecosystems but also humans who live on or near a coast. Meanwhile, port-based megacities dump pollution in the ocean, exacerbating the degradation of the marine environment and the effects of climate change.
Threats to the ocean are inherently transnational, touching the shores of every part of the world. So far, the most comprehensive attempt to govern international waters produced the United Nations Convention on the Law of the Sea (UNCLOS). But U.S. refusal to join the convention, despite widespread bipartisan support, continues to limit its strength, creating a leadership vacuum in the maritime regime. Other states that have joined the treaty often ignore its guidelines or fail to coordinate policies across sovereign jurisdictions. Even if it were perfectly implemented, UNCLOS is now thirty years old and increasingly outdated.
Important initiatives—such as local fishery arrangements and the United Nations Environment Program Regional Seas Program—form a disjointed landscape that lacks legally-binding instruments to legitimize or enforce their work. The recent UN Conference on Sustainable Development ("Rio+20") in Rio de Janeiro, Brazil, convened over one hundred heads of state to assess progress and outline goals for a more sustainable "blue-green economy." However, the opportunity to set actionable targets to improve oceans security and biodiversity produced few concrete outcomes. As threats to the oceans become more pressing, nations around the world need to rally to create and implement an updated form of oceans governance.
Oceans Governance: Strengths and Weaknesses
Overall assessment: A fragmented system
In 1982, the United Nations Convention on the Law of the Sea (UNCLOS) established the fundamental legal principles for ocean governance. This convention, arguably the largest and most complex treaty ever negotiated, entered into force in 1994. Enshrined as a widely accepted corpus of international common law, UNCLOS clearly enumerates the rights, responsibilities, and jurisdictions of states in their use and management of the world's oceans. The treaty defines "exclusive economic zones" (EEZs), which is the coastal water and seabed—extending two hundred nautical miles from shore—over which a state has special rights over the use of marine resources; establishes the limits of a country's "territorial sea," or the sovereign territory of a state that extends twelve nautical miles from shore; and clarifies rules for transit through "international straits." It also addresses—with varying degrees of effectiveness—resource division, maritime traffic, and pollution regulation, as well as serves as the principal forum for dispute resolution on ocean-related issues. To date, 162 countries and the European Union have ratified UNCLOS.
UNCLOS is a remarkable achievement, but its resulting oceans governance regime suffers several serious limitations. First, the world's leading naval power, the United States, is not party to the convention, which presents obvious challenges to its effectiveness—as well as undermines U.S. sovereignty, national interests, and ability to exercise leadership over resource management and dispute resolution. Despite the myriad military, economic, and political benefits offered by UNCLOS, a small but vocal minority in the United States continues to block congressional ratification.
Second, UNCLOS is now thirty years old and, as a result, does not adequately address a number of emerging and increasingly important international issues, such as fishing on the high seas—a classic case of the tragedy of the commons—widespread maritime pollution, and transnational crime committed at sea.
Third, both UNCLOS and subsequent multilateral measures have weak surveillance, capacity-building, and enforcement mechanisms. Although various UN bodies support the instruments created by UNCLOS, they have no direct role in their implementation. Individual states are responsible for ensuring that the convention's rules are enforced, which presents obvious challenges in areas of overlapping or contested sovereignty, or effectively stateless parts of the world. The UN General Assembly plays a role in advancing the oceans agenda at the international level, but its recommendations are weak and further constrained by its lack of enforcement capability.
Organizations that operate in conjunction with UNCLOS—such as the International Maritime Organization (IMO), the International Tribunal on the Law of the Sea (ITLOS), and the International Seabed Authority (ISA)—play an important role to protect the oceans and strengthen oceans governance. The IMO has helped reduce ship pollution to historically low levels, although it can be slow to enact new policy on issues such as invasive species, which are dispersed around the world in ballast water. Furthermore, ITLOS only functions if member states are willing to submit their differences to its judgment, while the ISA labors in relative obscurity and operates under intense pressure from massive commercial entities.
Fourth, coastal states struggle to craft domestic policies that incorporate the many interconnected challenges faced by oceans, from transnational drug smuggling to protecting ravaged fish stocks to establishing proper regulatory measures for offshore oil and gas drilling. UNCLOS forms a solid platform on which to build additional policy architecture, but requires coastal states to first make comprehensive oceans strategy a priority—a goal that has remained elusive thus far.
Fifth, the system is horizontally fragmented and fails to harmonize domestic, regional, and international policies. Domestically, local, state, and federal maritime actors rarely coordinate their agendas and priorities. Among the handful of countries and regional organizations that have comprehensive ocean policies—including Australia, Canada, New Zealand, Japan, the European Union, and most recently the United States—few synchronize their activities with other countries. The international community, however, is attempting to organize the cluttered oceans governance landscape. The UN Environmental Programme Regional Seas Program works to promote interstate cooperation for marine and coastal management, albeit with varying degrees of success and formal codification. Likewise, in 2007 the European Union instituted a regional Integrated Maritime Policy (IMP) that addresses a range of environmental, social, and economic issues related to oceans, as well as promotes surveillance and information sharing. The IMP also works with neighboring partners to create an integrated oceans policy in places such as the Arctic, the Baltic, and the Mediterranean.
Lastly, there is no global evaluation framework to assess progress. No single institution is charged with monitoring and collecting national, regional, and global data on the full range of oceans-related issues, particularly on cross-cutting efforts. Periodic data collecting does take place in specific sectors, such as biodiversity conservation, fisheries issues, and marine pollution, but critical gaps remain. The Global Ocean Observing System is a promising portal for tracking marine and ocean developments, but it is significantly underfunded. Without concrete and reliable data, it is difficult to craft effective policies that address and mitigate emerging threats.
Despite efforts, oceans continue to deteriorate and a global leadership vacuum persists. Much work remains to modernize existing institutions and conventions to respond effectively to emerging threats, as well as to coordinate national actions within and across regions. The June 2012 United Nations Conference on Sustainable Development, also known as Rio+20, identified oceans (or the "blue economy") as one of the seven priority areas for sustainable development. Although experts and activists hoped for a new agreement to strengthen the sustainable management and protection of oceans and address modern maritime challenges such as conflicting sovereignty claims, international trade, and access to resources, Rio+20 produced few concrete results.
Maintaining freedom of the seas: Guaranteed by U.S. power, increasingly contested by emerging states
The United States polices every ocean throughout the world. The U.S. navy is unmatched in its ability to provide strategic stability on, under, and above the world's waters. With almost three hundred active naval ships and almost four thousand aircraft, its battle fleet tonnage is greater than the next thirteen largest navies combined. Despite recently proposed budget cuts to aircraft carriers, U.S. naval power continues to reign supreme.
The United States leverages its naval capabilities to ensure peace, stability, and freedom of access. As Great Britain ensured a Pax Britannica in the nineteenth century, the United States presides over relatively tranquil seas where global commerce is allowed to thrive. In 2007, the U.S. Navy released a strategy report that called for "cooperative relationships with more international partners" to promote "greater collective security, stability, and trust."
The United States pursues this strategy because it has not faced a credible competitor since the end of the Cold War. And, thus far, emerging powers have largely supported the U.S. armada to ensure that the oceans remain open to commerce. However, emerging powers with blue-water aspirations raise questions about how U.S. naval hegemony will accommodate new and assertive fleets in the coming decades. China, for instance, has been steadily building up its naval capabilities over the past decade as part of its "far sea defense" strategy. It unveiled its first aircraft carrier in 2010, and is investing heavily in submarines outfitted with ballistic missiles. At the same time, India has scaled up its military budget by 64 percent since 2001, and plans to spend nearly $45 billion over the next twenty years on its navy.
Even tensions among rising powers could prove problematic. For example, a two-month standoff between China and the Philippines over a disputed region of the South China Sea ended with both parties committing to a "peaceful resolution." China, Taiwan, Vietnam, Malaysia, Brunei, and the Philippines have competing territorial and jurisdictional claims to the South China Sea, particularly over rights to exploit its potentially vast oil and gas reserves. Control over strategic shipping lanes and freedom of navigation are also increasingly contested, especially between the United States and China.
Combating illicit trafficking: Porous, patchy enforcement
In addition to being a highway for legal commerce, oceans facilitate the trafficking of drugs, weapons, and humans, which are often masked by the flow of licit goods. Individual states are responsible for guarding their own coastlines, but often lack the will or capacity to do so. Developing countries, in particular, struggle to coordinate across jurisdictions and interdict. But developed states also face border security challenges. Despite its commitment to interdiction, the United States seizes less than 20 percent of the drugs that enter the country by maritime transport.
The United Nations attempts to combat the trafficking of drugs, weapons, and humans at sea. Through the Container Control Program (PDF), the UN Office on Drugs and Crime (UNODC) assists domestic law enforcement in five developing countries to establish effective container controls to prevent maritime drug smuggling. The UNODC also oversees UN activity on human trafficking, guided by two protocols to the UN Convention on Transnational Organized Crime. Although UN activity provides important groundwork for preventing illicit maritime trafficking, it lacks monitoring and enforcement mechanisms and thus has a limited impact on the flow of illegal cargo into international ports. Greater political will, state capacity, and multilateral coordination will be required to curb illicit maritime trafficking.
New ad hoc multilateral arrangements are a promising model for antitrafficking initiatives. The International Ship and Port Facility Security Code, for instance, provides a uniform set of measures to enhance the security of ships and ports. The code helps member states control their ports and monitor both the people and cargo that travel through them. In addition, the U.S.-led Proliferation Security Initiative facilitates international cooperation to interdict ships on the high seas that may be carrying illicit weapons of mass destruction, ballistic missiles, and related technology. Finally, the Container Security Initiative (CSI), also spearheaded by the United States, attempts to prescreen all containers destined for U.S. ports and identify high-risk cargo (for more information, see section on commercial shipping).
One way to combat illicit trafficking is through enhanced regional arrangements, such as the Paris Memorandum of Understanding on Port State Control. This agreement provides a model for an effective regional inspections regime, examining at least 25 percent of ships that enter members' ports for violations of conventions on maritime safety. Vessels that violate conventions can be detained and repeat offenders can be banned from the memorandum's area. Although the agreement does not permit searching for illegal cargo, it does show how a regional inspections regime could be effective at stemming illegal trafficking.
Securing commercial shipping: Global supply chains at risk
Global shipping is incredibly lucrative, but its sheer scope and breadth presents an array of security and safety challenges. The collective fleet consists of approximately 50,000 ships registered in more than 150 nations. With more than one million employees, this armada transports over eight billion tons (PDF) of goods per year—roughly 90 percent of global trade. And the melting Arctic is opening previously impassable trade routes; in 2009, two German merchant vessels traversed the Northeast Passage successfully for the first time in recent history. But despite impressive innovations in the shipping industry, maritime accidents and attacks on ships still occur frequently, resulting in the loss of billions of dollars of cargo. Ensuring the safety and security of the global shipping fleet is essential to the stability of the world economy.
Internationally, the International Maritime Organization (IMO) provides security guidelines for ships through the Convention on the Safety of Life at Sea, which governs everything from construction to the number of fire extinguishers on board. The IMO also aims to prevent maritime accidents through international standards for navigation and navigation equipment, including satellite communications and locating devices. Although compliance with these conventions has been uneven, regional initiatives such as the Paris Memorandum of Understanding have helped ensure the safety of international shipping.
In addition, numerous IMO conventions govern the safety of container shipping, including the International Convention on Safe Containers, which creates uniform regulations for shipping containers, and the International Convention on Load Lines, which determines the volume of containers a ship can safely hold. However, these conventions do not provide comprehensive security solutions for maritime containers, and illegal cargo could be slipped into shipping containers during transit. Since 1992, the IMO has tried to prevent attacks on commercial shipping through the Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation, which provides a legal framework for interdicting, detaining, and prosecuting terrorists, pirates, and other criminals on the high seas.
In reality, most enforcement efforts since the 9/11 attacks have focused on securing ports to prevent the use of a ship to attack, rather than to prevent attacks on the ships themselves. Reflecting this imperative, the IMO, with U.S. leadership, implemented the International Ship and Port Facility Security Code (ISPS) in 2004. This code helped set international standards for ship security, requiring ships to have security plans and officers. However, as with port security, the code is not obligatory and no clear process to audit or certify ISPS compliance has been established. Overall, a comprehensive regime for overseeing the safety of international shipping has not been created.
The United States attempts to address this vulnerability through the Container Security Initiative (CSI), which aims to prescreen all containers destined for the United States, and to isolate those that pose a high-security risk before they are in transit. The initiative, which operates in fifty-eight foreign ports, covers more than 86 percent of container cargo en route to the United States. Several international partners and organizations, including the European Union, the Group of Eight, and the World Customs Organization, have expressed interest in modeling security measures for containerized cargo based on the CSI model. Despite these efforts, experts estimate that only 2 percent of containers destined for U.S. ports are actually inspected.
Confronting piracy: Resurgent scourge, collective response
After the number of attacks reached a record high in 2011, incidents of piracy dropped 28 percent in the first three months of 2012. Overall, the number of worldwide attacks decreased from 142 to 102 cases, primarily due to international mobilization and enhanced naval patrols off the coast of Somalia. However, attacks intensified near Nigeria and Indonesia as pirates shifted routes in response to increased policing, raising fresh concerns over the shifting and expanding threat of piracy. In addition to the human toll, piracy has significant economic ramifications. According to a report by the nonprofit organization Oceans Beyond Piracy, Somali piracy cost the global economy nearly $7 billion in 2011. Sustained international coordination and cooperation are essential to preventing and prosecuting piracy.
Recognizing this imperative, countries from around the world have shown unprecedented cooperation to combat piracy, particularly near the Gulf of Aden. In August 2009, the North Atlantic Treaty Organization commenced Operation Ocean Shield in the Horn of Africa, where piracy increased close to 200 percent between 2007 and 2009. This effort built upon Operation Allied Protector and consisted of two standing maritime groups with contributions from allied nations. Although the efforts concentrate on protecting ships passing through the Gulf of Aden, they also renewed focus on helping countries, specifically Somalia, prevent piracy and secure their ports. Meanwhile, the United States helped establish Combined Task Force 151 to coordinate the various maritime patrols in East Africa. Other countries, including Russia, India, China, Saudi Arabia, Malaysia, and South Korea, have also sent naval vessels to the region.
At the same time, regional organizations have also stepped up antipiracy efforts. The Regional Cooperation Agreement on Combating Piracy and Armed Robbery against Ships in Asia was the first such initiative, and has been largely successful in facilitating information-sharing, cooperation between governments, and interdiction efforts. And in May 2012, the European Union naval force launched its first air attack against Somali pirates' land bases, the first strike of its kind by outside actors to date.
Like individual countries, international institutions have condemned piracy and legitimized the use of force against pirates. In June 2008, the UN Security Council unanimously passed Resolution 1816, encouraging greater cooperation in deterring piracy and asking countries to provide assistance to Somalia to help ensure coastal security. This was followed by Resolution 1846, which allowed states to use "all necessary means" to fight piracy off the coast of Somalia. In Resolution 1851, the UN Security Council legitimized the use of force on land as well as at sea to the same end. Outside the UN, watchdogs such as the International Maritime Bureau, which collects information on pirate attacks and provides reports on the safety of shipping routes, have proven successful in increasing awareness, disseminating information, and facilitating antipiracy cooperation.
However, such cooperative efforts face several legal challenges. The United States has not ratified the UN Convention on the Law of the Sea (UNCLOS), which governs crimes, including piracy, in international waters. More broadly, the international legal regime continues to rely on individual countries to prosecute pirates, and governments have been reluctant to take on this burden. Accordingly, many pirates are apprehended, only to be quickly released. In addition, many large commercial vessels rely on private armed guards to prevent pirate attacks, but the legal foundations governing such a force are shaky at best.
National governments have redoubled efforts to bring pirates to justice as well. In 2010, the United States held its first piracy trial since its civil war, soon followed by Germany's first trial in over four hundred years. Other agreements have been established to try pirates in nearby countries like Kenya, such as the UNODC Trust Fund to Support the Initiatives of States to Counter Piracy off the Coast of Somalia, established in January 2010. Under the mandate of the Contact Group on Piracy off the Coast of Somalia, the fund aims to defray the financial capital required from countries like Kenya, Seychelles, and Somalia to prosecute pirates, as well as to increase awareness within Somali society of the risk associated with piracy and criminal activity. Future efforts to combat piracy should continue to focus on enhancing regional cooperation and agreements, strengthening the international and domestic legal instruments necessary to prosecute pirates, and addressing the root causes of piracy.
Reducing marine pollution and climate change: Mixed progress
Pollution has degraded environments and ravaged biodiversity in every ocean. Much contamination stems from land-based pollutants, particularly along heavily developed coastal areas. The UN Environment Program (UNEP) Regional Seas Program has sponsored several initiatives to control pollution, modeled on a relatively successful program in the Mediterranean Sea. In 1995, states established the Global Program of Action for the Protection of the Marine Environment from Land-Based Activities, which identifies sources of land-based pollution and helps states establish priorities for action. It has been successful in raising awareness about land-based pollution and offering technical assistance to regional implementing bodies, which are so often starved for resources. More recently, 193 UN member states approved the Nagoya Protocol on biodiversity, which aims to halve the marine extinction rate by 2020 and extend protection to 10 percent of the world's oceans.
Shipping vessels are also a major source of marine pollution. Shipping is the most environmentally friendly way to transport bulk cargoes, but regulating maritime pollution remains complicated because of its inherently transnational nature. Shipping is generally governed by the International Maritime Organization (IMO), which regulates maritime pollution through the International Convention for the Prevention of Pollution from Ships (MARPOL). States are responsible for implementing and enforcing MARPOL among their own fleets to curb the most pernicious forms of maritime pollution, including oil spills, particulate matter such as sulfur oxides (SOx) and nitrogen oxides (NOx), and greenhouse gas emissions. Port cities bear the brunt of air pollution, which devastates local air quality because most ships burn bunker fuel (the dirtiest form of crude oil). The IMO's Marine Environmental Protection Committee has also taken important steps to reduce SOx and NOx emissions by amending the MARPOL guidelines to reduce particulate matter from ships. Despite such efforts, a 2010 study (PDF) from the Organization for Economic Cooperation and Development (OECD) found that international shipping still accounts for nearly 3 percent of all greenhouse gases.
The IMO has achieved noteworthy success in reducing oil spilled into the marine environment. Despite a global shipping boom, oil spills are at an all-time low. The achievements of the IMO have been further strengthened by commitments by the Group of Eight to cooperate on oil pollution through an action plan that specifically targets pollution prevention for tankers. The IMO should strive to replicate this success in its efforts to reduce shipping emissions.
Climate change is also exacerbating environmental damage. In June 2009, global oceans reached their highest recorded average temperature: 17 degrees Celsius. As the world warms, oceans absorb increased levels of carbon dioxide, which acidifies the water and destroys wetlands, mangroves, and coral reefs—ecosystems that support millions of species of plants and animals. According to recent studies, ocean acidity could increase by more than 150 percent by 2050 if counteracting measures are not taken immediately. Moreover, melting ice raises sea levels, eroding beaches, flooding communities, and increasing the salinity of freshwater bodies. And the tiny island nation of the Maldives, the lowest-lying country in the world, could be completely flooded if sea levels continue to rise at the same rate.
Individual states are responsible for managing changes in their own marine climates, but multilateral efforts to mitigate the effect of climate change on the oceans have picked up pace. In particular, the UNEP Regional Seas Program encourages countries sharing common bodies of water to coordinate and implement sound environmental policies, and promotes a regional approach to address climate change.
Sustainable fisheries policies on the high seas: An ecological disaster
States have the legal right to regulate fishing in their exclusive economic zones (EEZs), which extend two hundred nautical miles from shore—and sometimes beyond, in the case of extended continental shelves. But outside the EEZs are the high seas, which do not fall under any one country's jurisdiction. Freedom of the high seas is critical to the free flow of global commerce, but spells disaster for international fisheries in a textbook case of the tragedy of the commons. For years, large-scale fishing vessels harvested fish as fast as possible with little regard for the environmental costs, destroying 90 percent of the ocean's biomass in less than a century. Overall, fisheries suffer from two sets of challenges: ineffective enforcement capacity and lack of market-based governance solutions to remedy perverse incentives to overfish.
Although there are numerous international and multilateral mechanisms for fisheries management, the system is marred by critical gaps and weaknesses exploited by illegal fishing vessels. Articles 117 and 118 of the UN Convention on the Law of the Sea (UNCLOS) enumerate the specific fisheries responsibilities of state parties, placing the onus on national governments to form policies and regional agreements that ensure responsible management and conservation of fish stocks in their respective areas. UNCLOS was further strengthened by the UN Fish Stocks Agreement (FSA), which called for a precautionary approach toward highly migratory and straddling fish stocks that move freely in and out of the high seas. Seventy-eight countries have joined the FSA thus far, and a review conference in May 2010 was hailed as a success due to the passage of Port State Measures (PSMs) to combat illegal, unreported, and unregulated (IUU) fishing. Yet fish stocks have continued to stagnate or decline to dangerously low levels, and the PSMs have largely failed to prevent IUU operations.
Regional fishery bodies (RFBs) are charged with implementation and monitoring. The RFBs provide guidelines and advice on a variety of issues related to fishing, including total allowable catch, by-catch, vessel monitoring systems, areas or seasons closed for fishing, and recording and reporting fishery statistics. However, only a portion of these bodies oversee the management of their recommendations, and some RFBs allow members to unilaterally dismiss unfavorable decisions. Additionally, RFBs are not comprehensive in their membership and, for the most part, their rules do not apply to vessels belonging to a state outside the body.
Even when regional bodies make a binding decision on a high-seas case, implementation hinges on state will and capacity. In 2003, the UN General Assembly established a fund to assist developing countries with their obligations to implement the Fish Stocks Agreement through RFBs. The overall value of the fund remains small, however, and countries' compliance is often constrained by resource scarcity. This results in spotty enforcement, which allows vessels to violate international standards with impunity, particularly off the coasts of weak states. Migratory species like blue fin tuna are especially vulnerable because they are not confined by jurisdictional boundaries and have high commercial value.
Some of the RFBs with management oversight, such as the Commission for the Conservation of Antarctic Marine Living Resources and the South East Atlantic Fisheries Organization, have been relatively effective in curbing overfishing. They have developed oversight systems and specific measures to target deep-water trawl fishing and illegal, unreported, and unregulated fishing on the high seas. Many regional cooperative arrangements, however, continue to suffer from weak regulatory authority. At the same time, some regions like the central and southwest Atlantic Ocean lack RFBs. Some have suggested filling the void with market-based solutions like catch shares, which could theoretically alter the incentives toward stewardship. Catch shares (also known as limited access privilege programs) reward innovation and help fisheries maximize efficiency by dedicating a stock of fish to an individual fisherman, community, fishery association, or an individual state. Each year before the beginning of the fishing season, commercial fishermen would know how much fish they are allowed to catch. They would then be allowed to buy and sell shares to maximize profit. By incorporating free-market principles, fisheries could reach a natural equilibrium at a sustainable level. According to research, more sustainable catch shares policies could increase the value of the fishing industry by more than $36 billion. Although allocating the shares at the domestic—much less international—level remains problematic, the idea reflects the kind of policy work required to better manage the global commons.
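To make the catch-share mechanism described above concrete, the following toy Python sketch allocates a season's total allowable catch and lets holders trade quota. The fleet names, tonnage, and even split are invented for illustration; they are not drawn from the report or from any actual fishery.

```python
# Toy model of a catch-share (limited access privilege) scheme, sketched only to
# illustrate the mechanism described in the text; all figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Share:
    holder: str
    tons: float  # portion of the total allowable catch held by this holder

def allocate(total_allowable_catch, holders):
    """Split the season's total allowable catch evenly among holders."""
    per_holder = total_allowable_catch / len(holders)
    return {name: Share(name, per_holder) for name in holders}

def trade(shares, seller, buyer, tons):
    """Transfer quota between holders, as a market for shares would allow."""
    if shares[seller].tons < tons:
        raise ValueError("seller does not hold enough quota")
    shares[seller].tons -= tons
    shares[buyer].tons += tons

# Example: a 9,000-ton total allowable catch split among three hypothetical fleets,
# with one fleet selling part of its quota to another before the season opens.
shares = allocate(9000, ["fleet_a", "fleet_b", "fleet_c"])
trade(shares, "fleet_a", "fleet_c", 500)
for s in shares.values():
    print(f"{s.holder}: {s.tons:.0f} tons")
```

In practice the initial allocation, the monitoring of landings against quota, and the rules for transfers would be set by the managing fishery body rather than split evenly as in this sketch.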
Managing the Arctic: At a crossroads
Arctic ice is melting at unprecedented rates. At this pace, experts estimate that the Arctic could be seasonally ice free by 2040, and possibly much earlier. As the ice recedes and exposes valuable new resources, multilateral coordination will become even more important among states (and indigenous groups) jockeying for position in the region.
The melting ice is opening up potentially lucrative new sea routes and stores of natural resources. Since September 2009, cargo ships have been able to traverse the fabled Northwest and Northeast Passages, which are significantly shorter than traditional routes around the capes or through the canals. Widening sea routes also mean that fishing fleets can travel north in search of virgin fishing stock, and that cruise ships can carry tourists chasing a last glimpse of the disappearing ice. At the same time, untapped resources such as oil, natural gas, rare earth minerals, and massive renewable wind, tidal, and geothermal energy hold enormous potential. In a preliminary estimate, the U.S. Geological Survey estimated that the Arctic could hold 22 percent of the world's hydrocarbon resources, including 90 billion barrels of oil and 1,670 trillion cubic feet of natural gas. Beyond oil and gas, the Arctic has valuable mineral commodities such as zinc, nickel, and coal.
But new opportunities in the Arctic also portend new competition among states. In August 2007, Russia symbolically planted a flag on the Arctic floor, staking a claim to large chunks of Arctic land. Other Arctic powers including the United States, Canada, Norway, and Denmark have also laid geographical claims. The European Union crafted a new Arctic policy, and China sent an icebreaker on three separate Arctic expeditions. Each country stands poised to grab new treasure in this increasingly important geostrategic region.
The UN Convention on the Law of the Sea (UNCLOS) is a solid foundation on which to build and coordinate national Arctic policies, especially articles 76 and 234, which govern the limits of the outer continental shelf (OCS) and regulate activities in ice-covered waters, respectively. However, there remains a formidable list of nagging sovereignty disputes that will require creative bilateral and multilateral resolutions. The Arctic Council, a multilateral forum comprising eight Arctic nations, has recently grown in international prominence, signing a legally binding treaty on search and rescue missions in May 2011 and drawing high-level policymakers to its meetings. While these are significant first steps, the forum has yet to address other issues such as overlapping OCS claims, contested maritime boundaries, and the legal status of the Northwest Passage and the Northern Sea Route.
U.S. Ocean Governance Issues
The United States has championed many of the most important international maritime organizations over the past fifty years. It helped shape the decades-long process of negotiating the United Nations Convention on the Law of the Sea (UNCLOS) and has played a leading role in many UNCLOS-related bodies, including the International Maritime Organization. It has also served as a driving force behind regional fisheries organizations and Coast Guard forums. Domestically, the United States has intermittently been at the vanguard of ocean policy, such as the 1969 Stratton Commission report, multiple conservation acts in the 1970s, the Joint Ocean Commission Initiative, and, most recently, catch limits on all federally managed fish species. The U.S.-based Woods Hole Oceanographic Institution and the Monterey Bay Aquarium Research Institute have long been leaders in marine science worldwide. And from a geopolitical perspective, the U.S. Navy secures the world's oceans and fosters an environment where global commerce can thrive.
Yet the United States lags behind on important issues, most notably regarding its reluctance to ratify UNCLOS. And until recently, the United States did not have a coherent national oceans policy. To address this gap, U.S. president Barack Obama created the Ocean Policy Task Force in 2009 to coordinate maritime issues across local, state, and federal levels, and to provide a strategic vision for how oceans should be managed in the United States. The task force led to the creation of a National Ocean Council, which is responsible for "developing strategic action plans to achieve nine priority objectives that address some of the most pressing challenges facing the ocean, our coasts, and Great Lakes." Although it has yet to make serious gains, this comprehensive oceans policy framework could help lay the groundwork for coordinating U.S. ocean governance and harmonizing international efforts.
Should the United States ratify the UN Convention on the Law of the Sea?
Yes: The UN Convention on the Law of the Sea (UNCLOS), which created the governance framework that manages nearly three-quarters of the earth's surface, has been signed and ratified by 162 countries and the European Union. But the United States remains among only a handful of countries to have signed but not yet ratified the treaty—even though it already treats many of the provisions as customary international law. Leaders on both sides of the political aisle as well as environmental, conservation, business, industry, and security groups have endorsed ratification in order to preserve national security interests and reap its myriad benefits, such as securing rights for U.S. commercial and naval ships and boosting the competitiveness of U.S. companies in seafaring activities. Notably, all of the uniformed services—and especially the U.S. Navy—strongly support UNCLOS because its provisions would only serve to strengthen U.S. military efforts. By remaining a nonparty, the United States lacks the credibility to promote its own interests in critical decision-making forums as well as bring complaints to an international dispute resolution body.
No: Opponents argue that ratifying the treaty would cede sovereignty to an ineffective United Nations and constrain U.S. military and commercial activities. In particular, critics object to specific provisions including taxes on activities on outer continental shelves; binding dispute settlements; judicial activism by the Law of the Sea Tribunal, especially with regard to land-based sources of pollution; and the perceived ability of UNCLOS to curtail U.S. intelligence-gathering activities. Lastly, critics argue that because UNCLOS is already treated as customary international law, the United States has little to gain from formal accession.
Should the United States lead an initiative to expand the Container Security Initiative globally?
Yes: Some experts say the only way to secure a global economic system is to implement a global security solution. The U.S.-led Container Security Initiative (CSI) helps ensure that high-risk containers are identified and isolated before they reach their destination. Fifty-eight countries are already on board with the initiative, and many others have expressed interest in modeling their own security measures on the CSI. The World Customs Organization called on its members to develop programs based on the CSI, and the European Union agreed to expand the initiative across its territory. With its robust operational experience, the United States is well positioned to provide the technical expertise to ensure the integrity of the container system.
No: Opponents maintain that the United States can hardly commit its tax dollars abroad for a global security system when it has failed to secure its own imports. To date, more than $800 million and considerable diplomatic energy have been invested in CSI to expand the program to fifty-eight international ports, where agents are stationed to screen high-risk containers. Given the scale of world trade, the United States imports more than 10 million containers annually, and only a handful of high-risk boxes can be targeted for inspection. After huge expenditures and years of hard work to expand this program after September 11, 2001, only about 86 percent of the cargo that enters the United States transits through foreign ports covered under CSI, and of that, only about 1 percent is actually inspected (at a cost to the U.S. taxpayer of more than $1,000 per container). Despite congressional mandates to screen all incoming containers, critics say that costs make implementing this mandate virtually impossible. The limited resources the United States has available, they argue, should be invested in protecting imports bound specifically for its shores.
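The critics' scale argument can be made concrete with a back-of-the-envelope calculation. The sketch below simply restates the figures quoted above (10 million containers, 86 percent coverage, 1 percent inspection rate, more than $1,000 per inspection) and multiplies them through; it is an illustration, not an official cost estimate.

```python
# Back-of-the-envelope sketch of the CSI coverage argument quoted above.
# All inputs restate figures from the text; treat them as rough assumptions.

annual_imports = 10_000_000        # containers entering the U.S. each year (approx.)
csi_coverage = 0.86                # share transiting foreign ports covered by CSI
inspection_rate = 0.01             # share of that covered cargo actually inspected
cost_per_inspection = 1_000        # USD per inspected container (lower bound)

covered = annual_imports * csi_coverage
inspected = covered * inspection_rate
annual_cost = inspected * cost_per_inspection

print(f"Containers passing through CSI ports: {covered:,.0f}")
print(f"Containers actually inspected:        {inspected:,.0f}")
print(f"Implied annual inspection cost:       ${annual_cost:,.0f}+")
# => roughly 86,000 inspections out of 10 million boxes, on the order of $86 million a year
```

On these assumptions, only a tiny fraction of imports is ever physically inspected, which is the heart of the critics' claim that universal screening mandates are unaffordable.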
Should the United States be doing more to address the drastic decline in the world's fisheries?
Yes: Advocates say that the further demise of global fish stocks, beyond being a moral burden, undermines the commercial and national security interests of the United States. The depletion of fish stocks is driven in large part by the prevalence of illegal, unreported, and unregulated (IUU) fishing and the overcapitalization of the global commercial fishing fleet from domestic subsidies. To protect domestic commercial fisheries and the competitiveness of U.S. exports in the international seafood market, the United States should enhance efforts by the National Oceanic and Atmospheric Administration to manage, enforce, and coordinate technical assistance for nations engaged in IUU fishing.
Domestically, the United States has taken important steps to address the critical gaps in fisheries management. In 2012, it became the first country to impose catch limits on all federally-managed fish species. Some species like the mahi mahi will be restricted for the first time in history. Many environmental experts hailed the move as a potential model for broader regional and international sustainable fisheries policy. To capitalize on such gains, the United States should aggressively work to reduce fishing subsidies in areas such as Europe that promote overcapitalization and thus global depletion of fish stocks. The United States could also promote market-based mechanisms, like catch shares and limited access privilege programs, to help fishermen and their communities curb overfishing and raise the value of global fisheries by up to $36 billion.
No: Critics argue that fisheries management is by and large a domestic issue, and that the United States has little right to tell other nations how to manage their own resources, particularly when such measures could harm local economies. They contend that the science behind overfishing is exaggerated, as are the warnings about the consequences of an anticipated fisheries collapse. Existing conventions like the 1995 Fish Stock Agreement already go far enough in addressing this issue. Any additional efforts, they contend, would be a diplomatic overreach, as well as an excessive burden on a struggling commercial fishing industry. Critics also question how market-based mechanisms, such as catch-shares, would be distributed, traded, and enforced, warning that they would lead to speculative bubbles.
Should the United States push for a more defined multilateral strategy to cope with the melting Arctic?
Yes: The melting Arctic holds important untapped political, strategic, and economic potential for the U.S. government, military, and businesses. This emerging frontier could potentially support a variety of economic activities, including energy exploration, marine commerce, and sustainable development of new fisheries. Countries such as Russia, Canada, Norway, and China have already made claims to the region, yet the United States remains on the sideline without a comprehensive Arctic strategy. The UN Convention on the Law of the Sea (UNCLOS) remains the premier forum of negotiating and arbitrating disputes over contested territory. As a nonparty, however, the United States loses invaluable leverage and position. In addition, the U.S. military does not have a single icebreaker, whereas Russia operates over thirty. Experts argue that the U.S. government should also adopt the recently proposed Polar Code, which is a voluntary agreement that "sets structural classifications and standards for ships operating in the Arctic as well as specific navigation and emergency training for those operating in or around ice-covered waters."
No: Opponents argue that Arctic Council activities and the 2009 National Security Presidential Directive, which updated U.S. Arctic polices, are sufficient. Any collaboration with Canada to resolve disputes over the Northwest Passage might undermine freedom of navigation for U.S. naval assets elsewhere, especially in the Strait of Hormuz and the Taiwan Straits, and this national security concern trumps any advantages from collaborating on security, economic, or environmental issues in the Arctic. Last, given the dominant Russian and Canadian Arctic coastlines, future Arctic diplomacy might best be handled bilaterally rather than through broader multilateral initiatives.
April 2013: Japan included in Trans-Pacific Partnership negotiations
Japan agreed to join negotiations over the Trans-Pacific Partnership (TPP), an ambitious free trade agreement between countries along the Pacific rim. Since the broad outline of the agreement was introduced in November 2011, sixteen rounds of negotiations have thus far brought eleven countries together to discuss the TPP. The addition of Japan, a major economic force in the region, as the twelfth participant comes as an important step in creating a robust agreement. Already, the South China Sea is the second-busiest shipping lane in the world, and should the TPP become a reality, transpacific shipping would dramatically increase. The seventeenth round of negotiations will take place in May and the current goal for agreement is October 2013.
March 2013: IMO pledges to support implementation of new code of conduct on piracy
At a ministerial meeting in Cotonou, Benin, the International Maritime Organization (IMO) pledged to support the implementation of a new code of conduct on piracy and other illicit maritime activity. The Gulf of Guinea Code of Conduct, drafted by the Economic Community of Central African States and the Economic Community of West African States, in partnership with the IMO, contains provisions for interdicting sea- and land-based vehicles engaged in illegal activities at sea, prosecuting suspected criminals, and sharing information between state parties. The code builds on several existing frameworks to create a sub-regional coast guard. The agreement is set to open for signature in May 2013.
March 2013: New fishing restrictions on sharks and rays
Delegates attending a meeting of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) voted to place robust export restrictions on five species of sharks and two species of manta rays. Over the past fifty years, the three shark species—the oceanic whitetip, hammerhead, and porbeagle—have declined by more than 70 percent. Although experts cautioned that the new rules would be difficult to enforce in practice, the decision marked an important victory over economic interests, particularly of China and Japan.
January 2013: Philippines to challenge China's maritime claims in South China Sea
The Philippine government announced its intention to take China to an international arbitration tribunal based on claims that China violated the UN Convention on the Law of the Sea. The dispute dates back to mid-2012, when tensions flared over the Scarborough shoal, which is claimed by both countries.
China, Taiwan, Vietnam, Malaysia, Brunei, and the Philippines have competing territorial and jurisdictional claims to the South China Sea, particularly over rights to exploit its potentially vast oil and gas reserves. Control over strategic shipping lanes and freedom of navigation are also increasingly contested, especially between the United States and China.
September 2012: Arctic ice reaches record low
In September 2012, sea ice in the Arctic Ocean shrank to a record low, covering just 24 percent of the ocean's surface and shattering the previous record low of 29 percent set in 2007. The finding not only has implications for climate change and environmental stability, but also for heightened competition among states jockeying for access to critical resources in the region. For the first time, the melting Arctic has exposed troves of natural resources including oil, gas, and minerals, as well as newly accessible shipping routes. The United States, Russia, and several European states already control parts of the Arctic, and China is also an increasing presence.
September 2012: Tensions flare in the East China Sea
In September 2012, Japan purchased three islands in the East China Sea that form part of the Senkaku Islands, known as the Diaoyu Islands to the Chinese. The islands, claimed by both countries, have been controlled by Japan since 1895, but sovereignty remains hotly contested. Following Japan's announcement, protests broke out across China, and Chinese leaders accused Japan of "severely infringing" upon their sovereignty.
In a move to affirm its claim to the islands, China announced its intention to submit its objections to the Commission on the Limits of the Continental Shelf under UNCLOS, and dispatched patrol ships to monitor the islands. In December 2012, tensions flared after a small Chinese aircraft flew into airspace over the islands, and both countries sent naval vessels to patrol nearby waters. Both sides remain adamant that there is no room for negotiations over their control of the islands, which are in close proximity to strategic shipping routes, fishing grounds, and potentially lucrative oil reserves.
Options for Strengthening Global Ocean Governance
There are a series of measures, both formal and informal, that can be taken to strengthen U.S. and global ocean governance. First, the United States must begin by finally ratifying the UN Convention on the Law of the Sea. On this foundation, the United States should then tap hitherto underused regimes, update twentieth-century agreements to reflect modern ocean challenges, and, in some cases, serve as the diplomatic lead in pioneering new institutions and regimes. These recommendations reflect the views of Stewart M. Patrick, senior fellow and director of the International Institutions and Global Governance Program, and Scott G. Borgerson, former visiting fellow for ocean governance.
In the near term, the United States and its international partners should consider the following steps:
- Ratify UNCLOS
The United States should finally join the UN Convention on the Law of the Sea (UNCLOS), an action that would give it further credibility and make the United States a full partner in global ocean governance. This carefully negotiated agreement has been signed and ratified by 162 countries and the European Union. Yet despite playing a central role shaping UNCLOS's content, the United States has conspicuously failed to join. It remains among only a handful of countries with a coastline, including Syria, North Korea, and Iran, not to have done so.
Emerging issues such as the melting Arctic lend increased urgency to U.S. ratification. By rejecting UNCLOS, the United States is freezing itself out of important international policymaking bodies, forfeiting a seat at decision-making forums critical to economic growth and national security interests. One important forum where the United States has no say is the commission vested with the authority to validate countries' claims to extend their exclusive economic zones, a process that is arguably the last great partitioning of sovereign space on earth. As a nonparty to the treaty, the United States is forgoing an opportunity to extend its national jurisdiction over a vast ocean area on its Arctic, Atlantic, and Gulf coasts—equal to almost half the size of the Louisiana Purchase—and abdicating an opportunity to have a say in deliberations over other nations' claims elsewhere.
Furthermore, the convention allows for an expansion of U.S. sovereignty by extending U.S. sea borders, guaranteeing the freedom of ship and air traffic, and enhancing the legal tools available to combat piracy and illicit trafficking. Potential participants in U.S.-organized flotillas and coalitions rightly question why they should assist the United States in enforcing the rule of law when the United States refuses to recognize the convention that guides the actions of virtually every other nation.
- Coordinate national ocean policies for coastal states
The creation of a comprehensive and integrated U.S. oceans policy should be immediately followed by similar efforts in developing maritime countries, namely Brazil, Russia, India, and China (BRIC). These so-called BRIC nations will be critical players in crafting domestic ocean policies that together form a coherent tapestry of global governance. Ideally, such emerging powers would designate a senior government official, and in some cases the head of state, to liaise with other coastal states and regional bodies to coordinate ocean governance policies and respond to new threats. Consistent with the Regional Seas Program, the ripest opportunity for these efforts is at the regional level. With UN assistance, successful regional initiatives could then be harmonized and expanded globally.
- Place a moratorium on critically endangered commercial fisheries
Commercial fishing, a multi-billion dollar industry in the United States, is in grave danger. The oceans have been overfished, and it is feared that many fish stocks may not rebound. In the last fifty years, fish that were previously considered inexhaustible have been reduced to alarmingly low levels. Up to 90 percent of large predatory fish are now gone. Nearly half of fish stocks in the world have been fully exploited and roughly one-third have been overexploited. The recent imposition of catch limits on all federally-managed fish species is an important and long overdue first step, which should be expanded and strengthened to a moratorium on the most endangered commercial fisheries, such as the Atlantic bluefin tuna. But tuna is hardly alone in this predicament, and numerous other species are facing the same fate. Policymakers should stand up to intense political pressure and place fishing moratoriums on the most threatened fisheries to give them a chance to rebound. Doing so would be a courageous act that would help rescue collapsing fish stocks while creating a commercially sustainable resource.
In the longer term, the United States and its international partners should consider the following steps:
- Strengthen and update UNCLOS
The UN Convention on the Law of the Sea (UNCLOS) and related agreements serve as the bedrock of international ocean policy. However, UNCLOS is thirty years old. If it is to remain relevant and effective, it must be strengthened and updated to respond to emerging threats such as transnational crime and marine pollution, as well as by applying market-based principles, such as catch shares, to commercial fisheries, especially on the high seas. Lastly, UNCLOS Article 234, which applies to ice-covered areas, should be expanded to better manage the opening Arctic, which will be an area of increasing focus and international tension over the coming years.
The international community should also counter the pressure of coastal states that unilaterally seek to push maritime borders seaward, as illustrated by China's claim to all of the South China Sea. Additionally, states should focus on using UNCLOS mechanisms to resolve nagging maritime conflicts, such as overlapping exclusive economic zones from extended continental shelf claims, and sovereignty disputes, such as those over the Spratly Islands and Hans Island.
- Bolster enforcement capacity
Many ocean-related governance issues have shortcomings not because rules for better management do not exist, but because weak states cannot enforce them. A failure in the oversight of sovereign waters inevitably leads to environmental degradation and, in cases like Somalia, can morph into problems with global implications, such as piracy. Accordingly, the international community should help less developed coastal states build the capacity to enforce (1) fisheries rules for fishing fleets; (2) International Convention for the Prevention of Pollution from Ships (MARPOL) regulations to reduce ocean dumping and pollution; (3) other shipping regulations in states with open registries such as Liberia, Panama, Malta, and the Marshall Islands; and (4) existing mandates created to stop illicit trafficking. Developed countries should also help less developed areas monitor environmental variables such as acidification, coral reefs, and fisheries.
|
<urn:uuid:0778adf2-8d1c-4e4e-b67a-56d782cc034d>
|
CC-MAIN-2013-20
|
http://www.cfr.org/energyenvironment/global-oceans-regime/p21035
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.940544
| 10,787
| 3.109375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"climate change",
"greenhouse gas",
"nitrous oxide"
],
"nature": [
"biodiversity",
"conservation",
"ecological",
"ecosystems",
"endangered species",
"invasive species",
"mangroves",
"wetlands"
]
}
|
{
"strong": 11,
"weak": 1,
"total": 12,
"decision": "accepted_strong"
}
|
Measuring environmental impact
Our approach to adopting a responsible attitude to climate change starts with the assessment of the environmental impact of each stage in our products' life by means of a Life Cycle Assessment (LCA).
As part of our internal environmental program, Ecophon has carried out LCAs on our products in collaboration with Ecobilan SA (Pricewaterhouse Coopers), and according to the international ISO 14040 to ISO 14044 series of standards.
To give you an overview of the process and an idea of our products' carbon footprint, we will consider a fictitious panel, the "Ecophon index sound-absorbing acoustic panel" based on the most common physical characteristics of Ecophon products:
Format: 1000 x 1000mm
Edge design: Edge E
Surface: Akutex FT
Weight: 1.5 kg/m2
Based on "cradle-to-grave" LCA calculations, the total carbon footprint of our fictitious panel is 2.7 kg CO2/m2, from extraction of raw materials through to the end-of-life phase (landfill). Click on the links below to find out more about the various phases of our panel's life and its environmental impact.
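For readers who want to see how a cradle-to-grave figure like 2.7 kg CO2/m2 is assembled, the sketch below sums per-phase contributions for the fictitious index panel. The phase breakdown is purely illustrative (Ecophon's published per-phase data does not appear in this text); only the 2.7 kg CO2/m2 total comes from the page above.

```python
# Illustrative cradle-to-grave carbon footprint sum for the fictitious
# "Ecophon index" panel. The per-phase values are invented for the example;
# only the ~2.7 kg CO2/m2 total is taken from the text above.

phases_kg_co2_per_m2 = {
    "raw material extraction": 1.1,   # assumed
    "manufacturing":           0.9,   # assumed
    "transport":               0.3,   # assumed
    "installation and use":    0.1,   # assumed
    "end of life (landfill)":  0.3,   # assumed
}

total = sum(phases_kg_co2_per_m2.values())
print(f"Total cradle-to-grave footprint: {total:.1f} kg CO2/m2")  # -> 2.7
```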
|
<urn:uuid:e30642db-8c7c-4a30-bae3-ec71b266514b>
|
CC-MAIN-2013-20
|
http://www.ecophon.com/uk/Topmenu-right-Toolbar-container/Ecophon/Towards-a-better-environment-/Measuring-environmental-impact/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.885527
| 249
| 2.53125
| 3
|
[
"climate"
] |
{
"climate": [
"climate change",
"co2"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
What does Cancún offer the climate generation?
Posted by Huw Oliphant on December 8, 2010
Across the globe, young people are paying attention to what is happening at the COP16 climate negotiations in Cancun, Mexico. They are the “climate generation” — the ones who are going to have to live with our climate legacy, our tendency to prevaricate and our collective global failure to act sooner. The hope is that we will see substantive progress in Cancun towards an internationally binding agreement.
This hope was echoed by a group of 30 young climate champions (from Japan, Korea, Thailand, Indonesia, Australia and Vietnam) who met in Vietnam on 22-26 November 2010 and who called on the leaders of the world to stop talking and take action on climate change.
They were brought together by the British Council Climate Generation Project in a workshop that focused on green business and entrepreneurship, and aimed at helping participants develop their green projects through project management and leadership skills training, as well as interaction with experts in the field.
What can the climate generation do?
To begin with, business as usual is not an option. These young people are green entrepreneurs and they are all developing “cool” projects aimed at addressing climate and sustainability issues in their community.
For instance, Syuichi Ishibashi from Japan has developed an impressive energy literacy project to help monitor on-line energy use at home. Other projects include a factory making furniture from recycled materials in Thailand; a community based conservation project (link in Indonesian) near Lake Buret in Tulungagung, East Java; a green fashion show in Korea; and a project aimed at developing smart grid systems in Vietnam.
Young leaders can make a difference
Siti Nur Alliah, a British Council Climate Champion from Indonesia, is tackling a burning issue in the heart of Indonesia’s countryside in a quest to reverse climate change and lift a community out of poverty. As a community organiser, Alliah was keenly aware that farmers in Sekonyer village, central Borneo, remained poor no matter how much they burned surrounding forest to expand their farmland.
“Tragically, no matter how many resources are exploited, they are still considered as a community living under the poverty line,” she said.
So Alliah decided to see if there were more environmentally friendly agriculture methods that would have the added benefits of raising the quality and quantity of farm yields. The result was the Forest Farming project, which works with villagers in Sekonyer to expand their knowledge of agriculture to find better ways of managing their land.
Alliah says the project depended on cultivating local knowledge, encouraging the community to take action and fostering commitment in the village to sustained involvement. She says the great willingness of the indigenous people to work with her and her team to change their farming practices is a real sign that they are making a difference.
Climate Champion Hiroki Fukushima is part of a network of Japanese students who have issued the Climate Campus Challenge to educational institutions throughout the country. To help cut the education sector’s emissions of greenhouse gas, the students encourage universities to use renewable energy sources, retrofit campus buildings and buy green products. They then rank the colleges according to their success.
“We made an environmental survey of 334 universities and assessed them according to criteria, such as energy consumption per student, reduction in energy consumption, climate change policies, climate education for students and unique initiatives,” Hiroki said. “We published an Eco University Ranking and awarded certificates to universities with good environmental policies and activities, and organised a seminar to promote universities with good practice.”
Hiroki says the number of Japanese students participating in environmental activities has been on the rise since the 1990s but few of the activities were aimed at tackling climate change on campus. To remedy the situation, the core group studied projects overseas and thought how the activities could be best adapted to implement the campus challenge. They then enlisted students at various universities and established a network to realise the project.
Thai Climate Champion Panita Topathomwong is driving home her environmental message through her Cool Bus Cool Smile project. Panita is encouraging residents to cut their greenhouse gas emissions by leaving the car in the garage and hopping on public transport. She says that transport is one of the big sources of carbon dioxide emissions in urban centres and cutting back on car use would help lower the sector’s environmental impact and have the added benefit of easing gridlock.
So, with the support of the Bangkok Mass Transit Authority (BMTA) and the Ministry of Transport, Panita and her team organised a design competition to decorate three city buses and take the message to the road. The decorated buses toured Bangkok streets for three months after a high-profile launch in April 2009 attended by the Vice Minister of Transport, the Director of the BMTA, a representative from the British Embassy Bangkok and various media.
“We believe that by redecorating the buses with great images we conveyed the message about climate change to wider audiences in Bangkok. The buses were like mobile billboards which convinced everybody to be concerned about climate change,” Panita said.
It is cool to be a Climate Champion
Since 2008, the British Council Climate Generation has worked with over 120,000 young people from across the world interested in tackling climate change. Through the Climate Generation project, young people have the chance to come up with grassroots projects to combat and offset the effects of climate change. The participants are given the training and resources they need to realise their proposals and spread the word about the issue in their communities.
Climate Generation encourages young people interested in tackling climate change to connect with each other, come up with local solutions and reach out to local, national and international decision-makers.
As Climate Champions, programme participants have access to the training and information they need to ignite discussion in their communities and devise projects that will help people adapt to and mitigate climate change. The result is a global network of enthusiastic young people with the knowledge, contacts and on-the-ground resources to take action on climate change and make positive contributions to people’s lives.
Climate Champions have come from a wide variety of backgrounds including government, business, entrepreneurship, NGOs, education and media. Through training in communication and negotiation, they can learn how to put their plans into practice and give voice to the concerns of their generation.
Clearly, the Climate Champions are acting in advance of the outcomes at COP16 and “doing it themselves” but hope that real progress can be made at Cancun.
• ♦ •
For more information on the programme, please contact Huw Oliphant.
Article printed from OurWorld 2.0: http://ourworld.unu.edu/en
URL to article: http://ourworld.unu.edu/en/what-does-cancun-offer-for-the-climate-generation/
URLs in this post:
stop talking and take action: http://www.youtube.com/watch?v=b8APNC8R57w
Climate Generation Project: http://climatecoolnetwork.ning.com/
energy literacy project: http://e-idea2010.climate-change.jp/en/ishibashi.php
factory: http://www.kokoboard.com
community based conservation project: http://www.pplhmangkubumi.or.id/
Huw Oliphant: mailto:[email protected]
Copyright © 2008 OurWorld 2.0. All rights reserved.
|
<urn:uuid:d0496760-34af-4506-8ade-8a67600d956c>
|
CC-MAIN-2013-20
|
http://ourworld.unu.edu/en/what-does-cancun-offer-for-the-climate-generation/print/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.947348
| 1,654
| 2.546875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"climate change",
"greenhouse gas",
"renewable energy"
],
"nature": [
"conservation"
]
}
|
{
"strong": 5,
"weak": 0,
"total": 5,
"decision": "accepted_strong"
}
|
The Durban climate change negotiations are about three gaps.
The first is scientific and has to do with the amount of greenhouse gases we are pumping into the atmosphere, and the levels we need to reduce to in order to prevent irreversible changes to the climate system. The second is political and affects the course of the negotiations. The third is the gap between public perception of climate change and the reality of the change that has already begun.
All of these gaps are interrelated.
In 2009 countries made pledges to reduce greenhouse gas emissions in order to keep global average temperature increases below 2 degrees Celsius. Last week the United Nations Environment Programme issued “The Emissions Gap Report: Are the Copenhagen Accord Pledges Sufficient to Limit Global Warming to 2°C or 1.5°C?”
Following the conclusion of the 15th Conference of the Parties (COP 15) in Copenhagen, 42 industrialized countries submitted “economy-wide” emissions targets for 2020. Another 43 developing countries also submitted “nationally appropriate mitigation actions.” While the Copenhagen negotiations failed to produce an expected new climate change treaty, these pledges have become the basis for “analyzing the extent to which the global community is on track to meet long-term temperature goals” (for details, see the Emissions Gap Report).
They are also the source of much of the wrangling that continues in the Durban negotiations. The 2020 date is significant because that’s the date by which the Fourth Assessment Report of the Intergovernmental Panel on Climate Change states that emissions cuts need to start being made if the global average temperature increases are to be kept below 2°C.
The report states that while it is possible to reach the required level of reductions by 2020, the current reduction pledges “are not adequate to reduce emissions to a level consistent with the 2°C target, and therefore lead to a gap.”
Research tells us that annual emissions need to be around 44 gigatonnes (Gt) of CO2 equivalent by 2020 to have a likely chance of holding global average temperatures to 2°C or less. (A gigatonne is equal to 1 billion tonnes.) According to the International Energy Agency, the world emitted a record 30.6 Gt of CO2 in 2010 - a 5% increase over the previous record level in 2008.
The report states that if the “highest ambitions” of all the countries that signed the Copenhagen Accord are implemented, annual emissions of greenhouse gases would be cut by around 7 Gt of CO2 equivalent by 2020.
This represents a cut in annual emissions to around 49 Gt of CO2 equivalent, which would still leave a gap of around 5 Gt compared with where we need to be—”a gap equal to the total emissions of the world’s cars, buses and trucks in 2005.”
However, if only the lowest ambition pledges are implemented, and if no clear rules are set in the negotiations, emissions could be around 53 Gt of CO2 equivalent in 2020 – “not that different from business-as-usual so the rules set in the negotiations clearly matter.” If it’s “business as usual” emissions could rise to 56 Gt by the same date.
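The gap arithmetic above can be made explicit with a short sketch. The numbers below restate the figures attributed to the UNEP report and the surrounding text (a 44 Gt target, roughly 56 Gt under business as usual, a 7 Gt cut under the highest-ambition pledges, about 53 Gt under the lowest-ambition case); the code only performs the subtraction.

```python
# Recomputing the "emissions gap" arithmetic quoted above (all values in
# gigatonnes of CO2 equivalent per year, for 2020). Figures come from the
# report as cited in the text; this just makes the subtraction explicit.

target_2020 = 44          # level consistent with a likely chance of staying below 2 degrees C
business_as_usual = 56    # projected 2020 emissions with no pledges implemented

scenarios = {
    "highest-ambition pledges": business_as_usual - 7,   # ~49 Gt
    "lowest-ambition pledges, lax rules": 53,
    "business as usual": business_as_usual,
}

for name, emissions in scenarios.items():
    gap = emissions - target_2020
    print(f"{name}: {emissions} Gt -> gap of {gap} Gt versus the {target_2020} Gt target")

# Even the highest-ambition case leaves a gap of about 5 Gt -- roughly the 2005
# emissions of all the world's cars, buses and trucks, as the report puts it.
```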
This points to the second gap -- in political will. The differences between those who are feeling the brunt of rapid climate change right now -- primarily many of the Small Island Developing States and Least Developed Countries -- and the major emitters are growing. The Framework Convention on Climate Change, negotiated nearly 20 years ago in Rio de Janeiro, calls on all nations to cooperate "...in accordance with their common but differentiated responsibilities and respective capabilities and their social and economic conditions."
This has been traditionally interpreted to mean that the industrialized nations of Europe and North America bear a large portion of the responsibility for dealing with the problem since they are primarily responsible for the vast majority of historical emissions. Indeed, the United States was only recently surpassed by China as the largest GHG emitter. However, the US, Canada and others have higher per capita emissions levels.
Nevertheless, Canada has led the charge of developed nations that now say that it is “unfair” for developed countries to cut total emissions while developing nations such as China, India and Brazil are not bound to reduce in the same way. Canada has become the first nation in the world to say it will not abide by the Kyoto Protocol, nor will it agree to a second commitment period.
This approach has widened the gap between those that support a renewed Kyoto Protocol -- most of the developing world and the European Union (subject to developing nations such as China getting onboard) -- and those that say they don’t want a second commitment period. This group is led by Canada, Russia and Japan but others have now joined, so many that there is little chance that the Kyoto Protocol, which lapses next year, will be renewed in Durban.
Such positioning and politicking is widening the gap between developed and developing countries. For some, like the Small Island Developing States, there is a sense of being abandoned to their fates. Delay does not bode well for the Arctic either. People from both regions need immediate and sustained emissions reductions.
The third gap is between what we think or believe is (or is not) happening and what the facts are telling us. The latter are based on observation and a growing body of scientific knowledge. There are literally thousands of peer reviewed scientific papers that demonstrate that the climate is changing and we are primarily responsible.
The public in developed countries -- and many developing nations -- still has not embraced this reality. While millions of people are mobilized and calling for action, there is still a strong predilection to ignore the signals that are all around us, or to use the current economic downturn as an excuse for inaction. Both lead to the same end -- they are further narrowing the window in which we have time to act. And they are passing the problem of adaptation on to future generations, when the price will be much higher.
|
<urn:uuid:219517cd-2a03-4a78-ba3a-ac236f11d909>
|
CC-MAIN-2013-20
|
http://www.grida.no/polar/blog.aspx?id=5055&p=5
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.956705
| 1,250
| 3.40625
| 3
|
[
"climate"
] |
{
"climate": [
"2°c",
"adaptation",
"climate change",
"climate system",
"co2",
"ghg",
"global warming",
"greenhouse gas"
],
"nature": []
}
|
{
"strong": 6,
"weak": 2,
"total": 8,
"decision": "accepted_strong"
}
|
William Gilly has seen a Kraken. The mythical squid beast with
ship-dooming tentacles surely exists, Gilly says, because he's seen
a baby one. "It was this big around," he says, making a circle as
big as a tire with his arms, a proud, boyish smile on his face.
Fishermen spotted the carcass of the 2.4-metre-long, 181-kilogram
baby giant squid in Monterey Bay three years ago, according to Gilly.
"If you're an assistant professor proposing to study it, I don't
think you'd get tenure," he says. "But one has to exist."
Gilly's laboratory at Stanford University's Hopkins Marine Station
in Pacific Grove brims with squid décor: stuffed animals, preserved
specimens, glass figurines, drawings, photographs, plastic toys,
and piñatas. He has studied these denizens of the sea for almost
four decades, and he's handled countless numbers of the fabled
Kraken's smaller cousins, the very real jumbo Humboldt squid. Like
their giant counterparts, Humboldt squid are enigmatic. No one has
seen them mate or lay eggs. No one has watched them develop from
egg to adult. No one knows how many exist.
These tentacled titans have now ensnared marine biologists with
another riddle: they have left their normal gathering grounds in
the Sea of Cortez in Baja California, Mexico. Fishermen worry that
a critical part of their livelihood may be gone. Many families have
depended on Humboldt squid since the slippery creatures moved in
droves into the Gulf of California's Guaymas Basin.
"We have a huge problem on our hands," says coordinator Juan
Pedro Vela Arreola of the Alianza de Ribereños y
Armadores, an association of fishermen and producers in Mexico.
"Fishermen are desperate."
Gilly blames the most recent El Niño in the Pacific Ocean for
forcing some Humboldt squid to migrate away. Others have physically
shrunk -- just one bizarre adaptation among their many strange
body-shifting traits. Leaders of Mexican fisheries and scientists
are banding together to figure out whether the diablos
rojos (red devils) will come back, and how to cope in the meantime.
"You have to learn to live in a very unpredictable way," says
Unai Markaida of El Colegio de la Frontera Sur (ECOSUR)
in Campeche, Mexico. "You live like a squid, and you adapt to the
life it leads."
"They're very mysterious," says marine biologist Danna Staaf,
stressing "very" in a slightly higher pitch.
Staaf, a self-described "cephalopodiatrist,"
completed her PhD with Gilly's team. She studied Humboldt squid on
two summer research cruises in the Gulf of California before the
latest El Niño drove them away. She and her crewmates pulled up to
15 squid out of the water every night, when they would come up to
the surface from depths of more than 975 metres to feed. Many of
the 9-kilogram cephalopods she caught were about 1.2 metres long.
Most were a year old. Scientists can tell their age by looking at
the number of rings on tiny crystals located near their brain
called statoliths, like botanists can age a tree by counting its
rings. These stones help a squid detect gravity and maintain their
balance and sense of space.
The scientists stored some of the catch in their "squid condo,"
an oversize cooler with continuously flowing water and six clear,
plastic tubes. They housed squid in separate tubes to keep them
from attacking one another. Over the next few days, they studied
the squids' behaviours and neurophysiology. For instance, altering
the temperature and oxygen content of the water could trigger or
change their escape response -- the way squid dart away when
startled by a predator.
Staaf dissected other squid to study their development, the
focus of her dissertation. She used their eggs and sperm to make
squid babies. She wanted to discover the temperature range at which
their eggs could hatch and develop. At the time, Humboldt squid had
spread to waters off the coast of Monterey. Marine scientists
thought the squid might establish new colonies there, but Staaf
wanted to know whether it was even possible for the animals to grow
in the much cooler waters of California's Pacific coast. In the
lab, she found, their
eggs can grow between temperatures of 15 and 25 degrees
Celsius, suggesting it is possible they could set up a breeding
population. Whether squid babies have the same preferences in the
wild is still unknown, Staaf says.
Other team members studied squid chromatophores, structures in
their muscles that allow them to change colour. Humboldts sometimes
flash from red to white as they propel themselves through dark
waters. Scientists don't know how they control these colour shifts,
whether it's the brain or local nerve cells that set off the
flickering. They also don't fully understand why they suddenly
change colours, but some speculate it's a form of communication.
When they're not in direct contact with other squid, their flesh
flickers from red to white in a random, mosaic-like pattern. Gilly
compares this to visual white noise. But as they interact with
their species, the irregular colour waves become more organised,
like a cephalopod Morse code, Gilly says. They can change the
intensity of the flashing, as well as the rhythm and frequency.
Squid might use these "calls," together with arm posturing, to
attract mates or to establish hierarchies, for example.
Gilly and his team have seen some of these behaviours through
underwater "Crittercams," about
the size of a 340-gram drink bottle, attached to a squid's back.
"Ideally, we'd also like to see what they eat," Gilly says, because
that might help his team study these animals. Scientists can't get
Humboldts to eat in the lab, so squid last only a few days in captivity.
Indeed, squid dislike being trapped, strongly. Sometimes they
get so anxious, they ram themselves against the tank. "They've
never seen the bottom. They've certainly never seen the wall of a
tank," Gilly says. "They're intelligent organisms. And when you put
one in an abnormal situation, they get totally freaked out. They
don't do their flickering in the lab," he notes.
Scientists do know that during the day, Humboldt squid forage
for silvery lanternfish and other small animals in the oxygen
minimum zone, a dark, cold netherworld more than 900 metres deep.
Here, oxygen is scarce, and while few animals can withstand its
hostile environment, Humboldts seem to thrive. "Living in low
oxygen -- that's surprising, especially for an animal that's an
athlete," Markaida says. How they move and how their nervous system
functions under these conditions is still a mystery.
Widespread changes in the temperatures of Earth's oceans have
compelled many creatures, including the powerful Humboldt squid, to
seek new climes. Their dominion usually stretches from Argentina to
California, but more recently they've been spotted in Canada and
Alaska. Scientists don't yet understand how the squid are settling
into their new hangouts. This month, a few Humboldts stranded
themselves on the beaches of Pacific Grove, Calif, while
whale-watching boats have spotted them near Point Pinos. These
sightings might be a sign they are returning, Gilly says.
|
<urn:uuid:04746e75-8bc8-45a0-b5cd-762614b6d55a>
|
CC-MAIN-2013-20
|
http://www.wired.co.uk/news/archive/2012-10/12/the-krakens-cousin?page=all
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.956129
| 1,686
| 2.734375
| 3
|
[
"climate"
] |
{
"climate": [
"adaptation",
"el niño"
],
"nature": []
}
|
{
"strong": 1,
"weak": 1,
"total": 2,
"decision": "accepted_strong"
}
|
Definition of Alveoli
Alveoli: The plural of alveolus. The alveoli are tiny air sacs within the lungs where the exchange of oxygen and carbon dioxide takes place.
Last Editorial Review: 6/14/2012
|
<urn:uuid:01f8086d-a557-4ede-bf39-69a511ca63a8>
|
CC-MAIN-2013-20
|
http://www.medterms.com/script/main/art.asp?articlekey=2212
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.737227
| 87
| 3.03125
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Will our beaches become an example of the Tragedy of the Commons?
NCAR Digital Library
The Tragedy of the Commons
The term "Tragedy of the Commons" was coined by Garrett Hardin in a 1968 article in Science magazine. The concept, however, dates back to the days of Aristotle. Briefly, it states that a shared resource is inevitably ruined by uncontrolled use.
The metaphor that Hardin uses to explain the concept is that of a community common or park on which the town’s people bring their cows to be fed. In the back of everyone’s mind is the fact that the common is going to be ruined because the grass is going to be eaten to depletion. Still, everyone wants to get grass for their cows. No one thinks or cares about the consequences of so many cows eating the grass, and the Tragedy of the Commons occurs.
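Hardin's metaphor can also be restated as a minimal toy model. The sketch below is not from this article and its numbers are arbitrary; it only shows how individually rational decisions to add one more cow exhaust a shared, slowly regrowing pasture.

```python
# Minimal toy model of Hardin's commons: each herder keeps adding a cow
# because the private gain exceeds the shared cost, until the pasture collapses.
# All parameters are arbitrary illustrations, not taken from the article.

herders = 10
cows = herders                # every herder starts with one cow
grass = 100.0                 # condition of the common (arbitrary units)
regrowth = 5.0                # grass regrown per season
grazing_per_cow = 0.8         # grass eaten per cow per season

season = 0
while grass > 0 and season < 50:
    season += 1
    cows += herders           # each herder adds a cow: private gain, shared cost
    grass = max(0.0, grass + regrowth - cows * grazing_per_cow)

print(f"The common is bare after {season} seasons with {cows} cows on it.")
```

With these made-up numbers the pasture is stripped within a handful of seasons, which is the dynamic the examples below describe at a global scale.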
Human actions that many categorize as examples of the phenomenon include human-created air pollution; the hunting of the American buffalo to near extinction in the 1800s; the widespread abuse and destruction of rainforests and our oceans’ coral reefs; and human-induced climate change due largely to the burning of fossil fuels for energy use.
Some people believe that the Tragedy of the Commons can only be averted by making most commodities private property. But how does someone own the air or the ocean? And can either the air or ocean stay unpolluted with populations of 10 million or more in the world’s megacities? Others believe the "Tragedy of the Commons" can be avoided through laws and taxing devices which make it more costly to serve one’s self interest over the common good.
What almost everyone can agree on for now is that such vital resources need some form of control so that the world’s natural resources can be sustained and the Tragedy of the Commons can be avoided.
Last modified February 19, 2006 by Teri Eastburn.
|
<urn:uuid:b9dff943-69f1-4fa7-baac-eabdcadfb431>
|
CC-MAIN-2013-20
|
http://www.windows2universe.org/earth/Atmosphere/tragedy_commons.html&edu=high
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.937407
| 782
| 3.1875
| 3
|
[
"climate"
] |
{
"climate": [
"climate change"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
|Huang Ti about 1000 BC|
That left the Nei Ching, written by Huang Ti about 2,697 BC, as the oldest known medical book through most of history. The work basically consists of conversations between the emperor and his physician Ch'i Pai. Parts of the dialog record medical descriptions of asthma-like symptoms and possible remedies.
Huang Ti, to whom the Nei Ching Su Wen is attributed, was known as the Yellow Emperor and reigned over Ancient China from approximately 2697-2598 B.C. He is known as the Father of Chinese Medicine. (1)
He is believed to have ruled China as the third of China's first five rulers from 2696-2598 B.C. There is an ongoing debate as to whether he actually existed or was the work of legends. There's also an ongoing debate as to whether he actually wrote the Nei Ching Su Wen or whether it was actually written about 1000 B.C. and antedated "as to enhance its value," according to one historian. Veith quotes one historian who questioned that Ti could possibly rule a nation and still have plenty of time for dialog with his physician Ch'i Pai and also time to write it all down. (2)
Of significance in the Nei Ching is the relevance of Yang, Yin and Tao. Ilza Veith, in her 2002 book, "The Yellow Emperor's Classic of Internal Medicine," described that in the beginning there was chaos between the three primary substances -- force, form and substance. This ultimately results in a light substance rising to form heaven, and a heavy substance sinking to form the earth.
|The Nei Ching|
This concept was similar to those of other civilizations: in the West, for example, disease was believed to result from an imbalance of the four humours. In China, disease was believed to be caused by too much of the essence yang and/or too little yin.
So in order to maintain health one would have to maintain a balance of Yang and Yin, which was ultimately accomplished by good behaviours towards Tao, which refers to "the way." Veith explains that man must completely adjust to the "flow" of the Universe, which was the responsibility of Tao. For example, the earth was dependent on the heavens, such as rain was needed to end drought, sun was needed to melt snow, etc. In this way the yearly cycle of life flowed smoothly and was completed in a year. This was Tao, or the way. Essentially, there was "the Tao of Heaven, the Tao of the Earth, and the Tao of Man, one fitting into the other as an indivisible entity."
The first paragraphs of the first chapter of the Nei Ching has the emperor asking his physician, Ch'i Po, why it is that ancient people used to live to be 100 years old and now people only live half that long. The physician answered:
"In ancient times those people who understood Tao (the way of self cultivation) patterned themselves upon the Yin and the Yang (the two principles of nature) and they lived in harmony with the arts of divination.... There was temperance in eating and drinking. Their hours of rising and retiring were regular and not disorderly and wild. By these means the ancients kept their bodies united with their souls, so as to fulfill their allotted span completely, measuring unto a hundred years before they passed away.... nowadays people are not like this; they use wine as beverage and they adopt recklessness as usual behavior. They enter the chamber (of love) in an intoxicated condition; their passions exhaust their vital forces; their cravings dissipate their true (essence); they do not know how to find contentment within themselves; they're not skilled in the control of their spirits. They devote all their attention to the amusement of their minds, thus cutting themselves off from the joys of long (life). Their risings and retiring is without regularity. For these reasons they reach only one half of the hundred years and then they degenerate." (3)
It is this "degeneration" then that causes diseases which plague a person in life, and many of which cause an early death. People that lived to be 100 are "in harmony with Tao, the Right Way." Health, or longevity, was completely dependent on a person's "behavior towards Tao, Veith explained. "Thus, man saw the universe endowed with a spirit that was indomitable in its strength and unforgiving toward disobedience." Longevity, thus, was a "token of sainthood." (4)
So asthma-like symptoms were believed to be caused by an imbalance of Yin and Yang. The lungs were believed to be responsible for metabolism and flow of fluids through the body, and an imbalance of Yang and Yin in the lungs will cause too much phlegm, edema, sweat and cause diseases such as breathing disorders. (5)
This ultimately obstructs Qi (also referred to as Chi). Imbalances of Yang and Yin are believed to be caused by obstruction of Qi, which may be described as the energy or life force that keeps the humors in balance and the body functioning properly.
The force of Qi was the essential force of keeping the body healthy, and it was inhaled with each breath after birth. Once inhaled it was up to each healthy organ to transfer both Qi and nutrients throughout the body.
In order for the organs of the body to function properly, Qi must continue to flow properly throughout the body. So dysfunction of the lung will result in failure of respiration, "leading to failure of fresh air to be inhaled and the turbid Qi of the body to be exhaled, with the resultant inadequate formation of Qi." (6)
Likewise, the lungs were associated with mucus. Yang was heat and Yin was cold. Cold was believed to diminish Yin in the lungs, and this resulted in an imbalance of Qi in the lungs, which resulted in an increase in mucus, which ultimately resulted in difficulty in breathing, or asthma-like symptoms.
It should be noted here that unlike Western medical doctrines such as the Hippocratic Corpus, the Nei Ching failed to specifically define any diseases. So terms equivalent to asthma and dyspnea were not used. Instead, diseases were referred to as "'injuries of the heart,' 'injuries of the lungs,' etc." (7)
The Nei Ching basically called for diagnosing diseases by measuring the pulse, and treating diseases by remedies that reset Yang and Yin, which mainly involved mental balance, herbal medicine, diet, massage, acupuncture (inserting needles into certain regions of the body) or moxibustion (placing cones of powdered leaves on various regions of the body and burning them until blisters form). Since diseases of the lungs were attributed to an imbalance of Yin brought on by cold, asthma remedies were believed to warm the lungs, balance Yin, decrease mucus, and make breathing easier.
Another neat similarity between the Nei Ching and the later-written Hippocratic Corpus (which includes the Hippocratic Oath) is that both writings mention the use of careful technique and responsibility by the physician.
The Nei Ching notes that "The most important requirement of the art of healing is that no mistake or neglect occur... poor medical workmanship is neglectful and careless and must therefore be combated, because a disease that is not completely cured can easily breed new disease or there can be a recurrence of the old disease... illness is comparable to the root; good medical work is comparable to the topmost branch; if the root is not reached, the evil influences cannot be subjugated... The superior physician helps before the early budding of the disease. The inferior physician begins to help when the disease has already developed; he helps when destruction has already set in. And since his help comes when the disease has already developed, it is said of him that he is ignorant. " (8)
While the Nei Ching is the oldest known recorded Chinese medical treatise, Shen Nung, who lived from 2838 to 2698 B.C., is often considered the founder of Chinese medicine as well as the "Fire Emperor." (9)
Shen Nung (2838-2698 B.C.)
One of the truly interesting things about ancient Chinese asthma treatment is the use of Ma Huang to treat asthma-like symptoms. The modern world refers to this plant as ephedra, and from it the bronchodilator ephedrine was derived in 1901.
The leaves and/or stems of the Ma Huang plant were dried and prepared so that they could be served as a drink, often a bitter-tasting yellow tea. Nung believed Ma Huang worked by reversing the flow of Qi, and the drink may actually have provided relief from an asthma attack.
While Veith describes that Western medicine reached China early in the 17th century, (10) it would be another 300 years before ephedrine would play a significant role in the treatment of asthma in the U.S. and Europe, as I describe in this post.
So while Ancient Chinese asthmatics may have been able to obtain asthma relief by using ephedra, the rest of the world (except for maybe Japan and Korea) would have to wait.
- Saunders, M, J.B. Dec, "Huang Ti Nei Ching Su Wen -- The Yellow Emperor's Classic of Internal Medicine," Calif Med. 1967 July; 107(1): 125–126
- Veith, Ilza, author /translator, "The Yellow emperor's Classic of Internal Medicine," 2002, Los Angeles, pages 4-6
- Ibid, page 97-8
- Ibid, pages 98 and 10-14
- "Qi Theory, damo-qigong.net, http://damo-qigong.net/qi-theory1.htm
- Ibid, http://damo-qigong.net/qi-theory.htm
- Veith, op cit, pages 49 and 50
- Veith, op cit, pages 57-8, also see chapter 26 beginning on page 217
- Navara, Tova, "The Encyclopedia of Asthma and Respiratory Disorders," 2003, New York, page 177
- Veith, op cit, page 1
|
<urn:uuid:3bf9344e-3e6b-4df5-9c07-b8fc8908c2f3>
|
CC-MAIN-2013-20
|
http://hardluckasthma.blogspot.com/2011/08/1000-bcasthma-in-ancient-china-and.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.976942
| 2,143
| 3.28125
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
This 1914 fire aerial ladder truck was in service at the 1915 Panama-Pacific Exposition (world's fair) held in San Francisco to celebrate the completion of the Panama Canal.
It is a 1905 Cadillac with an original price tag of $950.00 from Detroit. It was restored and has been part of the museum for 35 years. The sign says only 4,029 of these vehicles were made and very few remain today.
Drum roll, please...
From what I read, early fire departments used baking soda and acid (vinegar) together to form carbon dioxide gas to extinguish fires; thus, the term "chemical" hose.
It was purchased for $4,345.00 in 1890, a lot of money back then, and used in the 1906 earthquake. The fire station collapsed, but the firemen dug out the engine and used it to fight fires for 2 straight days.
|
<urn:uuid:f3afbfd6-9473-46c0-a378-b1af41522c4f>
|
CC-MAIN-2013-20
|
http://www.avcr8teur.blogspot.com/2013/01/san-jose-fire-museum.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.963261
| 183
| 2.953125
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Ike Dike Proposed to Protect Texas Coast
One year after Hurricane Ike's 20-foot storm surge devastated Galveston, Texas, and surrounding areas, local governments are in the midst of making a decision about the proposed "Ike Dike" - an expansion of the existing Galveston Seawall that would cost the federal government billions of dollars.
The Ike Dike, as proposed by Texas A&M University at Galveston professor William Merrell, would extend the existing seawall by over 50 miles and add floodgates that would close before an approaching hurricane (more details). The idea is based on the Delta Works, a series of dams and barriers that protect the southwest Netherlands from storm surge and coastal flooding.
Supporters suggest that these additions would prevent future damage to Galveston Bay and the Port of Houston, protecting the region's oil production and shipping industry, in addition to Galveston Island's residents, infrastructure, and the Galveston National Laboratory, a high-security medical research facility that houses some of the most contagious diseases in the world. Given the estimated $32 billion in damages to the Houston-Galveston area already caused by Ike and the likelihood of another catastrophic hurricane and rising sea levels, the Ike Dike's $2 billion to $3 billion price tag and ten-year timeline could make the project a worthwhile investment.
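To see why supporters frame the price tag as an investment, a rough break-even comparison can be sketched. In the Python sketch below, only the $32 billion damage estimate and the upper-end $3 billion cost come from the article; the annual event probability, damage-reduction fraction, and project lifetime are illustrative assumptions, not reported figures.

```python
# Rough break-even sketch for a coastal barrier investment.
# Only the Ike damage estimate and project cost come from the article;
# the event probability, lifetime, and damage-reduction fraction are
# illustrative placeholders.

def expected_avoided_damage(annual_event_prob: float,
                            damage_per_event: float,
                            damage_reduction: float,
                            years: int) -> float:
    """Expected damages avoided over the project lifetime (no discounting)."""
    return annual_event_prob * damage_per_event * damage_reduction * years

project_cost = 3e9           # upper end of the quoted $2-3 billion estimate
ike_scale_damage = 32e9      # damages attributed to Hurricane Ike
annual_prob = 0.02           # assumed 1-in-50-year chance of a comparable surge
reduction = 0.5              # assumed fraction of surge damage the dike prevents
lifetime_years = 50          # assumed design lifetime

avoided = expected_avoided_damage(annual_prob, ike_scale_damage,
                                  reduction, lifetime_years)
print(f"Expected avoided damages: ${avoided/1e9:.1f}B vs cost ${project_cost/1e9:.1f}B")
# With these placeholder numbers the avoided damages (~$16B) exceed the cost,
# but the conclusion is sensitive to every assumption, including discounting.
```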
Opponents voice concern that the proposed storm barrier would alter the fragile Gulf of Mexico ecosystem, cause more coastal erosion and offer no protection from hurricane-force winds, which contributed to damage during Ike. They argue that the region could take alternative actions and that the Ike Dike should not be considered a final solution, but rather one step toward protecting the coast.
The questions remain: if rising sea levels and strong hurricanes remain constant threats to the Gulf Coast, should development along the shoreline and on barrier islands continue? Would the Ike Dike prove to be a lasting solution or just a band-aid to the already-injured subtropical shoreline that will be susceptible to hurricanes indefinitely?
September 9, 2009; 10:00 AM ET
|
<urn:uuid:827b927e-1157-45bc-b405-7c9f454c86b7>
|
CC-MAIN-2013-20
|
http://voices.washingtonpost.com/capitalweathergang/2009/09/ike_dike_proposes_to_protect_g.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.932778
| 616
| 2.984375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"storm surge"
],
"nature": [
"ecosystem"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
System designed for household and neighborhood power generation
RICHLAND, Wash. – Individual homes and entire neighborhoods could be powered with a new, small-scale solid oxide fuel cell system that achieves up to 57 percent efficiency, significantly higher than the 30 to 50 percent efficiencies previously reported for other solid oxide fuel cell systems of its size, according to a study published in this month's issue of Journal of Power Sources.
The smaller system, developed at the Department of Energy's Pacific Northwest National Laboratory, uses methane, the primary component of natural gas, as its fuel. The entire system was streamlined to make it more efficient and scalable by using PNNL-developed microchannel technology in combination with processes called external steam reforming and fuel recycling. PNNL's system includes fuel cell stacks developed earlier with the support of DOE's Solid State Energy Conversion Alliance.
"Solid oxide fuels cells are a promising technology for providing clean, efficient energy. But, until now, most people have focused on larger systems that produce 1 megawatt of power or more and can replace traditional power plants," said Vincent Sprenkle, a co-author on the paper and chief engineer of PNNL's solid oxide fuel cell development program. "However, this research shows that smaller solid oxide fuel cells that generate between 1 and 100 kilowatts of power are a viable option for highly efficient, localized power generation."
Sprenkle and his co-authors had community-sized power generation in mind when they started working on their solid oxide fuel cell, also known as a SOFC. The pilot system they built generates about 2 kW of electricity, or how much power a typical American home consumes. The PNNL team designed its system so it can be scaled up to produce between 100 and 250 kW, which could provide power for about 50 to 100 American homes.
Goal: Small and efficient
Knowing the advantages of smaller SOFC systems, the PNNL team wanted to design a small system that could be both more than 50 percent efficient and easily scaled up for distributed generation. To do this, the team first used a process called external steam reforming. In general, steam reforming mixes steam with the fuel, leading the two to react and create intermediate products. The intermediates, carbon monoxide and hydrogen, then react with oxygen at the fuel cell's anode. This reaction generates electricity, as well as the byproducts steam and carbon dioxide.
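For readers unfamiliar with the chemistry, the generic reactions behind this description (not listed in the release itself, and not specific to the PNNL system) are the standard methane steam-reforming and solid oxide anode reactions:
CH4 + H2O -> CO + 3 H2 (steam reforming)
H2 + O2- -> H2O + 2 e- and CO + O2- -> CO2 + 2 e- (electrochemical oxidation at the anode)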
Steam reforming has been used with fuel cells before, but the approach requires heat that, when directly exposed to the fuel cell, causes uneven temperatures on the ceramic layers that can potentially weaken and break the fuel cell. So the PNNL team opted for external steam reforming, which completes the initial reactions between steam and the fuel outside of the fuel cell.
The external steam reforming process requires a device called a heat exchanger, where a wall made of a conductive material like metal separates two gases. On one side of the wall is the hot exhaust that is expelled as a byproduct of the reaction inside the fuel cell. On the other side is a cooler gas that is heading toward the fuel cell. Heat moves from the hot gas, through the wall and into the cool incoming gas, warming it to the temperatures needed for the reaction to take place inside the fuel cell.
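As a rough illustration of the recuperation step described above, the sketch below applies a textbook effectiveness-based energy balance to a counter-flow gas-to-gas exchanger. The flow rates, temperatures, and effectiveness are made-up illustrative values, not PNNL data.

```python
# Minimal effectiveness-based model of a counter-flow gas-to-gas heat exchanger.
# All numbers are illustrative assumptions; none come from the PNNL system.

def heat_exchanger_outlets(m_hot, m_cold, cp, t_hot_in, t_cold_in, effectiveness):
    """Return (hot outlet, cold outlet) temperatures in deg C.

    Q = effectiveness * C_min * (T_hot_in - T_cold_in); an energy balance
    on each stream then gives the outlet temperatures.
    """
    c_hot, c_cold = m_hot * cp, m_cold * cp             # heat capacity rates, W/K
    c_min = min(c_hot, c_cold)
    q = effectiveness * c_min * (t_hot_in - t_cold_in)  # transferred heat, W
    return t_hot_in - q / c_hot, t_cold_in + q / c_cold

# Example: hot exhaust preheating the incoming fuel/steam mixture.
hot_out, cold_out = heat_exchanger_outlets(
    m_hot=0.002, m_cold=0.002,   # kg/s, assumed equal flows
    cp=1100.0,                   # J/(kg K), rough value for hot gas
    t_hot_in=750.0,              # deg C, assumed exhaust temperature
    t_cold_in=150.0,             # deg C, assumed incoming gas temperature
    effectiveness=0.9)           # high effectiveness, as compact designs target
print(f"hot outlet ~{hot_out:.0f} C, cold outlet ~{cold_out:.0f} C")
```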
Efficiency with micro technology
The key to the efficiency of this small SOFC system is the use of a PNNL-developed microchannel technology in the system's multiple heat exchangers. Instead of having just one wall that separates the two gases, PNNL's microchannel heat exchangers have multiple walls created by a series of tiny looping channels that are narrower than a paper clip. This increases the surface area, allowing more heat to be transferred and making the system more efficient. PNNL's microchannel heat exchanger was designed so that very little additional pressure is needed to move the gas through the turns and curves of the looping channels.
The second unique aspect of the system is that it recycles. Specifically, the system uses the exhaust, made up of steam and heat byproducts, coming from the anode to maintain the steam reforming process. This recycling means the system doesn't need an electric device that heats water to create steam. Reusing the steam, which is mixed with fuel, also means the system is able to use up some of the leftover fuel it wasn't able to consume when the fuel first moved through the fuel cell.
The combination of external steam reforming and steam recycling with the PNNL-developed microchannel heat exchangers made the team's small SOFC system extremely efficient. Together, these characteristics help the system use as little energy as possible and allow more net electricity to be produced in the end. Lab tests showed the system's net efficiency ranged from 48.2 percent at 2.2 kW to a high of 56.6 percent at 1.7 kW. The team calculates they could raise the system's efficiency to 60 percent with a few more adjustments.
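For readers who want to see how an efficiency figure like this is computed: net electrical efficiency is simply net power output divided by the chemical energy of the fuel consumed, usually on a lower-heating-value basis. The methane flow rate in the sketch below is an assumed illustrative value, chosen only to show the arithmetic; it is not a figure reported by PNNL.

```python
# Net electrical efficiency = net power out / chemical energy in (LHV basis).
# The methane flow rate below is an assumed illustrative value.

LHV_METHANE_J_PER_KG = 50.0e6   # ~50 MJ/kg, lower heating value of methane

def net_efficiency(net_power_w: float, fuel_kg_per_s: float) -> float:
    return net_power_w / (fuel_kg_per_s * LHV_METHANE_J_PER_KG)

net_power = 1700.0              # W, the operating point quoted at 56.6 % efficiency
fuel_flow = 6.0e-5              # kg/s of methane, assumed for illustration
print(f"net efficiency ~ {net_efficiency(net_power, fuel_flow):.1%}")
# 1700 W / (6.0e-5 kg/s * 50 MJ/kg) = 1700 / 3000, i.e. about 57 %
```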
The PNNL team would like to see their research translated into an SOFC power system that's used by individual homeowners or utilities.
"There still are significant efforts required to reduce the overall cost to a point where it is economical for distributed generation applications," Sprenkle explained. "However, this demonstration does provide an excellent blueprint on how to build a system that could increase electricity generation while reducing carbon emissions."
|
<urn:uuid:a1fdaa33-21e9-4ea5-ba98-baa8acf5bb16>
|
CC-MAIN-2013-20
|
http://www.labmanager.com/?articles.view/articleNo/7831/title/New-Small-Solid-Oxide-Fuel-Cell-Reaches-Record-Efficiency/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.958939
| 1,121
| 2.875
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide",
"methane"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
With the exceptions of peas and broad beans, fruit vegetables are warm-season crops, and with the exception of sweet corn and peas, all are subject to chilling injury. Fruit vegetables are not generally adaptable to long-term storage. Exceptions are the hard rind (winter) squashes and pumpkin. A useful classification for postharvest discussion of the fruit vegetables is based on the stage of maturity at harvest. This presents an overview of the general postharvest requirements and handling systems for this group of commodities.
Harvested immature:
Legumes: snap, lima, and other beans, snow pea, sugar snap and garden peas
Cucurbits: cucumber, soft rind squashes, chayote, bitter melon, luffa
Solanaceous vegetables: eggplant, peppers, tomatillo
Others such as okra and sweet corn
Harvested mature:
Cucurbits: cantaloupe, honeydew, and other muskmelons; watermelon, pumpkin, hard-rind squashes
Solanaceous vegetables: mature green and vine-ripe tomatoes, ripe peppers
The harvest index for most immature fruit vegetables is based principally on size and color. Immature soft-rind squashes, for example, may be harvested at several sizes or stages of development, depending upon market needs. Fruit that are too developed are of inferior internal quality and show undesirable color change after harvest. This also applies to other immature fruit vegetables such as cucumber and bell peppers.
The harvest index for mature fruit vegetables depends on several characteristics, and proper harvest maturity is the key to adequate shelf life and good quality of the ripened fruit. For cantaloupe, the principal harvest indices are surface color and the development of the abscission zone.
Most fruit vegetables are harvested by hand. Some harvest aids may be used, including pickup machines and conveyors for melons. Cantaloupe is also harvested with "sack" crews who empty the melons into bulk trailers. Crenshaw and other specialty melons are easily damaged and require special care in handling and transport to the packing area. Mature green tomatoes are usually hand harvested into buckets and emptied into field bins or gondolas. Almost all fresh market tomatoes grown in California are bush type, and the plants are typically harvested only once or twice. At the time of harvest, 5 to 10 percent of the tomatoes have pink and yellow color and are separated out later on the packing line as vine-ripes.
Immature fruit vegetables generally have very tender skins that are easily damaged in harvest and handling. Special care must be taken in all handling operations to prevent product damage and subsequent decay. Sweet corn, snap beans, and peas may be harvested mechanically or by hand.
Many of the mature fruit vegetables are hauled to packinghouses, storage, or loading facilities in bulk bins (hard rind squashes, peppers, pink tomatoes), gondolas (mature green tomatoes and peppers), or bulk field trailers or trucks (muskmelons, hard rind squashes).
Harvesting at night, when products are the coolest, is common for sweet corn and is gaining in use for cantaloupe. Products reach their lowest temperature near daybreak. Night harvest may reduce the time and costs of cooling products, may result in better and more uniform cooling, and helps maintain product quality. Fluorescent lights attached to mobile packing units have permitted successful night harvesting of cantaloupe in California.
The trend is increasing toward field packing of fruit vegetables. Grading, sorting, sizing, packing, and palletizing are carried out in the field. The products are then transported to a central cooling facility. Mobile packing facilities are commonly towed through the fields for cantaloupe, honeydew melon, eggplant, cucumber, summer squashes, and peppers. Field-pack operations entail much less handling of products than in packinghouses. This reduces product damage and, therefore, increases packout yield of products. In melons, for example, field packing means less rolling, dumping, and dropping and thus helps reduce the "shaker" problem, in which the seed cavity loosens from the pericarp wall. It also reduces scuffing of the net which reduces subsequent water loss. Handling costs are also reduced in field pack operations. One difficulty with field packing, however, is the need for increased supervision to maintain consistent quality in the packed product. Field packing is not used for commodities that require classification for both color and size, such as tomato.
Loaded field vehicles should be parked in shade to prevent product warming and sunburning. Products may be unloaded by hand (soft rind squashes, eggplant, some muskmelons, cucumber, watermelon), dry-dumped onto sloping, padded ramps (cantaloupe, honeydew melon, sweet peppers) or onto moving conveyor belts (tomatoes), or wet-dumped into tanks of moving water to reduce physical injury (honeydew melon, tomatoes, and peppers). Considerable mechanical damage occurs in dry-dumping operations; bruising, scratching, abrading and splitting are common examples. The water temperature in wet-dump tanks for tomatoes should be slightly warmer than the product temperature to prevent uptake of water and decay-causing organisms into the fruits. The dump tank water needs to be chlorinated. An operation may have two tanks separated by a clean water spray to improve overall handling sanitation.
Presizing. For many commodities, fruit below a certain size are eliminated manually or mechanically by a presizing belt or chain. Undersize fruit are diverted to a cull conveyor or used for processing.
Sorting or selection. The sorting process eliminates cull, overripe, misshapen, and otherwise defective fruit and separates products by color, maturity, and ripeness classes (e.g. tomato and muskmelons). Electronic color sorters are used in some tomato operations.
Grading. Fruit are sorted by quality into two or more grades according to U.S. standards, California grade standards, or a shipper's own Trade standards.
Waxing. Food grade waxes are commonly applied to cucumber, eggplant, sweet peppers, cantaloupe, and tomato, and occasionally to some summer squashes. The purpose is to replace some of the natural waxes removed in the washing and cleaning operations, to reduce water loss, and to improve appearance. Waxing may be done before or after sizing, and fungicides may be added to the wax. Application of wax and postharvest fungicides must be indicated on each shipping container. Waxing and fungicides are used only in packinghouse handling of fruit vegetables. European cucumbers are frequently shrink-wrapped rather than waxed.
Sizing. After sorting for defects and color differences, the fruit vegetables are segregated into several size categories. Sizing is done manually for many of the fruit vegetables, including the legumes, soft and hard rind squashes, cucumber, eggplant, chili peppers, okra, pumpkin, muskmelons, and watermelon. Cantaloupes may be sized by volumetric or weight sizers, or diverging roll sizers; sweet peppers are sized commonly by diverging bar sizers, and tomatoes are sized by diameter with belt sizers or by weight.
Packing. Mature green and pink tomatoes, sweet and chili peppers, okra, cucumber, and legumes are commonly weight- or volume-filled into shipping containers. All other fruit type vegetables and many of the above are place-packed into shipping containers by count, bulk bins (hard rind squashes, pumpkin, muskmelons, and watermelon) or bulk trucks (watermelon). Fruit type vegetables that are place-packed are often sized during the same operation.
Palletizing. Packed shipping containers of most fruit vegetables in large-volume operations are palletized for shipment. This is a common practice with cantaloupe, muskmelons, sweet peppers, and tomato. Except for sweet corn, the immature fruit vegetables are often handled in low volume operations, where palletizing is not common because of lack of forklifts. In these cases, the products are palletized at a centralized cooling facility or as they are loaded for transport. Palletizing is usually done after hydrocooling or package-ice cooling, but before forced-air cooling. In field-pack operations, palletizing is generally done in the field.
Various methods are used for cooling fruit vegetables. The most common methods are discussed here.
Forced-air cooling is used for beans, cantaloupe, cucumbers, muskmelons, peas, peppers, soft rind squashes, and tomato. Forced-air evaporative cooling is used to a limited extent on chilling-sensitive commodities such as squashes, peppers, eggplant, and cherry tomato.
Hydrocooling is used before grading, sizing, and packing of beans, cantaloupe, sweet corn, and okra. Sorting of defective products is done both before and after cooling. Hydrocooling cycles are rarely long enough during hot weather. The need to maintain a continuous, adequate supply of cantaloupes to the packers often results in the melons being incompletely cooled. This can be remedied if, after packing and palletizing, enough time is allowed in the cold room to cool the product to recommended temperatures before loading for transport to markets.
Package icing and liquid-icing are used to a limited extent for cooling cantaloupe and routinely as a supplement to hydrocooling for sweet corn.
Temporary cold storage. In large-volume operations, most fruit vegetables are placed in cold storage rooms after cooling and before shipment. Cold rooms are less used in small farm operations; the products are often transported to central cooperatively owned or distributor-owned facilities for cooling and short-term storage.
Loading for transport. Some tomatoes, cantaloupe, and other muskmelons are shipped in refrigerated railcars, but most fruit vegetables are shipped in refrigerated trucks or container vans. Except for the major volume products such as cantaloupe and tomato, most are shipped in mixed loads, sometimes with ethylene-sensitive commodities. Among the immature fruit type vegetables, products such as cucumber, legumes, bitter melon, and eggplant are sensitive to ethylene exposure. Among the mature fruit types, watermelon is detrimentally affected by ethylene, resulting in softening of the whole fruit, flesh mealiness, and rind separation.
For uniform and controlled ripening, ethylene is often applied to mature green tomatoes and sometimes to honeydew, casaba, and Crenshaw melons. Ethylene treatments may be done at the shipping point or the destination, although final fruit quality is generally considered best if the treatment is applied at the shipping point soon after harvest. Satisfactory ripening occurs at 12.5° to 25°C (55° to 77°F), the higher the temperature, the faster the ripening (table 29.3). Above 30°C (86°F), red color development of tomato is inhibited. An ethylene concentration of about 100 ppm is commonly used. Honeydew melons (usually class 12 melons) are sometimes held in ethylene up to 24 hours; tomatoes are usually held at 20°C (68°F) and treated for up to 3 days.
Tomatoes may be ethylene-treated before or after packing, but most are treated after packing. An advantage of treating before packing is that the warmer conditions favor development of any decay-causing pathogens on the fruit, so infected fruit can be eliminated before final packout. Packing after ethylene treatment also permits a more uniform packout. Because most of the mature green tomatoes produced in California are packed and then treated with ethylene, "checkerboarding" may still occur and make a repack operation necessary.
Modified atmospheres are seldom used commercially for these commodities, although shipments of melons and tomato under modified atmospheres are being tested for long-distance markets. Consumer packaging of vine-ripe tomatoes may also involve the use of modified atmospheres. For tomatoes held at recommended temperatures, oxygen levels of 3 to 5 percent slow ripening, with carbon dioxide levels held below 5 percent to avoid injury. Muskmelons have been less studied, but recommended atmospheres under normal storage conditions are 3 to 5 percent oxygen and 10 to 20 percent carbon dioxide.
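A minimal sketch of how these modified-atmosphere targets might be encoded as a check is shown below. The gas ranges are transcribed from the paragraph above; the commodity keys and the function itself are illustrative, not part of any cited guideline.

```python
# Illustrative check of a measured gas mixture against the modified-atmosphere
# ranges quoted above (percent O2, percent CO2). Ranges come from the text;
# the dictionary keys and helper function are assumptions for illustration.

MA_RANGES = {
    "tomato":    {"o2": (3.0, 5.0), "co2": (0.0, 5.0)},    # CO2 kept below 5 %
    "muskmelon": {"o2": (3.0, 5.0), "co2": (10.0, 20.0)},
}

def within_ma_range(commodity: str, o2_pct: float, co2_pct: float) -> bool:
    rng = MA_RANGES[commodity]
    o2_ok = rng["o2"][0] <= o2_pct <= rng["o2"][1]
    co2_ok = rng["co2"][0] <= co2_pct <= rng["co2"][1]
    return o2_ok and co2_ok

print(within_ma_range("tomato", o2_pct=4.0, co2_pct=3.0))      # True
print(within_ma_range("muskmelon", o2_pct=4.0, co2_pct=5.0))   # False, CO2 too low
```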
Recommended storage/transit conditions
For mature fruit type vegetables, temperature can effectively control the rate of ripening. Most mature-harvested fruit vegetables are sensitive to chilling injury when held below the recommended storage temperature. Chilling injury is cumulative, and its severity depends on the temperature and the duration of exposure. In the case of tomato, exposure to chilling temperatures below 10°C (50° F) results in lack of color development, decreased flavor, and increased decay.
The optimum temperatures for short-term storage and transport are:
Mature green tomatoes, pumpkin, and hard rind squashes: 12.5° to 15° C (55° to 60° F)
Partially to fully ripe tomatoes, muskmelons (except cantaloupe): 10° to 12.5° C (50° to 55° F).
Honeydew melons that are ripening naturally or have been induced with ethylene are best held at 5° to 7.5° C (41° to 45° F).
Watermelon: 7° to 10° C (45° to 50° F)
Cantaloupe: 2.5° to 5° C (36° to 41° F)
The optimum relative humidity range is 85 to 90 percent for tomato and muskmelons (except cantaloupe), 90 to 95 percent for cantaloupe, and 60 to 70 percent for pumpkin and hard rind squashes.
Immature fruit vegetables
All fruit vegetables harvested immature are sensitive to chilling injury. Exceptions are the peas and sweet corn, which are stored best at 0° C (32° F) and 95 percent RH.
The optimum product temperatures with RH at 90 to 95 percent for short-term storage and transport are as follows:
Eggplant, cucumber, soft rind squashes, okra: 10° to 12.5° C (50° to 55° F)
Peppers: 5° to 7° C (41° to 45° F)
Lima beans, snap beans: 5° to 8° C (41° to 46° F)
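The storage recommendations above lend themselves to a simple lookup. The sketch below encodes a few of the listed commodities and flags temperatures below the recommended range as a chilling-injury risk. The temperature and humidity values are transcribed from the lists above; the data structure and function names are illustrative.

```python
# Recommended short-term storage conditions transcribed from the lists above
# (deg C and percent relative humidity); the helper itself is illustrative.

STORAGE = {
    "cantaloupe":          {"temp_c": (2.5, 5.0),   "rh_pct": (90, 95)},
    "mature green tomato": {"temp_c": (12.5, 15.0), "rh_pct": (85, 90)},
    "cucumber":            {"temp_c": (10.0, 12.5), "rh_pct": (90, 95)},
    "pepper":              {"temp_c": (5.0, 7.0),   "rh_pct": (90, 95)},
    "sweet corn":          {"temp_c": (0.0, 0.0),   "rh_pct": (95, 95)},
}

def check_storage(commodity: str, temp_c: float) -> str:
    low, high = STORAGE[commodity]["temp_c"]
    if temp_c < low:
        return "below range: possible chilling injury for chilling-sensitive produce"
    if temp_c > high:
        return "above range: faster ripening and quality loss likely"
    return "within recommended range"

print(check_storage("cucumber", 7.0))    # below range -> chilling risk
print(check_storage("cantaloupe", 3.0))  # within recommended range
```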
Division of Agriculture and Natural Resources, University of California.
All contents copyright © 2011 The Regents of the University of California. All Rights Reserved.
Development funding from the University of California and USDA, CSREES.
Please e-mail your comments to: [email protected]
Last updated: February 15, 2012 | Website design by Lauri Brandeberry
|
<urn:uuid:4a6c64f0-5e25-47bc-b259-5553cb046919>
|
CC-MAIN-2013-20
|
http://vric.ucdavis.edu/postharvest/fruitveg.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.919746
| 3,102
| 3.265625
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
May 16, 2002 A region in the western tropical Pacific Ocean may help scientists understand how Venus lost all of its water and became a 900-degree inferno. The study of this local phenomenon by NASA scientists also should help researchers understand what conditions on Earth might lead to a similar fate here.
The phenomenon, called the ‘runaway greenhouse’ effect, occurs when a planet absorbs more energy from the sun than it can radiate back to space. Under these circumstances, the hotter the surface temperature gets, the faster it warms up. Scientists detect the signature of a runaway greenhouse when planetary heat loss begins to drop as surface temperature rises. Only one area on Earth – the western Pacific ‘warm pool’ just northeast of Australia – exhibits this signature. Because the warm pool covers only a small fraction of the Earth’s surface, the Earth as a whole never actually ‘runs away.’ However, scientists believe Venus did experience a global runaway greenhouse effect about 3 billion to 4 billion years ago.
"Soon after the planets were formed 4.5 billion years ago, Earth, Venus and Mars probably all had water. How did Earth manage to hold onto all of its water, while Venus apparently lost all of its water?" asked Maura Rabbette, Earth and planetary scientist at NASA Ames Research Center in California’s Silicon Valley. "We have extensive earth science data to help address that question."
Rabbette and her co-investigators from NASA Ames, Christopher McKay, Peter Pilewskie and Richard Young, used atmospheric conditions above the Pacific Ocean, including data recorded by NASA’s Earth Observing System of satellites, to create a computer model of the runaway greenhouse effect. They determined that water vapor high in the atmosphere produced the local signature of a runaway greenhouse.
At sea surface temperatures above 80 F (27 C), evaporation loads the atmosphere with a critical amount of water vapor, one of the most efficient greenhouse gases. Water vapor allows solar radiation from the sun to pass through, but it absorbs a large portion of the infrared radiation coming from the Earth. If enough water vapor enters the troposphere, the weather layer of the atmosphere, it will trap thermal energy coming from the Earth, increasing the sea surface temperature even further.
The effect should result in a chain reaction loop where sea surface temperature increases, leading to increased atmospheric water vapor that leads to more trapped thermal energy. This would cause the temperature increase to ‘run away,’ causing more and more water loss through evaporation from the ocean. Luckily for Earth, sea surface temperatures never reach more than about 87 F (30.5 C), and so the runaway phenomenon does not occur.
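The "chain reaction loop" described above can be illustrated with standard climate-feedback bookkeeping: an initial warming dT0 is amplified by a feedback factor f, giving a total response dT0 * (1 + f + f^2 + ...) = dT0 / (1 - f), which diverges, or "runs away," as f approaches 1. The sketch below is only a toy illustration of that arithmetic, not the satellite-data-driven model the Ames team used.

```python
# Toy illustration of feedback amplification: each increment of warming adds
# water vapor, which traps more heat and produces a further fraction f of
# warming. The total response is the geometric series dT0 * (1 + f + f^2 + ...).

def amplified_warming(dt0: float, f: float, rounds: int = 200) -> float:
    """Sum the feedback series; it diverges (runs away) when f >= 1."""
    total, increment = 0.0, dt0
    for _ in range(rounds):
        total += increment
        increment *= f          # next round of vapor-driven warming
    return total

for f in (0.3, 0.6, 0.9, 1.05):
    result = amplified_warming(dt0=1.0, f=f)
    label = "stable" if f < 1 else "runaway (keeps growing)"
    print(f"f = {f:4.2f}: total warming ~ {result:10.1f} deg  ({label})")
# For f < 1 the series converges to dt0 / (1 - f); for f >= 1 it grows
# without bound, which is the signature of a runaway greenhouse.
```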
"It’s very intriguing. What is limiting this effect over the warm pool of the Pacific?" asked Young, a planetary scientist. He suggests that cloud cover may affect how much energy reaches or escapes Earth, or that the ocean and atmosphere may transport trapped energy away from the local hotspot. "If we can model the outgoing energy flux, then maybe we can begin to understand what limits sea surface temperature on Earth," he said. The Ames researchers are not the first to study the phenomenon, but no consensus has been reached regarding the energy turnover or the limitation of sea surface temperature.
Rabbette analyzed clear-sky data above the tropical Pacific from March 2000 to July 2001. She determined that water vapor above 5 kilometers (3 miles) altitude in the atmosphere contributes significantly to the runaway greenhouse signature. She found that at 9 kilometers (5.6 miles) above the Pacific warm pool, the relative humidity in the atmosphere can be greater than 70 percent - more than three times the normal range. In nearby regions of the Pacific where the sea surface temperature is just a few degrees cooler, the atmospheric relative humidity is only 20 percent. These drier regions of the neighboring atmosphere may contribute to stabilizing the local runaway greenhouse effect, Rabbette said.
It is important to note that the Ames team uses real climate information such as relative humidity and temperature–not hypothetical numbers–in the Moderate Resolution Atmospheric Radiative Transfer, or MODTRAN, modeling program. The program calculates how much energy escapes back to space from the top of Earth’s atmosphere. The researchers plan to experiment with the model to test the runaway greenhouse signature’s sensitivity to climate conditions. By varying the abundance of other greenhouse gases such as carbon dioxide and by adding clouds in the model, they will see the overall effect on the outgoing energy.
The model may help researchers uncover why Venus experienced a complete runaway greenhouse and lost its water over a period of several hundred million to a billion years. The research may also help determine which planets in the so-called ‘habitable zone’ of a solar system might lack water, an essential ingredient for life as we know it.
|
<urn:uuid:3fb482ac-1425-4f4f-86d5-2eff0e862e7f>
|
CC-MAIN-2013-20
|
http://www.sciencedaily.com/releases/2002/05/020516080752.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.909667
| 1,032
| 4.1875
| 4
|
[
"climate"
] |
{
"climate": [
"carbon dioxide"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
8 A Strategic Research Agenda for Desalination
As noted in Chapter 3, desalination is likely to have a niche in the water management portfolio of the future, although the significance of this niche cannot be definitively determined at this time. The potential for desalination to meet anticipated water demands in the United States is not constrained by the source water resources or the capabilities of current technology, but instead it is constrained by financial, social, and environmental factors. Over the past 50 years the state of desalination technology has advanced substantially, and improvements in energy recovery and declining membrane material costs have made brackish water and seawater desalination a more reasonable option for some communities. However, desalination remains a higher-cost alternative for water supply in many communities, and concerns about potential environmental impacts continue to limit the application of desalination technology in the United States. For inland desalination facilities, there are few, if any, cost-effective environmentally sustainable concentrate management technologies. Meanwhile, as noted in Chapter 2, there is no integrated and strategic direction to current federal desalination research and development efforts to help address these concerns. In this chapter, long-term research goals are outlined for advancing desalination technology and improving the ability of desalination to address U.S. water supply needs. A strategic national research agenda is then presented to address these goals. This research agenda is broadly conceived and includes research that could be appropriately funded and conducted in either the public or private sectors. The committee recognizes that research cannot address all barriers to increased application of desalination technology in regions facing water scarcity concerns; therefore, practical implementation issues are discussed separately in Chapter 7. Recommendations related to implementing the proposed research agenda are also provided in this chapter.
LONG-TERM RESEARCH GOALS
Based on the committee’s analyses of the state of desalination technology, potential environmental impacts, desalination costs, and implementation issues in the United States (see Chapters 4-7), the committee developed two overarching long-term goals for further research in desalination:
1. Understand the environmental impacts of desalination and develop approaches to minimize these impacts relative to other water supply alternatives, and
2. Develop approaches to lower the financial costs of desalination so that it is an attractive option relative to other alternatives in locations where traditional sources of water are inadequate.
Understanding the potential environmental impacts of desalination in both inland and coastal communities and developing approaches to mitigate these impacts relative to other alternatives are essential to the future of desalination in the United States. The environmental impacts of both source water intakes and concentrate discharge remain poorly understood. Although the impacts of coastal desalination are suspected to be less than those of other water supply alternatives, the uncertainty about potential site-specific impacts and their mitigation are large barriers to the application of coastal desalination in the United States. This uncertainty leads to stakeholder disagreements and a lengthy and costly planning and permitting process. For inland desalination, uncertainties remain about the sustainability of brackish groundwater resources and the environmental impacts from concentrate discharge to surface waters. Without rigorous scientific research to identify specific potential environmental impacts (or a lack of impacts), planners cannot assess the feasibility of desalination at a site or determine what additional mitigation steps are needed. Once potential impacts are clearly understood, research can be focused on developing approaches to minimize these impacts.
The second goal focuses on the cost of desalination relative to the cost of other water supply alternatives. At present, costs are already low enough to make desalination an attractive option for some communities, especially where concentrate management costs are modest. In fact, desalination plants are being studied or implemented in at least 30 municipalities nationwide (GWI, 2007). The economic costs of desalination, however, as well as the costs of water supply alternatives, are locally variable. Costs are influenced by factors such as source water quality, siting considerations, potential environmental impacts, local regulations and permitting requirements, and available concentrate management options.
Desalination remains a higher-cost alternative for many locations, and increasing awareness of potential environmental impacts is raising the costs of permitting and intake and outfall configurations in the United States. Inland communities considering brackish groundwater desalination may soon face more restrictions on surface water discharge and, therefore, will have fewer low-cost alternatives for concentrate management. Meanwhile, the future costs of energy are uncertain. If the total costs of desalination (including environmental costs) were reduced relative to other alternatives, desalination technology would become an attractive alternative to help address local water supply needs.
STRATEGIC DESALINATION RESEARCH AGENDA
The committee identified research topics as part of a strategic agenda to address the two long-term research goals articulated earlier. This agenda is driven by determination of what is necessary to make desalination a competitive option among other water supply alternatives. The agenda is broadly conceived, including research topics of clear interest to the public sector—and therefore of interest for federal funding—and research that might be most appropriately funded by private industry. The suggested research areas are described in detail below and are summarized in Box 8-1. Specific recommendations on the roles of federal and nonfederal organizations in funding the agenda are described in an upcoming section.
BOX 8-1 Priority Research Areas
The committee has identified priority research areas to help make desalination a competitive option among water supply alternatives for communities facing water shortages. These research areas, which are described in more detail in the body of the chapter, are summarized here. The highest priority topics are shown in bold. Some of this research may be most appropriately supported by the private sector. The research topics for which the federal government should have an interest—where the benefits are widespread and where no private-sector entities are willing to make the investments and assume the risk—are marked with asterisks.
GOAL 1. Understand the environmental impacts of desalination and develop approaches to minimize these impacts relative to other water supply alternatives
- Assess environmental impacts of desalination intake and concentrate management approaches**
- Conduct field studies to assess environmental impacts of brackish groundwater development**
- Develop protocols and conduct field studies to assess the impacts of concentrate management approaches in inland and coastal settings**
- Develop laboratory protocols for long-term toxicity testing of whole effluent to assess long-term impacts of concentrate on aquatic life**
- Assess the environmental fate and bioaccumulation potential of desalination-related contaminants**
- Develop improved intake methods at coastal facilities to minimize impingement of larger organisms and entrainment of smaller ones**
- Assess the quantity and distribution of brackish water resources nationwide**
- Analyze the human health impacts of boron, considering other sources of boron exposure, to expedite water-quality guidance for desalination process design**
GOAL 2. Develop approaches to lower the financial costs of desalination so that it is an attractive option relative to other alternatives in locations where traditional sources of water are inadequate
- Improve pretreatment for membrane desalination
- Develop more robust, cost-effective pretreatment processes
- Reduce chemical requirements for pretreatment
- Improve membrane system performance
- Develop high-permeability, fouling-resistant, high-rejection, oxidant-resistant membranes
- Optimize membrane system design
- Develop lower-cost, corrosion-resistant materials of construction
- Develop ion-selective processes for brackish water
- Develop hybrid desalination processes to increase recovery
- Improve existing desalination approaches to reduce primary energy use
- Develop improved energy recovery technologies and techniques for desalination
- Research configurations and applications for desalination to utilize low-grade or waste heat**
- Understand the impact of energy pricing on desalination technology over time**
- Investigate approaches for integrating renewable energy with desalination**
- Develop novel approaches or processes to desalinate water in a way that reduces primary energy use**
GOAL 1 and 2 Crosscuts
- Develop cost-effective approaches for concentrate management that minimize potential environmental impacts**
Research on Environmental Impacts
The following research topics address Goal 1 to understand the environmental impacts of desalination and develop approaches to minimize those impacts relative to other water supply alternatives.
Assess environmental impacts of desalination intake and concentrate management approaches
As discussed in Chapter 5, the environmental impacts of desalination source water intake and concentrate management approaches are not well understood. Source water intakes for coastal desalination can create entrainment concerns with small organisms and impingement issues for larger organisms. For inland groundwater desalination, there are potential concerns regarding overpumping, water quality changes, and subsidence. The possible environmental impacts of concentrate management approaches range from effects on aquatic life in surface water discharges to the contamination of drinking water aquifers in poorly designed injection wells or ponds. Both site-specific studies and broad analyses of relative impacts would help communities weigh the alternatives for meeting water supply needs. The specific research needs are described as follows.
1a. Conduct field studies to assess environmental impacts of seawater intakes. Measurements and modeling of the extent of mortality of aquatic or marine organisms due to impingement and entrainment are needed. There have been numerous studies on such impacts of power plants, and extrapolation of such effects to desalination facilities should be performed.
1b. Conduct field studies to assess environmental impacts of brackish groundwater development. The general environmental interactions between wetlands, freshwater, and brackish aquifers for inland sources have not been documented under likely brackish water development scenarios. While site-specific evaluation of any location will be necessary for developing a brackish water resource, the lack of synthesized information is an impediment to the use of this resource for smaller communities with limited resources.
1c. Develop protocols and conduct field studies to assess the impacts of concentrate management approaches in inland and coastal settings. Comprehensive studies analyzing impacts of concentrate discharge at marine, estuarine, and inland desalination locations are needed.
Adequate site-specific baseline studies on potential biological and ecological effects are necessary prior to the development of desalination facilities because biological communities in different geographic areas will have differential sensitivity, but a comprehensive synthesis would be valuable once several in-depth studies have been conducted. Protocols should be developed to define the baseline and operational monitoring, reference sites, lengths of transects, and sampling frequencies. Planners would benefit from clear guidance on appropriate monitoring and assessment protocols. Environmental data should be collected for at least 1 year in the area of the proposed facility before a desalination plant with surface water concentrate discharge comes online so that sufficient baseline data on the ecosystem are available with which to compare postoperating conditions. Once a plant is in operation, monitoring of the ecological communities (especially the benthic community) receiving the concentrate should be performed periodically for at least 2 years at multiple distances from the outflow pipe and compared to reference sites.
For inland settings, additional regional hydrogeology research is needed on the distribution, thickness, and hydraulic properties of formations that could be used for disposal of concentrate via deep-well injection. Much information is already available about the potential for deep-well injection in states such as Florida and Texas, although suitable geologic conditions may exist in other states as well. Inventories of industrial and commercial brine-disposal wells and producing and abandoned oil fields should be synthesized and used to develop a suitable protocol for further hydrogeological investigations, as appropriate. This research would provide valuable assistance to small communities that typically do not have the resources available to support extensive hydrogeological investigations.
1d. Develop laboratory protocols for long-term toxicity testing of whole effluent to assess long-term impacts of concentrate on aquatic life. Standard acute toxicity tests as defined by the U.S. Environmental Protection Agency (EPA) are generally 96 hours in duration and use larval or juvenile stages of certain fish and invertebrate species with a series of effluent dilutions and a control. The end point is whether the test organisms survive or not. Chronic tests, according to EPA, are typically 7 days in duration when using larval stages of fish and invertebrate species, and the end points of the tests are sublethal, such as growth reduction. Typical chronic toxicity protocols were designed for testing municipal or industrial wastewater treatment plant effluent, which typically contains higher levels of toxic chemicals than the concentrate from desalination plants. To assess the impacts of desalination effluent, a protocol should be developed to analyze the longer-term effects (over whole life cycles) on organisms that live in the vicinity of desalination plants (as opposed
to the standard species used in EPA-required toxicity testing). These laboratory-based tests should then be used to examine the impacts of whole effluent (and various dilutions) from different desalination plants on a variety of different taxa at numerous representative sites from key ecological regions.
1e. Assess the environmental fate and bioaccumulation potential of desalination-related contaminants. Desalination concentrate contains more than just salts and may include various chemicals that are used in pretreatment and membrane cleaning, antiscaling and antifoulant additives, and metals that may leach from corrosion. Some of these chemicals (e.g., antifoulants, copper leached from older thermal desalination plants) or chemical by-products (e.g., trihalomethanes produced as a result of pretreatment with chlorine) are likely to bioaccumulate in organisms. Investigations into the loading and environmental fate of desalination-related chemicals should be included in modeling and monitoring programs. The degree to which various chemicals biodegrade or accumulate in sediments should also be investigated. High priority should be given to polymer antiscalants, such as polycarbonic acids and polyphosphate, which may increase primary productivity. Corrosion-related metals and disinfection by-products should also be investigated. In conjunction with the field studies described earlier, representative species, preferably benthic infauna along the transects and from the reference (control) site, should be analyzed for bioaccumulative contaminants. Because little is known about the potential of some other desalination chemicals that can be discharged in concentrate to bioaccumulate (e.g., polyphosphate, polycarbonic acid, polyacrylic acid, polymaleic acid), research should be conducted into their toxicity and bioaccumulation potential.
Develop improved intake methods at coastal facilities to minimize impingement of larger organisms and entrainment of smaller ones
Although intake and screen technology is rapidly developing, continued research and development is needed in the area of seawater intakes to develop cost-effective approaches that minimize the impacts of impingement and entrainment for coastal desalination facilities. Current technology development has focused on subsurface intakes and advanced screens or curtains, and these recent developments should be assessed to determine the costs and benefits of the various approaches. Other innovative concepts could also be considered that might deter marine life from entering intakes.
Assess the quantity and distribution of brackish water resources nationwide
Sustainable development of inland brackish water resources requires maps and synthesized information on total dissolved solids of the groundwater, types of dominant solutes (e.g., NaCl, CaSO4), thickness, and depth to brackish water. The only national map of brackish water resources available (Feth, 1965; Figure 1-1) simply shows depth to saline water. Newer and better solute chemistry data collected over the past 40 years exist in the files of private, state, and federal offices but are not generally organized for use in brackish water resources investigations. Using the aforementioned information, basin analyses, analogous to the U.S. Geological Survey Regional Aquifer System Analysis program for freshwater, could be developed, emphasizing regions facing near-term water scarcity concerns. These brackish water resource investigations could also be conducted at the state level. The data, once synthesized, could be utilized for desalination planning as well as for other water resources and commercial development scenarios.
Analyze the human health impacts of boron, considering other sources of boron exposure, to expedite water-quality guidance for desalination process design
Typical single-pass reverse osmosis (RO) desalination processes do not remove all the boron in seawater; thus, boron can be found at milligram-per-liter levels in the finished water. Boron can be controlled through treatment optimization, but that treatment has an impact on the cost of desalination. A range of water quality levels (0.5 to 1.4 mg/L) have been proposed as protective of public health based on different assumptions in the calculations. Because of the low occurrence of boron in most groundwater and surface water, the EPA has decided not to develop a maximum contaminant level for boron and has encouraged affected states to issue guidance or regulations as appropriate (see Chapter 5). Additional analysis of existing boron toxicity data is needed, considering other possible sources of boron exposure in the United States, to support guidance for desalination process design that will be suitably protective of human health.
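As a back-of-the-envelope illustration of why single-pass RO leaves boron at milligram-per-liter levels, the sketch below applies a simple rejection mass balance. The seawater boron concentration and rejection values are typical literature figures used here only as assumptions; they are not numbers from this report.

```python
# Simple rejection mass balance for boron through a single RO pass:
# permeate concentration ~ feed concentration * (1 - observed rejection).
# Feed concentration and rejection values are assumed typical figures.

def permeate_conc(feed_mg_per_l: float, rejection: float) -> float:
    return feed_mg_per_l * (1.0 - rejection)

seawater_boron = 4.5        # mg/L, roughly typical open-ocean value (assumption)
for rejection in (0.75, 0.85, 0.92):
    boron_out = permeate_conc(seawater_boron, rejection)
    status = "meets" if boron_out <= 1.4 else "exceeds"
    print(f"rejection {rejection:.0%}: permeate ~{boron_out:.2f} mg/L "
          f"({status} the 1.4 mg/L end of the proposed range)")
# At 75 % rejection the permeate (~1.1 mg/L) sits near the upper end of the
# proposed 0.5-1.4 mg/L range and well above its lower end, which is why
# additional treatment steps are sometimes needed to control boron.
```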
Research to Lower the Costs of Desalination
The following research topics address Goal 2 to develop approaches to lower the costs of desalination so that it is an attractive option relative to other alternatives in locations where traditional sources of water are inadequate. As a broadly conceived agenda, some of this research may be most appropriately supported by the private sector. The appropriate roles of governmental and nongovernmental entities to fund the research agenda are discussed later in the chapter.
Improve pretreatment for membrane desalination
Pretreatment is necessary to remove potential foulants from the source water, thereby ensuring sustainable operation of the RO membranes at high product water flux and salt rejection. Research to improve the pretreatment process is needed that would develop alternative, cost-effective approaches.
5a. Develop more robust, cost-effective pretreatment processes. Membrane fouling is one of the most problematic issues facing seawater desalination. Forms of fouling common with RO membranes are organic fouling, scaling, colloidal fouling, and biofouling. All forms of fouling are caused by interactions between the foulant and the membrane surface. Improved pretreatment that minimizes these interactions will reduce irreversible membrane fouling. Alteration of solution characteristics can improve the solubility of the foulants, preventing their precipitation or interaction with the membrane surface. Such alteration could be chemical, electrochemical, or physical in nature. Membranes such as microfiltration (MF) and ultrafiltration (UF) have several advantages over traditional pretreatment (e.g., conventional sand filtration) because they have a smaller footprint, are more efficient in removing smaller foulants, and provide a more stable influent to the RO membranes. Additional potential benefits of MF or UF pretreatment are increased flux, increased recovery, longer membrane life, and decreased cleaning frequency. More research is necessary in order to optimize the pretreatment membranes for more effective removal of foulants to the RO system, to reduce the fouling of the pretreatment membranes, and to improve configuration of the pretreatment membranes to maximize cost reduction.
5b. Reduce chemical requirements for pretreatment. Antiscalants, coagulants, and oxidants (such as chlorine) are common chemicals
Desalination: A National Perspective applied in the pretreatment steps for RO membranes. Although these chemicals are added to reduce fouling, they add to the operational costs, can reduce the operating life of membranes, and have to be disposed of properly or they can adversely impact aquatic life (see Chapter 5). Antiscalants may also enhance biofouling, so alternative formulations or approaches should be examined. Research is needed on alternative formulations or approaches (including membrane pretreatment) to reduce the chemical requirements of the pretreatment process, both to reduce overall cost and to decrease the environmental impacts of desalination. Improve membrane system performance Sustainable operation of the RO membranes at the designed product water flux and salt rejection is a key to the reduction of desalination process costs. In addition to effective pretreatment, research to optimize the sustained performance of the RO membrane system is needed. 6a. Develop high-permeability, fouling-resistant, high-rejection, oxidant-resistant membranes. New membrane designs could reduce the treatment costs of desalination by improving membrane permeability and salt rejection while increasing resistance to fouling and membrane oxidation. Current membrane research to reduce fouling includes altering the surface charge, increasing hydrophilicity, adding polymers as a barrier to fouling, and decreasing surface roughness. Oxidant-resistant membranes enable feedwater to maintain an oxidant residual that will reduce membrane fouling due to biological growth. Current state-of-the-art thin-film composite desalination membranes are polyamide based and therefore are vulnerable to damage by chlorine or other oxidants. Thus, when an oxidant such as chlorine is added to reduce biofouling, dechlorination is necessary to prevent structural damage. Additionally, trace concentrations of chlorine may be present in some feedwaters. Cellulose-derivative RO membranes have much higher chlorine tolerance; however, these membranes have a much lower permeability than thin-film composite membranes and operate under a narrower pH range. Therefore, there is a need to increase the oxidant tolerance of the higher-permeability membranes. Lower risk of premature membrane replacement equates to overall lower operating costs. Past efforts to synthesize RO membranes with high permeability often resulted in reduced rejection and selectivity. There is a need to develop RO membranes with high permeability without sacrificing selectivity or rejection efficiency. Recent research on utilizing nanomaterials, such as carbon nanotubes, as a separation barrier suggest the possibility
Desalination: A National Perspective of obtaining water fluxes much higher than that of traditional polymeric membranes. The development of membranes that are more resistant to degradation from exposure to cleaning chemicals will extend the useful life of a membrane module. The ability to clean membranes more frequently can also decrease energy usage because membrane fouling results in higher differential pressure loss through the modules. By extending the life of membrane modules, the operating and maintenance cost will be reduced by the associated reduction in membrane replacements required. 6b. Optimize membrane system design. With the development of high-flux membranes and larger-diameter membrane modules, new approaches for optimal RO system design are needed to avoid operation under thermodynamic restriction (see Chapter 4) and to ensure equal distribution of flux between the leading and tail elements of the RO system. The key variables for the system design will involve the choice of optimal pressure, the number of stages, and number and size of membrane elements at each stage. An optimal system configuration may also involve hybrid designs where one type of membrane (e.g., intermediate flux, highly fouling-resistant) is used in the leading elements followed by high-flux membranes in the subsequent elements. Fouling can be mitigated by maintaining high crossflow velocity; thus, fouling-resistant membranes may be better served in the downstream positions where lower crossflow velocity is incurred. Thus, additional engineering research on membrane system design is needed to optimize performance with the objective of reducing costs. 6c. Develop lower-cost, corrosion-resistant materials of construction. The duration of equipment life in a desalination plant directly relates to the total costs of the project. Saline and brackish water plants are considered to be a corrosive environment due to the high levels of salts in the raw water. The development and utilization of corrosion-resistant materials will minimize the frequency of equipment or appurtenance replacement, which can significantly reduce the total project costs. 6d. Develop ion-selective processes for brackish water. Some slightly brackish waters could be made potable simply though specific removal of certain contaminants, such as nitrate or arsenite, while removing other ions such as sodium, chloride, and bicarbonate at a lower rate. High removal rates of all salts are not necessary for such waters. Ion-specific separation processes, such as an ion-selective membrane or a selective ion-exchange resin, should be able to produce potable water at much lower energy costs than those processes that fully desalinate the
Desalination: A National Perspective source water. Ion-selective removal would also create fewer waste materials requiring disposal. Ion-selective processes would be useful for mildly brackish groundwater sources with high levels of nitrate, uranium, radium, or arsenic. Such an ion-selective process could also be used to optimize boron removal following RO desalination of seawater. 6e. Develop hybrid desalination processes to increase recovery. Overall product water recovery in a desalination plant can be increased through the serial application of more than one desalination process. For example, an RO process could be preceded by a “tight” nanofiltration process, allowing the RO to operate at a higher recovery than it could with less aggressive pretreatment. Other options could be devised, including hybrid thermal and membrane processes to increase the overall recovery of the process. As noted in Chapter 4, the possible hybrid combinations of desalination processes are limited only by ingenuity and identification of economically viable applications. Hybridization also offers opportunities for reducing desalination production costs and expanding the flexibility of operations, especially when co-located with power plants, but hybridization also increases plant complexity and raises challenges in operation and automation. Improve existing desalination approaches to reduce primary energy use Energy is one of the largest annual costs in the desalination process. Thus, research to improve the energy efficiency of desalination technologies could make a significant contribution to reducing costs. 7a. Develop improved energy recovery technologies and techniques for desalination. Membrane desalination is an energy-intensive process compared to treatment of freshwater sources. Modern energy recovery devices operate at up to 96 percent energy recovery (see Chapter 4), although these efficiencies are lower at average operating conditions. The energy recovery method in most common use today is the energy recovery (or Pelton) turbine, which achieves about 87 percent efficiency. Many modern plants still use Pelton wheels because of the higher capital cost of isobaric devices. Thus, opportunities exist to improve recovery of energy from the desalination concentrate over a wide operating range and reduce overall energy costs. 7b. Research configurations and applications for desalination to utilize low-grade or waste heat. Industrial processes that produce waste or low-grade heat may offer opportunities to lower the operating cost of
Desalination: A National Perspective the desalination process if these heat sources are co-located with desalination facilities (see Box 4-8). Low-grade heat can be used as an energy source for desalination via commercially available thermal desalination processes. Hybrid membrane-thermal desalination approaches offer additional operational flexibility and opportunities for water-production cost savings. Research is needed to examine configurations and applications of current technologies to utilize low-grade or waste heat for desalination. 7c. Understand the impact of energy pricing on existing desalination technology over time. Energy is one of the largest components of cost for desalination, and future changes in energy pricing could significantly affect the affordability of desalination. Research is needed to examine to what extent the economic and financial feasibility of desalination may be threatened by the uncertain prospect of energy price increases in the future for typical desalination plants in the United States. This research should also examine the costs and benefits of capital investments in renewable energy sources. 7d. Investigate approaches for integrating renewable energy with desalination. Renewable energy sources could help mitigate future increases in energy costs by providing a means to stabilize energy costs for desalination facilities while also reducing the environmental impacts of water production. Research is needed to optimize the potential for coupling various renewable energy applications with desalination. Develop novel approaches or processes to desalinate water in a way that reduces primary energy use Because the energy of RO is only twice the minimum energy of desalination, even novel technologies are unlikely to create step change (>25 percent) reductions in absolute energy consumption compared to the best current technology (see, e.g., Appendix A). Instead, substantial reductions in the energy costs of desalination are more likely to come through the development of novel approaches or processes that optimize the use of low-grade heat. Several innovative desalination technologies that are the focus of ongoing research, such as forward osmosis, dewvaporation, and membrane distillation, have the capacity to use low-grade heat as an energy source. Research into the specific incorporation of waste or low-grade heat into these or other innovative processes could greatly reduce the amount of primary energy required for desalination and, thus, overall desalination costs.
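To make the statement that RO energy is only about twice the thermodynamic minimum concrete, the back-of-envelope sketch below compares a representative RO-stage energy figure with the commonly cited minimum separation energy for seawater. Both numbers (roughly 1.06 kWh/m3 minimum at about 50 percent recovery, and 2.0 to 2.5 kWh/m3 for a modern RO stage) are assumed, typical literature values rather than figures taken from this report.

```python
# Back-of-envelope check on the "RO is roughly twice the thermodynamic minimum" point.
# Both energy figures below are assumed, commonly cited values, not report data.
E_MIN = 1.06          # kWh/m^3, minimum separation energy for seawater at ~50% recovery (assumed)
E_RO_STAGE = 2.2      # kWh/m^3, representative modern RO-stage energy use (assumed)

ratio = E_RO_STAGE / E_MIN
max_savings = 1 - E_MIN / E_RO_STAGE   # savings if a process somehow reached the minimum

print(f"RO-stage energy is about {ratio:.1f}x the theoretical minimum")
print(f"An ideal process could cut RO-stage energy by at most {max_savings:.0%}")
```

With these assumptions the ratio is about two, and even a perfect process could save only about half of the RO-stage energy, which is why the report steers novel-process research toward low-grade and waste heat rather than toward step-change reductions in membrane-stage energy.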
Desalination: A National Perspective Crosscutting Research Research topics in this category benefit both Goal 1, for environmental impacts, and Goal 2, for lowering the cost of desalination. Develop cost-effective approaches for concentrate management that minimize potential environmental impacts Research objectives related to concentrate management are crosscutting, because they address both the need to understand and minimize environmental impacts and the need to reduce the total cost of desalination. For coastal concentrate management, research is needed to develop improved diffuser technologies and subsurface injection approaches and to examine their costs and benefits relative to current disposal alternatives. The high cost of inland concentrate management inhibits inland brackish water desalination. Low- to moderate-cost concentrate management alternatives (i.e., subsurface injection, land application, sewer discharge, and surface water discharge) can be limited by the salinity of the concentrate and by location and climate factors; in some scenarios all of these options may be restricted by site-specific conditions, leaving zero liquid discharge (ZLD) as the only alternative for consideration. ZLD options currently include evaporation ponds and energy-intensive processes, such as brine concentrators or crystallizers, followed by landfilling. These options have high capital or operating costs. Research to improve recovery in the desalination process and thereby minimize the initial volume of concentrate could enhance the practical viability of several concentrate management options for inland desalination. This is particularly true for the concentrate management options that are characterized by high costs per unit volume of the concentrate flow treated and for approaches that are not applicable to large concentrate flows, such as thermal evaporation or evaporation ponds. Advancements are also needed that reduce the capital costs and improve the energy efficiency of thermal evaporation processes. Conventional concentrate management options that involve simple equipment are not likely to see significant cost reductions through additional research. The reuse of high-salinity concentrates and minerals extracted from them should be further explored and developed to help mitigate environmental impacts while generating revenues that can help offset concentrate management costs. Possibilities include selective precipitation of marketable salts, irrigation of salt-tolerant crops, supplements for animal dietary needs, dust suppressants, stabilizers for road base construction, or manufacture of lightweight fire-proof building materials. Studies are
Desalination: A National Perspective necessary to determine the most feasible uses and to develop ways to prepare the appropriate product for various types of reuse. For all possible uses, site-specific limitations and local and state regulations will need to be considered. Because the transportation costs greatly affect the economics of reuse, a market analysis would also be needed to identify areas in the United States that could reasonably utilize products from desalination concentrate. Highest Priority Research Topics All of the topics identified are considered important, although three topics (1, 2, and 9 above) were deemed to be the highest priority research topics: (1) assessing the environmental impacts of desalination intake and concentrate management approaches, (2) developing improved intake methods to minimize impingement and entrainment, and (3) developing cost-effective approaches for concentrate management that minimize environmental impacts. These three research areas are considered the highest priorities because this research can help address the largest barriers (or showstoppers) to more widespread use of desalination in the United States. Uncertainties about potential environmental impacts will need to be resolved and cost-effective mitigation approaches developed if desalination is to be more widely accepted. Research to develop cost-effective approaches for concentrate management is critical to enable more widespread use of desalination technologies for inland communities. As noted in Chapter 4, the cost of concentrate management can double or triple the cost of the desalination for some inland communities. Research may also reduce the costs of desalination. Any cost improvement will help make desalination an attractive option for communities addressing water shortages. However, the committee does not view these process cost issues as the major limitation to the application of desalination in the United States today. IMPLEMENTING THE RESEARCH AGENDA In the previous section, the committee proposed a broad research agenda that, if implemented, should improve the capacity of desalination to meet future water needs in the United States by further examining and addressing its environmental impacts and reducing its costs relative to other water supply alternatives. Implementing this agenda requires federal leadership, but its success depends on participation from a range of entities, including federal, state, and local governments, nonprofit or-
Desalination: A National Perspective ganizations, and the private sector. A strategy for implementing the research agenda is suggested in the following section. This section also includes suggestions for funding the agenda and the appropriate roles of government and nongovernmental entities. Supporting the Desalination Research Agenda A federal role is appropriate for research that provides a “public good.” Specifically, the federal government should have an interest in funding research where the benefits are widespread but where no private-sector entities are willing to make the investment and assume the risks. Thus, for example, research that results in significant environmental benefits should be in the federal interest because these benefits are shared by the public at large and cannot be fully captured by any entrepreneur. Federal investment is also important where it has “national significance”—where the issues are of large-scale concern; they are more than locally, state-, or regionally specific; and the benefits accrue to a large swath of the public. Based on the aforementioned criteria, the proposed research agenda contains many topic items that should be in the federal interest (see topics marked with asterisks in Box 8-1). The research topics in support of Goal 1 (see Box 8-1) are directed at environmental issues that are largely “public good” issues. Some of the needed environmental research will, by nature, be site-specific, and purely site-specific research is not of great federal interest. Thus, there is a clear role for state and local agencies to support site-specific research. The federal government, however, should have an interest in partnering with local communities to conduct more extensive field research from which broader conclusions of environmental impacts can be drawn or which would significantly contribute to a broader meta-analysis. This meta-analysis could especially benefit small water supply systems. Also, there should be federal interest in establishing general protocols for field evaluations and chronic bioassays that could then be adapted for site-specific studies. The research needed to support the attainment of Goal 2 includes several topics that are clearly in the federal interest, as defined earlier. These include efforts to reduce prime energy use, to integrate renewable energy resources within the total energy picture and increase reliance upon them, and to understand the impacts of energy pricing on the future of desalination (see highlighted topics in Box 8-1). However, Goal 2 also includes a number of research topics that may be more appropriately funded by the private sector or nongovernmental organizations, assuming that these entities are willing to assume the risks of the research investment. Indeed, private industry already spends far more on research and
Desalination: A National Perspective development for desalination than the federal government (see Chapter 2) and is already making substantial progress in the improvement of existing membrane performance, developing better pretreatment alternatives, and developing improved energy recovery devices. To avoid duplication and to optimize available research funding, government programs should focus instead on research and development with widespread possible benefits that would otherwise go unfunded because private industry is unwilling to make the investment. Finally, the crosscutting topic to develop cost-effective methods of managing concentrates for inland communities, which impacts Goals 1 and 2, is also in the federal interest. Federal Research Funding The optimal level of federal investment in desalination research is inherently a question of public policy. Although the decision should be informed by science, it is not—at its heart—a scientific decision. However, several conclusions emerged from the committee’s analysis of current research and development funding (see Chapter 2) that suggest the importance of strategic integration of the research program. The committee concluded that there is no integrated and strategic direction to the federal desalination research and development efforts. Continuation of a federal program of research dominated by congressional earmarks and beset by competition between funding for research and funding for construction will not serve the nation well and will require the expenditure of more funds than necessary to achieve specified goals. To ensure that future federal investments in desalination research are integrated and prioritized so as to address the two major goals identified in this report, the federal government will need to develop a coordinated strategic plan that utilizes the recommendations of this report as a basis. It is beyond the committee’s scope to recommend specific plans for improving coordination among the many federal agencies that support desalination research. Instead, responsibility for developing the plan should rest with the Office of Science and Technology Policy’s (OSTP’s) National Science and Technology Council (NSTC) because “this Cabinet-level Council is the principal means within the executive branch to coordinate science and technology policy across the diverse entities that make up the Federal research and development enterprise.”1 For example, the NSTC’s Subcommittee on Water Availability and Quality has member-ship representing more than 20 federal agencies and recently released “A Strategy for Federal Science and Technology to Support Water Avail- 1 For more information, see http://www.ostp.gov/nstc/index.html.
Desalination: A National Perspective ability and Quality in the United States” (SWAQ, 2007). Representatives of the National Science Foundation, the Bureau of Reclamation, the Environmental Protection Agency, the National Oceanographic and Atmospheric Administration, the Office of Naval Research, and the Department of Energy should participate fully in the development of the strategic federal plan for desalination research and development. Five years into the implementation of this plan, the OSTP should evaluate the status of the plan, whether goals have been met, and the need for further funding. A coordinated strategic plan governing desalination research at the federal level along with effective implementation of the research plan will be the major determinants of federal research productivity in this endeavor. The committee cannot emphasize strongly enough the importance of a well-organized, well-articulated strategically directed effort. In the absence of any or all of these preconditions, federal investment will yield less than it could. Therefore, a well-developed and clearly articulated strategic research plan, as called for above, should be a precondition for any new federal appropriations. Initial federal appropriations on the order of recent spending on desalination research (total appropriations of about $25 million annually, as in fiscal years 2005 and 2006) should be sufficient to make good progress toward the overall research goals if the funding is strategically directed toward the proposed research topics as recommended in this report. Annual federal appropriations of $25 million, properly allocated, should be sufficient to have an impact in the identified priority research areas, given the context of expected state and private-sector funding. This level of federal funding is also consistent with NRC (2004a), which recommended annual appropriations of $700 million for research supporting the nation’s entire water resources research agenda. Reallocation of current spending will be necessary to address topics that are currently underfunded. If current research funding is not reallocated, the overall desalination research and development budget will need to be enhanced. Nevertheless, support for the research agenda stated here should not come at the expense of other high-priority water resource research topics, such as those identified in Confronting the Nation’s Water Problems: The Role of Research (NRC, 2004a). Environmental research should be emphasized up front in the research agenda. At least 50 percent of the federal funding for desalination research should initially be directed toward environmental research. Environmental research, including Goal 1 and the Goal 1 and 2 crosscuts, should be addressed, because these have the potential for the greatest impact in overcoming current roadblocks for desalination and making desalination an attractive water supply alternative. Research funding in support of Goal 2 should be directed strategically toward research topics that are likely to make improvements against benchmarks set by the best
Desalination: A National Perspective current technologies for desalination. The best available technologies for desalination at the time of this writing are benchmarked in Chapter 4. Research proposals should make the case as to how and to what degree the proposed research can advance the state of the art in desalination. An emphasis should be placed on energy benchmarks because reductions in energy result in overall cost savings and have environmental benefits. The majority of the federal funding directed toward Goal 2 should support projects that are in the public interest and would not otherwise be privately funded (see Box 8-1), such as some high-risk and long-term research initiatives (e.g., developing novel desalination processes that sharply reduce the primary energy use). Although private industry does make modest investments in high-risk research, it is frequently reluctant to invest in research in the earliest stage of technology creation, when there is extremely low likelihood of success even though there are large potential benefits. The effectiveness with which federal funds are spent will also depend on certain critical implementation steps, which are outlined in the following section. Proposal Announcement and Selection Based on available funding, the opportunity to announce requests for proposals exists for federal agencies, such as the Bureau of Reclamation or the National Science Foundation, or other research institutions that explicitly target one or more research objectives. The principal funding agency should announce a request for proposals as widely as possible to scientists and engineers in municipal and federal government, academia, and private industry. At present, the desalination community is relatively small, but collectively there is a great deal of expertise across the world. International desalination experts and others from related areas of research should be encouraged and given the opportunity to offer innovative research ideas that have the potential to significantly advance the field. Thus, the request for proposals should extend to federal agencies, national laboratories, other research institutions, utilities, and the private sector. Since innovation cannot be preassigned, broad solicitations for proposals should include a provision for unsolicited investigator-initiated research proposals. To achieve the objectives of the research agenda, proposals should be selected through a rigorous independent peer-review process (NRC, 2002b) irrespective of the agency issuing the request for proposals. A rotating panel of independent, qualified reviewers should be appointed based on their relevant expertise in the focal areas. The process should
Desalination: A National Perspective allow for the consideration and review of unsolicited proposals, as long as their research goals meet the overall research goals. Proposal funding should be based on the quality of the proposed work, the degree to which the proposed research can advance the state of the art in desalination or otherwise contribute toward the research goals, prior evidence of successful research, and the potential for effective publication or dissemination of the research findings. CONCLUSIONS AND RECOMMENDATIONS A strategic national research agenda has been conceived that centers around two overarching strategic goals for further research in desalination: (1) to understand the environmental impacts of desalination and develop approaches to minimize these impacts relative to other water supply alternatives and (2) to develop approaches to lower the financial costs of desalination so that it is an attractive option relative to other alternatives in locations where traditional sources of water are inadequate. A research agenda is proposed in this chapter in support of these two goals (see Box 8-1). Several recommendations for implementing the proposed research agenda follow. A coordinated strategic plan should be developed to ensure that future federal investments in desalination research are integrated and prioritized and address the two major goals identified in this report. The strategic application of federal funding for desalination research can advance the implementation of desalination technologies in areas where traditional sources of water are inadequate. Responsibility for developing the plan should rest with the OSTP, which should use the recommendations of this report as a basis for plan development. Initial federal appropriations on the order of recent spending on desalination research (total appropriations of about $25 million annually) should be sufficient to make good progress toward these goals, when complemented by ongoing nonfederal and private-sector desalination research, if the funding is directed toward the proposed research topics as recommended in this chapter. Reallocation of current federal spending will be necessary to address currently underfunded topics. If current federal research and development funding is not reallocated, new appropriations will be necessary. However, support for the research agenda stated here should not come at the expense of other high-priority water resource research topics. Five years into the implementation of this plan, the OSTP should evaluate the status of the plan, whether goals have been met, and the need for further funding.
Desalination: A National Perspective Environmental research should be emphasized up front when implementing the research agenda. Uncertainties regarding environmental impacts and ways to mitigate these impacts are one of the largest hurdles to implementation of desalination in the United States, and research in these areas has the greatest potential for enabling desalination to help meet future water needs in communities facing water shortages. This environmental research includes work to understand environmental impacts of desalination intakes and concentrate management, the development of improved intake methods to minimize impingement and entrainment, and cost-effective concentrate management technologies. Research funding in support of reducing the costs of desalination (Goal 2) should be directed strategically toward research topics that are likely to make improvements against benchmarks set by the best current technologies for desalination. Because the private sector is already making impressive strides toward Goal 2, federal research funding should emphasize the long-term and high-risk research that may not be attempted by the private sector and that is in the public interest, such as research on novel technologies that significantly reduce prime energy use. Wide dissemination of requests for proposals to meet the goals of the research agenda will benefit the quality of research achieved. Requests for proposals should extend to federal agencies, national laboratories, research institutions, utilities, other countries, and the private sector. Investigator-driven research through unsolicited proposals should be permitted throughout the proposal process. Proposals should be peer-reviewed and based on quality of research proposed, the potential contribution, prior evidence of successful research, and effective dissemination.
|
<urn:uuid:c71c1a01-b722-41d4-aac2-009fd7362c0e>
|
CC-MAIN-2013-20
|
http://www.nap.edu/openbook.php?record_id=12184&page=212
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.921676
| 9,950
| 2.546875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"renewable energy"
],
"nature": [
"ecological",
"ecosystem",
"wetlands"
]
}
|
{
"strong": 3,
"weak": 1,
"total": 4,
"decision": "accepted_strong"
}
|
Why is it that farmers competing in state and national soybean yield contests routinely grow 60 or even 85 bu. per acre yields when the national average is closer to 44?
That’s the question that motivated some ground-breaking research by Fred Below, a crop physiologist at the University of Illinois. The wise-cracking, caffeinated professor presented his top-line results to a standing-room-only audience of farmers at last week’s 2013 Commodity Classic.
There’s a lot more at stake in the answer to this question than a simple scientific inquiry. To feed an extra 2 billion mouths over the next 40 years, "we need to double the production of all grains," says Below. But at current rates of productivity gains, it will take 100 years to reach 85 bu. per acre.
"Fortunately, there’s a lot of low-hanging fruit," says Below. In fact, nearly every tactic Below tried – whether it was adding more nitrogen or planting in denser rows – improved yields. The best course of action, he says, is "intelligent intensification," academic-speak for putting more of the right stuff in the ground at the right time.
Before diving into his secrets, Below listed some "pre-requisites." He assumes that farmers are draining their fields properly, that they engage in early weed control – "no matter how satisfying it may be to let them grow then go out in the field and kill them with glyphosate" – and are maintaining proper pH levels for soils.
Here’s a rundown on the six "secrets" Below revealed, one at a time.
Because he figures everyone could guess weather was among the secrets, he lists it first. But it may belong at the top, since according to his studies it has a greater influence on yields than anything else. That’s unfortunate because it can’t be controlled.
Fluctuating weather conditions in Below’s home state of Illinois have resulted in deviations from trend-line yields of 0.7 bu. per acre over the past 20 years.
Good weather, of course, influences early planting, which can create opportunities for early vegetative growth and node formation. "I think we need to plant earlier, but it’s the weather that determines the planting date," Below says.
Even if you plant early, the professor says, you need to plant the right seed and protect it. The impact of heat and drought can be mitigated by management practices that promote strong root development, such as fertility, enhanced seed emergence, and disease control. Ethylene blocking compounds that alleviate corn stress may work on soybeans as well, he adds.
But there’s nothing like a good rain in August. That’s what saved Below’s test crops last year. "When any of my agronomic schemes don’t work," he says, "I just blame the weather, and I’m usually right."
Soil fertility, Below says, may be the most overlooked component of soybean management. "I don’t think we’re adequately fertilizing soybeans, or we’re losing a lot from corn," Below says. The popular approach of adding nitrogen during the growing season may backfire as well. "If you put in too much, in late June or early July, you can shut down the nodules. Then you wind up with worse performance."
Soybeans obtain between 25 and 75% of plant nitrogen from the soil, with the balance supplied from symbiotic fixation. "When we get to 85 bu. per acre, we’re mining 100 pounds out of the soil," he says.
Some growers, Below said, incorrectly believe that because they applied adequate fertilizer to their corn crop the preceding year, phosphorus (P) and potassium (K) fertility are less critical for soybean production.
A typical fertilizer program for soybeans, for instance, might involve fertilizing the previous year’s corn crop with an equivalent of two years of fertilizer. A 230-bu. corn yield, however, removes nearly 100 pounds of P2O5 from every acre. This doesn’t leave much for the soybean crop in the second year.
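A quick phosphorus budget makes the second-year shortfall concrete. In the sketch below, the 100 lb of P2O5 removed by a 230-bu. corn crop is the article's figure, while the two-year application rate, the soybean removal factor, and the soybean yield target are assumed, book-type values used only for illustration.

```python
# Illustrative P2O5 budget for a corn-soybean rotation fertilized only ahead of corn.
# Corn removal is the article's figure; the other values are assumptions.
two_year_p2o5_applied = 150.0   # lb/acre applied before the corn year (assumed)
corn_removal = 100.0            # lb/acre removed by a 230-bu corn crop (from the article)
soy_removal_per_bu = 0.8        # lb P2O5 removed per bushel of soybeans (assumed)
soy_yield = 60.0                # bu/acre soybean target (assumed)

left_for_soybeans = two_year_p2o5_applied - corn_removal
soy_need = soy_removal_per_bu * soy_yield
print(f"P2O5 left after corn: {left_for_soybeans:.0f} lb/acre")
print(f"P2O5 removed by a {soy_yield:.0f}-bu soybean crop: {soy_need:.0f} lb/acre")
print(f"Net balance: {left_for_soybeans - soy_need:+.0f} lb/acre")
```

Under these assumptions the rotation barely breaks even, leaving essentially nothing to build soil-test phosphorus for the following corn crop.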
Many people think potassium is the key nutrient for soybeans, Below says. While it is important, corn stover could provide half the potassium that soybeans need. Phosphorus, he says, "is the biggest problem, and it’s probably due to the way we fertilize corn." Phosphorus is quickly immobilized in soil and may not be available in sufficient quantity.
Below grew an additional 4.3 bu. per acre through better fertilization. Besides the weather, fertility is the most important variable he tested.
Below believes that farmers don’t spend enough time researching soybean varieties. He found big differences in yield, even among varieties of the same maturity. Yields varied by as much as 20 bu. per acre when grown at the same location.
Some variation was due to differences in susceptibility to diseases like white mold. But he also noted big differences between seeds with varying insect resistance. All told, Below attributes an additional 3.2 bu. per acre upside to selecting the right seed.
4. Enhance seed emergence and vigor
Through the use of fungicidal, insecticidal and plant growth regulator seed treatments, early season growth and vigor will be protected from yield robbing stresses such as disease and insects. The professor says healthier leaves result in bigger seeds, which can dramatically improve yields. "It has a huge impact," he said.
Below argues that insect and disease control is especially critical with soybeans because so many pests can limit yield or reduce grain quality.
5. Seed treatment
Soybean seeds have their highest yield potential, of course, when first planted. After that, the basic idea is to relieve as much stress on the plants as possible. Seed treatments that promote seed germination, seedling establishment, and early vigor, help in this respect.
Some fungicides and insecticides may also promote "physiological vigor," he says. "At my research site, when I had seed with a complete treatment, I saw an enormous improvement in yield." All told, the researcher attributes an increase of 2.6 bu. per acre to seed treatment.
6. Use narrow rows
Below’s research turned up a distinct advantage to planting soybeans in narrow rows of 20 inches, rather than 30 inches. This allows more space between plants within a row and increased branching. That in turn creates more opportunities for precision fertilizer placement in a corn-soybean rotation. Twenty-inch rows also improve light interception, though reduced air circulation may create more pressure for disease.
Though 15-inch soybean rows have gained in popularity in recent years, Below believes that there’s an advantage to planting both corn and soybeans in 20-inch rows. Farmers could use the same equipment for precision fertilizer placement. And by alternating crops, soybeans could take advantage of the residual fertility from the previous band.
Unfortunately, farmers who employ all these secrets wouldn’t get a yield increase equal to the combined value of doing them individually. The whole, in other words, isn’t equal to the sum of the parts. But you do get some added benefit by combining approaches.
For instance, in Below’s research, applying fungicide at the R3 growth stage improved yields by 2.1%. Applying insecticide at R3 resulted in a 3.7% yield increase. But doing both only improved yields by 3.8%.
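The sketch below restates that example numerically. The individual and combined gains are the article's figures; the "naive" expectations are simply the additive and multiplicative ways one might be tempted to stack them.

```python
# Combined treatment gains are less than the stacked individual gains.
# Percentages are from the article; the expectations are naive stacking rules.
fungicide_gain = 0.021      # R3 fungicide alone
insecticide_gain = 0.037    # R3 insecticide alone
observed_combined = 0.038   # both applied

additive = fungicide_gain + insecticide_gain
multiplicative = (1 + fungicide_gain) * (1 + insecticide_gain) - 1

print(f"Naive additive expectation:       {additive:.1%}")
print(f"Naive multiplicative expectation: {multiplicative:.1%}")
print(f"Observed combined gain:           {observed_combined:.1%}")
```

The observed 3.8 percent falls well short of either expectation, which is the article's point about diminishing returns when practices are combined.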
Soybean Yield Secrets (in terms of yield percentage improvements)
- Fertility (extra N, P, S and Zn)
- Variety (fuller maturity for region)
- Foliar protection (fungicide and insecticide)
- Seed treatment (fungicide, insecticide, and nematicide)
- Row width (20-inch versus 30-inch)
Read more information about Below and his research.
|
<urn:uuid:0969aaef-85aa-4166-9871-f7e0ca7be586>
|
CC-MAIN-2013-20
|
http://www.agweb.com/farmjournal/farm_journal_corn_college/article/6_secrets_to_higher_soybean_yields/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.947783
| 1,675
| 2.625
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
From The New York Times (Felicity Barringer):
The federal government has come up with dozens of ways to enhance the diminishing flow of the Colorado River, which has long struggled to keep seven states and roughly 25 million people hydrated…
…also in the mix, and expected to remain in the final draft of the report [ed. Colorado River Basin Water Supply & Demand Study], is a more extreme and contentious approach. It calls for building a pipeline from the Missouri River to Denver, nearly 600 miles to the west. Water would be doled out as needed along the route in Kansas, with the rest ultimately stored in reservoirs in the Denver area…
The fact that the Missouri River pipeline idea made the final draft, water experts say, shows how serious the problem has become for the states of the Colorado River basin. “I pooh-poohed this kind of stuff back in the 1960s,” said Chuck Howe, a water policy expert and emeritus professor of economics at the University of Colorado, Boulder. “But it’s no longer totally unrealistic. Currently, one can say ‘It’s worth a careful look.’ ”
The pipeline would provide the Colorado River basin [ed. Denver, Kansas, etc., are not in the Colorado River Basin] with 600,000 acre-feet of water annually, which could serve roughly a million single-family homes. But the loss of so much water from the Missouri and Mississippi River systems, which require flows high enough to sustain large vessel navigation, would most likely face strong political opposition…
Rose Davis, a spokeswoman for the Bureau of Reclamation, said that during the course of the study, the analysis done on climate change and historical data led the agency “to an acknowledged gap” between future demand and future supply as early as the middle of this century.
That is when they put out a call for broader thinking to solve the water problem. “When we did have that wake-up call, we threw open the doors and said, ‘Bring it on,’ ” she said. “Nothing is too silly.”[...]
It is unclear how much such a pipeline project would cost, though estimates run into the billions of dollars. That does not include the cost of the new electric power that would be needed (along with the construction of new generating capacity) to pump the water uphill from Leavenworth, Kan., to the front range reservoirs serving Denver, about a mile above sea level, according to Sharlene Leurig, an expert on water-project financing at Ceres, a nonprofit group based in Boston that works with investors to promote sustainability.
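A static-lift calculation suggests why the power requirement is such a sticking point. In the sketch below, the 600,000 acre-feet volume comes from the coverage above, while the elevations, pump efficiency, and power price are assumptions; friction losses over roughly 600 miles of pipe are ignored, so the true energy demand would be higher still.

```python
# Rough lower bound on annual pumping energy for a Missouri-to-Denver pipeline.
# Only the static lift is counted; elevations, efficiency, and price are assumed.
ACRE_FT_M3 = 1233.5                      # cubic meters per acre-foot
volume_m3 = 600_000 * ACRE_FT_M3         # annual delivery cited in the article
mass_kg = volume_m3 * 1000.0             # ~7.4e11 kg of water per year
lift_m = 1400.0                          # Leavenworth (~240 m) to Denver-area reservoirs (~1,640 m), assumed
pump_efficiency = 0.8                    # assumed wire-to-water efficiency
price_per_kwh = 0.08                     # assumed power price, USD/kWh

energy_kwh = mass_kg * 9.81 * lift_m / pump_efficiency / 3.6e6
avg_power_mw = energy_kwh / 8760 / 1000

print(f"Annual pumping energy: ~{energy_kwh / 1e9:.1f} billion kWh")
print(f"Average continuous load: ~{avg_power_mw:.0f} MW")
print(f"Annual energy bill at ${price_per_kwh}/kWh: ~${energy_kwh * price_per_kwh / 1e6:.0f} million")
```

Even this optimistic floor works out to several hundred megawatts of continuous load and a nine-figure annual power bill, consistent with Leurig's point that new generating capacity would be needed on top of the capital cost of the pipeline itself.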
If the Denver area had this new source of water to draw on, it could reduce the supplies that come from the Colorado River basin on the other side of the Continental Divide.
But [Burke W. Griggs] and some federal officials said that the approval of such a huge water project remained highly unlikely.
Ms. Leurig noted that local taxpayers and utility customers would be shouldering most of the expense of such a venture through their tax and water bills, which would make conservation a more palatable alternative.
More Missouri River Reuse Project coverage here.
|
<urn:uuid:ab47fb9c-7f5e-4526-b509-4201b9967b13>
|
CC-MAIN-2013-20
|
http://coyotegulch.wordpress.com/2012/12/10/missouri-river-reuse-project-i-pooh-poohed-this-kind-of-stuff-back-in-the-1960s-chuck-howe/?like=1&source=post_flair&_wpnonce=57db9114c5
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.96201
| 666
| 2.546875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change"
],
"nature": [
"conservation"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
The April issue of Scientific American includes an exclusive excerpt from Bill McKibben's new book, Eaarth: Making a Life on a Tough New Planet, plus an interview that challenges his assumptions. Expanded answers to key interview questions, and additional queries and replies, appear here.
McKibben is a scholar in residence at Middlebury College in Vermont and is a co-founder of the climate action group, 350.org. He argues that humankind, because of its actions, now lives on a fundamentally different world, which he calls Eaarth. This celestial body can no longer support the economic growth model that has driven society for the past 200 years. To avoid its own collapse, humankind must instead seek to maintain wealth and resources, in large part by shifting to more durable, localized economies—especially in food and energy production.
[A Scientific American interview with McKibben follows.]
You entitled your book Eaarth, because you claim that we have permanently altered the planet. How so? And why should we change our ways now?
Well, gravity still applies. But fundamental characteristics have changed, like the way the seasons progress, how much rain falls, the meteorological tropics—which have expanded about two degrees north and south, making Australia one big fire zone. This is a different world. We underestimated how finely balanced the planet's physical systems are. Few people have come to grips with this. The perception, still, is that this is a future issue. It's not—it's here now.
Is zero growth necessary, or would "very slight" growth be sustainable?
A specific number is not part of the analysis. I'm more interested in trajectories: What happens if we move away from growth as the answer to everything and head in a different direction? We've tried very little else. We can measure society by other means, and when we do, the world can become much more robust and secure. You start having a food supply you can count on, and an energy supply you can count on, and know they aren't undermining the rest of the world. You start building communities that are strong enough to count on, so individual accumulation of wealth becomes less important.
If "growth" should no longer be our mantra, then what should it be?
We need stability. We need systems that don't rip apart. Durability needs to be our mantra. The term "sustainability" means essentially nothing to most people. "Maintenance" is not very flashy. "Maturity" would be the word we really want, but it's been stolen by the AARP. So durability is good; durability is a virtue.
In part, you're advocating a return to local reliance. How small is "local"? And can local reliance work only in certain places?
We'll figure out the sensible size. It could be a town, a region, a state. But to find the answer, we have to get the incredibly distorting subsidies out of our current systems. They send all kinds of bad signals about what we should be doing. In energy we've underwritten fossil fuel for a long time; unbelievable gifts to the "clean coal" industry, and on and on. It's even more egregious in agriculture. Most of the United States's cropland is devoted to growing corn and soybeans--not because there's an unbelievable demand to eat corn and soybeans, but because there are federal subsidies to grow them—written into the law by huge agricultural companies who control certain senators. Once subsidies wither, we can figure out what scale of industry makes sense. It will make sense to grow a lot of things closer to home.
It's plausible to "go local" in, say, your home state of Vermont, where residents have money and are forward-looking—and their basic needs are met. But what about people in poor places; don't they need outside help?
Absolutely. The rich nations have screwed up the climate. It's our absolute responsibility to figure out how to allow poor people to have something approaching a decent life. What happens to the poorest and most vulnerable people in the world? They get dengue fever. The fields they depend on are ruined by drought or flood. The glaciers that feed the Ganges will be gone, yet 400 million people depend on that water. We are not helping the poor by destabilizing the planet's systems. Meantime, what works best for them? Local, labor-intensive, low-input agriculture: It provides jobs, security, stability and food, and helps make local ecological systems robust enough to withstand the damage that's coming.
U.S. debt is rising to insane levels because the country has lived beyond its means, which supports your call to switch from growth to maintenance. But how do countries like the U.S. get out of debt without growing? Do we need a transition period where growth eliminates debt, and then we embrace durability?
My sense is that all of this will flow logically from the physics and chemistry of the world we're moving into, just like the centralized industrial model flowed logically from the physics and chemistry of the fossil-fueled world. The primary political question is: Can we make change happen fast enough to avoid all-out collapses that are plausible, even likely, under the patterns we're operating in now? How do we force global changes that move these transitions more quickly than they want to move? We have an incredibly small amount of time; we have already passed the threshold points in some respects. We best get to work.
|
<urn:uuid:919585eb-f1f1-4d32-80a4-202da545b609>
|
CC-MAIN-2013-20
|
http://www.scientificamerican.com/article.cfm?id=bill-mckibben-question-and-answer
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.956115
| 1,136
| 2.8125
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"drought"
],
"nature": [
"ecological"
]
}
|
{
"strong": 1,
"weak": 1,
"total": 2,
"decision": "accepted_strong"
}
|
Tucked inside Carl Zimmer's wonderful and thorough feature on de-extinction, a topic that got a TEDx coming out party last week, we find a tantalizing, heartbreaking anecdote about the time scientists briefly, briefly brought an extinct species back to life.
The story begins in 1999, when scientists determined that there was a single remaining bucardo, a wild goat native to the Pyrenees, left in the world. They named her Celia and wildlife veterinarian Alberto Fernández-Arias put a radio collar around her neck. She died nine months later in January 2000, crushed by a tree. Her cells, however, were preserved.
Working with the time's crude life sciences tools, José Folch led a Franco-Spanish team that attempted to bring the bucardo, as a species, back from the dead.
It was not pretty. They injected the nuclei from Celia's cells into goat eggs that had been emptied of their DNA, then implanted 57 of them into different goat surrogate mothers. Only seven goats got pregnant, and of those, six had miscarriages. Which meant that after all that work, only a single goat carried a Celia clone to term. On July 30, 2003, the scientists performed a cesarean section.
Here, let's turn the narrative over to Zimmer's story:
As Fernández-Arias held the newborn bucardo in his arms, he could see that she was struggling to take in air, her tongue jutting grotesquely out of her mouth. Despite the efforts to help her breathe, after a mere ten minutes Celia's clone died. A necropsy later revealed that one of her lungs had grown a gigantic extra lobe as solid as a piece of liver. There was nothing anyone could have done.
A species had been brought back. And ten minutes later it was gone again. Zimmer continues:
The notion of bringing vanished species back to life--some call it de-extinction--has hovered at the boundary between reality and science fiction for more than two decades, ever since novelist Michael Crichton unleashed the dinosaurs of Jurassic Park on the world. For most of that time the science of de-extinction has lagged far behind the fantasy. Celia's clone is the closest that anyone has gotten to true de-extinction. Since witnessing those fleeting minutes of the clone's life, Fernández-Arias, now the head of the government of Aragon's Hunting, Fishing and Wetlands department, has been waiting for the moment when science would finally catch up, and humans might gain the ability to bring back an animal they had driven extinct.
"We are at that moment," he told me.
That may be. And the tools available to biologists are certainly superior. But there's no developed ethics of de-extinction, as Zimmer elucidates throughout his story. It may be possible to bring animals that humans have killed off back from extinction, but is it wise, Zimmer asks?
"The history of putting species back after they've gone extinct in the wild is fraught with difficulty," says conservation biologist Stuart Pimm of Duke University. A huge effort went into restoring the Arabian oryx to the wild, for example. But after the animals were returned to a refuge in central Oman in 1982, almost all were wiped out by poachers. "We had the animals, and we put them back, and the world wasn't ready," says Pimm. "Having the species solves only a tiny, tiny part of the problem."
Maybe another way to think about it, as Jacquelyn Gill argues in Scientific American, is that animals like mammoths have to perform (as the postmodern language would have it) their own mammothness within the complex social context of a herd.
When we think of cloning woolly mammoths, it's easy to picture a rolling tundra landscape, the charismatic hulking beasts grazing lazily amongst arctic wildflowers. But what does cloning a woolly mammoth actually mean? What is a woolly mammoth, really? Is one lonely calf, raised in captivity and without the context of its herd and environment, really a mammoth?
Does it matter that there are no mammoth matriarchs to nurse that calf, to inoculate it with necessary gut bacteria, to teach it how to care for itself, how to speak to other mammoths, where the ancestral migration paths are, and how to avoid sinkholes and find water? Does it matter that the permafrost is melting, and that the mammoth steppe is gone?...
Ultimately, cloning woolly mammoths doesn't end in the lab. If the goal really is de-extinction and not merely the scientific equivalent of achievement unlocked!, then bringing back the mammoth means sustained effort, intensive management, and a massive commitment of conservation resources. Our track record on this is not reassuring.
In other words, science may be able to produce the organisms, but society would have to produce the conditions in which they could flourish.
|
<urn:uuid:b4c4f96b-307a-4d35-9c3d-b1ec4ed02ca7>
|
CC-MAIN-2013-20
|
http://www.theatlantic.com/technology/archive/2013/03/the-10-minutes-when-scientists-brought-a-species-back-from-extinction/274118/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.972978
| 1,030
| 2.859375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"permafrost"
],
"nature": [
"conservation",
"wetlands"
]
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
International Perspectives on Food and Fuel
Agricultural Marketing Resource Center
Co-Director – Ag Marketing Resource Center
Iowa State University
(Second in a series)
Last month we provided perspectives on the food and fuel debate from the viewpoint of U.S. consumers. This article provides world perspectives on the debate.
When discussing food issues, we must preface the discussion with the understanding that food is the most personal of all consumer purchases. We can do without or find ways to circumvent the need for most consumer products. But food is a basic need, and the specter of not having enough of it elicits an emotional response. Although this specter is foreign to U.S. consumers, it is very real for millions of people around the world.
Dwindling Grain Reserves
Although grains used for biofuels have impacted world grain usage in recent years, the conditions for limited supplies and higher prices were already in motion. Over the last ten years, world grain reserves have been dwindling. As shown in Figure 1, world stocks-to-use ratios for wheat and coarse grains have fallen by half. In 1998/99 we had reserves equal to 30% of a year’s usage (about 3.5 months). While not large, these reserves could cushion the impact of a sudden disruption in grain supplies (e.g., widespread drought). Also, these reserves were large enough to substantially dampen the price of grains. For example, the Iowa average corn price in 1998/99 was $1.87 per bushel. These low prices provided ample incentive for farmers to search for new uses for grains, such as biofuels, to strengthen grain prices. They also discouraged expanding grain production.
Source: Foreign Ag Service, USDA
However, grain prices were already on the way up, regardless of biofuels. We had entered a period when grain usage outstripped production. Non-biofuel grain usage was growing faster than production. The deficit was covered by drawing down reserves. Today’s world reserves are only 15 percent of a year’s usage, of which a significant portion is needed to smoothly transition from one year to the next. So we cannot continue to cover the deficit by drawing down reserves. The result is high prices that will ration existing supplies and stimulate future production.
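The months-of-supply figures quoted here follow directly from the stocks-to-use ratio, as the small sketch below shows. The ratios are the article's; the only assumption is that usage is spread evenly across the year.

```python
# Converting a stocks-to-use ratio into months of supply.
def months_of_supply(stocks_to_use_ratio):
    """Months of consumption covered by ending stocks, assuming steady usage."""
    return stocks_to_use_ratio * 12

for label, ratio in [("1998/99", 0.30), ("today (per the article)", 0.15)]:
    print(f"{label}: {ratio:.0%} stocks-to-use is roughly {months_of_supply(ratio):.1f} months of supply")
```

By this arithmetic, today's 15 percent ratio is well under two months of cover, part of which is needed just to bridge from one marketing year to the next, which is why small production shocks now move prices sharply.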
While dwindling grain reserves and higher prices have an adverse impact on consumers, they have a positive impact on producers. Around the world, 2.5 billion people depend on agriculture for their livelihood (FAO). This is close to 40% of the world’s population. So higher prices have both negative and positive implications.
Adverse Crop Events
We experienced adverse crop events at precisely the time when the world’s grain situation was most vulnerable to supply shocks. Periods of low production can easily upset the delicate balance between surplus and shortage, especially when reserves are low. Adverse crop events in 2007 were a driving factor in grain price increases. It was the second consecutive year of a drop in average yields around the world.
An overview of 2007 crop problems is shown below.
- Australia – multi-year drought
- U.S. winter wheat – late freeze
- Northern Europe – dry spring & wet harvest
- Southeast Europe – drought
- Ukraine and Russia – drought
- Canada – summer hot and dry
- Northwest Africa – drought
- Turkey – drought
- Argentina – late freeze & drought followed by flooding in parts of the corn-soybean belt.
Most of the world’s grains are consumed in the country in which they are produced. A relatively small amount is traded on the international market, as shown in Table 1. However, if trade is disrupted, this small amount can have a significant impact on food distribution and prices, especially during periods of low reserves.
Table 1. Percent of World Consumption from International Trade
Source: What’s Driving Food Prices, Farm Foundation, 2008
Although unknown in this country in recent years, it is not uncommon for countries to impose restrictions such as “export taxes” or “export embargoes” on agricultural commodities sold to other countries. These restrictions become increasingly common when world shortages and high prices appear. These policies are meant to discourage exports and keep food within the country for domestic consumers. Essentially, the restriction means “our citizens eat first”; if there is anything left over, your citizens can have it.
A prominent example is Argentina, a large producer of agricultural commodities such as soybeans. Argentina already had a 35 percent export tax, and in March its president, Cristina Fernández de Kirchner, increased the tax. The decision led to riots and demonstrations by Argentina’s farmers. In July the measure was narrowly rescinded by Argentina’s Senate.
These trade distortions can take many forms in addition to export taxes. Below is a listing of policies that have recently been implemented by both exporting and importing countries due to high food prices and food shortages.
- Export Bans - Ukraine, Serbia, India, Egypt, Cambodia, Vietnam, Indonesia, Kazakhstan
- Export Restriction (quantitative) – Argentina, Ukraine, India, Vietnam
- Export Taxes – China, Argentina, Russia, Kazakhstan, Malaysia
- Eliminate Export Subsidies – China
- Reduced Import Tariffs – India, Indonesia, Serbia, Thailand, EU, Korea, Mongolia
- Subsidize Consumers – Morocco, Venezuela
The long-term implications of export restrictions are negative for the world’s consumers and for world agriculture. These restrictions distort trade in agricultural commodities at precisely the time when there should be no distortion. They greatly increase the vulnerability of poor countries that are net food importers, and they penalize long-term agricultural development and growth in exporting countries.
Commodity Price Impact on Food Budgets
Although notable exceptions exist, most hunger situations are not caused by an actual shortage of food. Rather, hunger is caused by the financial inability to buy food. So how do high food prices impact consumers in low-income, food-deficit countries?
As we discussed last month, the average U.S. consumer spends only 10 percent of his/her disposable income on food (although food expenditures for low-income consumers are substantially higher). And the food the consumer buys is highly processed, packaged and often ready to eat. So, of the money spent on food, only 20 percent goes to farmers for producing basic commodities like wheat, milk, meat, etc.
The situation is much different for consumers living in low-income, food-deficit countries. An illustrative example is shown in Table 2. Half of a consumer’s disposable income may be spent on food, and primarily on staples (basic commodities). People in developing countries tend to buy basic staples and prepare them rather than buying processed or prepared food. In our example, 70 percent of their food expenditures are for staples, compared to 20 percent in high-income countries. If the prices of staples increase by 50 percent, the share of disposable income spent on food by consumers in high-income countries will only increase by one percentage point, going from 10 percent to 11 percent, or a 10 percent increase ((11 – 10)/10). However, the share spent by consumers in low-income countries increases by 17.5 percentage points, going from 50 percent to 67.5 percent, or a 35 percent increase ((67.5 – 50)/50). So, people in low-income countries, who already spend a disproportionately large amount on food, are the hardest hit by increased commodity prices. The arithmetic is worked through in the short sketch following Table 2.
Table 2. Impact of Higher Commodity Prices on Food Budgets *
| | High-Income Countries | Low-Income Food-Deficit Countries |
| --- | --- | --- |
| Food cost as % of income | 10% | 50% |
| Staples as % of total food spending | 20% | 70% |
| Expenditures on staples | $800 | $280 |
| *After a 50% price increase in staples:* | | |
| Increase in cost of staples | $400 | $140 |
| New cost of staples | $1,200 | $420 |
| New total food costs | $4,400 | $540 |
| Food cost as % of income | 11% | 67.5% |
| Percent increase in food cost | 10% | 35% |
* These are illustrative food budgets that characterize the situations for consumers in high- and low-income countries.
Source: Based on information from Global Agricultural Supply and Demand: Factors Contributing to the Recent Increase in Food Commodity Prices/WFS-0801, July, 2008. Economic Research Service, USDA.
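The percentages in Table 2 follow from straightforward budget arithmetic, sketched below. The household incomes ($40,000 and $800) are implied by the table’s figures rather than stated in the source, so treat them as illustrative assumptions.

```python
def food_budget_impact(income, food_share, staple_share, staple_price_increase):
    """Return (new food cost as a fraction of income, fractional increase in
    total food cost) when staple prices rise by `staple_price_increase`
    (e.g. 0.5 for a 50% increase)."""
    food_cost = income * food_share          # total food spending
    staples = food_cost * staple_share       # spending on staples
    new_food_cost = food_cost + staples * staple_price_increase
    return new_food_cost / income, (new_food_cost - food_cost) / food_cost

# High-income household: income implied by Table 2 is $40,000
print(food_budget_impact(40_000, 0.10, 0.20, 0.50))  # (0.11, 0.10) -> 11% of income, +10%
# Low-income, food-deficit household: income implied by Table 2 is $800
print(food_budget_impact(800, 0.50, 0.70, 0.50))     # (0.675, 0.35) -> 67.5% of income, +35%
```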
Future Demand Growth
The demand for grains will continue to grow in the future. One of the driving factors is expanding world population. From 2000 to 2005, world population grew by more than the entire population of the United States. As shown in Table 3, 85 percent of the growth occurred in Asia and Africa. The population of Europe actually declined. So, most of the growth is occurring in developing countries.
Table 3. World Population Growth (2000 to 2005)
In combination with population growth, the expanding world “middle class” is demanding high-value food products that place additional pressure on world agricultural production. A future article titled “China on a Western Diet” will address this topic.
The Impact of Biofuels
A reason commonly given for the current high world food prices is the diversion of cropland acreage away from food production to energy production. In the U.S. ethanol is made from corn. As shown in Table 4, U.S. corn acreage increased substantially in 2007 from its 2003-06 average. The majority of those acres were taken from soybean production. In 2008, corn acreage retreated and soybean acreage rebounded to its 2003-06 level.
Corn and soybeans are used primarily for livestock feed. For example, only about 10 percent of U.S. corn production is processed directly into food products. Most is fed to livestock, whose products are consumed largely by high-income consumers. However, U.S. feed corn prices have driven up food corn prices in Africa and Mexico, where corn goes directly into the human food chain. Moreover, soybean oil is an important staple in the Far East. In China, almost everything is cooked in soybean oil or other vegetable oil.
Table 4. U.S. Planted Acreage of Major Crops
1/ Represents 59% of world corn acreage
2/ Represents less than 10% of world wheat acreage
3/ Represents less than 1% of world rice acreage
The basic staples of many poor countries are grains that can be consumed directly like wheat and rice.
Although the world wheat price increased substantially in 2007 and early 2008, this was not due to the encroachment of corn for U.S. biofuels production. As shown in Table 4, U.S. wheat acreage actually increased in 2007 from the 2003-06 average, and increased again in 2008. However, in the absence of corn for biofuels production, wheat acres might have expanded more, or the actual expansion might have been achieved at lower wheat prices. Moreover, the need for expanded corn acreage in the future will place continued pressure on wheat acreage and on the wheat price needed to maintain existing acreage.
The small U.S. rice acreage is on land that is not suited to the production of traditional grains, so the threat of substitution is limited.
On a worldwide basis, about 35 million acres of land were used for biofuels in 2004. This is about one percent of total arable land. The Energy Information Administration predicts this will increase to 2 to 3.5 percent by 2030.
While the biofuels industry (U.S. and worldwide) is a contributing factor, it is not the only cause of rising world commodity food prices. The precise impact is difficult to assess. In next month’s article we will attempt to provide some comparison scenarios of food supplies and prices if the biofuels industry did not exist.
Most of the popular press reports about the food and energy situation focus on actions we can take to solve the situation immediately. However, if we are to be successful, we must also take a holistic and long-term view of the situation and implement policies and programs that will impact the long-term causes of the problem. Below we address the situation from a short-, intermediate- and long-term perspective.
There is no good short-term solution to the situation we face today. Many of the programs and policies we implement today will not produce results until the intermediate and long term time periods. However, if you are one of the millions without enough to eat, solutions that provide help next year or in the next decade provide little comfort.
Programs that provide immediate food aid and other assistance are required to meet the current needs. However, these programs need to provide assistance without competing with the local agricultural sector. If the problem is the high price of food rather than an actual shortage of food, buying food locally at world prices and providing it to residents at a discounted price provides food for local residents while supporting local agriculture. Conversely, bringing in food staples from outside competes directly with local farmers and impedes the country’s ability to be self-sustaining in food production in future years.
Higher agricultural prices stimulate farmers around the world to increase production. This is a powerful force that is often neglected in discussions about the current food situation. The payoff from using better seed varieties, more fertilizer and other production inputs is magnified when grain prices are high.
However, this will not impact today’s situation. At least one production cycle is required to increase production. And this assumes that farmers have access to production inputs and the money to purchase them. So, programs that provide access and funding are important for increased production in the coming years.
Moreover, policies that limit commodity price increases within countries must be avoided. Developing countries are under pressure to limit price increases in an effort to ease the domestic short-term situation. However, high prices are necessary to stimulate increased domestic production. So, short term programs and policies need to be designed that provide immediate food assistance without depressing or limiting prices.
Increased funding for agricultural research and education is the long-term solution to the situation. New seed varieties, new and improved production inputs, better cultural practices, increased knowledge of how to apply these practices, and an array of other research efforts have the ability to significantly increase world agricultural production.
Although research programs are of little value in the short-term, their cumulative impact over five, ten or more years can be enormous. However, the challenge will be great. The combined forces of:
- continued world population growth,
- increased demand for higher quality food by the world’s expanding “middle class”, and
- the need to provide both food and fuel,
require careful monitoring by the international community and substantial worldwide investment in agricultural research and application. And this agricultural expansion needs to be done in a sustainable manner while adapting to the impacts of climate change on the world’s agricultural production capacity in the near term and mitigating climate change in the long term.
References and Further Reading
Issue Report: What's Driving Food Prices? - Farm Foundation, July 2008.
Global Agricultural Supply and Demand: Factors Contributing to the Recent Increase in Food Commodity Prices/WFS-0801, July, 2008. Economic Research Service, USDA.
Rising Food Prices and Global Food Needs: The U.S. Response, Congressional Research Service, May 2008.
Rising Food Prices: Policy Options and World Bank Response, World Bank, 2008.
Implications of Higher Global Food Prices for Poverty in Low-income Countries, World Bank, April, 2008.
Impact of High Food and Fuel Prices on Developing Countries—Frequently Asked Questions. International Monetary Fund, April, 2008.
The High-Level Conference on World Food Security: the Challenges of Climate Change and Bioenergy, Food and Agriculture Organization, June 2008.
Biofuels and Sustainable Development, Executive Session on Grand Challenges of the Sustainability Transition, May 2008.
|
<urn:uuid:3dd7701d-d6d0-4918-9b86-0f09c43562a4>
|
CC-MAIN-2013-20
|
http://www.agmrc.org/renewable_energy/agmrc_renewable_energy_newsletter.cfm/international_perspectives_on_food_and_fuel?show=article&articleID=13&issueID=5
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.926424
| 3,325
| 2.734375
| 3
|
[
"climate"
] |
{
"climate": [
"climate change",
"drought",
"food security"
],
"nature": []
}
|
{
"strong": 2,
"weak": 1,
"total": 3,
"decision": "accepted_strong"
}
|
|Search Results (6 videos found)|
|NASASciFiles - Meteors
NASA Sci Files segment explaining what meteors, meteoroids, and meteorites are and the differences in these.
Keywords: NASA Sci Files; Rock; Comet; Outer Space; Meteor; Meteoroid; Meteorite; Sonic Boom; Speed of Sound; Seismic Activity; Fire Ball; Shooting Star;
Popularity (downloads): 2170
|NASAWhy?Files - Equilibrium
NASA Why? Files segment explaining the concept of equilibrium and how the Treehouse Detectives could maintain equilibrium in a Martian environment.
Keywords: NASA Why? Files; Adaptation; Environment; Oxygen; Atmosphere; Astronauts; Trash Management; Module; Sunlight; Gravity; Equlilibrium; Balanced System; Mars; Habitat; Weather; Meteors; Plants; Algae; Algal Bloom; Fish;
Popularity (downloads): 1401
|NASASciFiles - Moon Phases
NASA Sci Files segment explaining the phases of the moon and how they are created.
Keywords: NASA Sci Files; Gravity; Craters; Meteors; Astroids; Water; Earth; Moon; Moon Phases; Illuminations; Revolve; New Moon; Full Moon; First Quarter; Third Quarter; Sun; Lunar Phases; Axis; Apollo; Tides; Beach; Gravitational Pull; Oceans; Century;
Popularity (downloads): 3386
|NASASciFiles - The Case of the Shaky Quake
NASA Sci Files video containing the following eleven segments. NASA Sci Files segment exploring the different types of waves that earthquakes create. NASA Sci Files segment exploring faults and...
Keywords: NASA Sci Files; Earthquake; Waves; Primary; Compressional Waves; Secondary; Sheer Waves; Earth; Vibrate; Epicenter; Surface Wave; Crust; Rock Layers; Faults; Normal Fault; Hanging Wall; Foot Wall; Reverse Fault; Strike-Slip Fault; Lithosphere; Plates; Earth; Fault Line; Plate Boundaries; Divergent Boundaries; Rift Valleys; Volcanoes; Convergent Boundary; Mountains; Transform Boundary; San Andreas Fault; Interplate Earthquakes; Fossils; Plate Tectonics; Dinosaur; Bones; Excavation; Climate; Riverbed; Arid; Equator; Continental Drift; Alfred Wagner; Pangaea; Rock Structures; Sandstone; Chimney Formation; Grand Canyon; Global Positioning System; Stations; Satellites; Crustal Movement; Earth; Blind Fault; Computer Simulation; Slip Rate; Prediction; Displacement; Layers; Core; Diameter; Iron; Nickle; Solid; Liquid; Dense; Mantle; Basalt; Granite; Density; Graduated Cylinder; Plates; Measurement; Richter Scale; Moment Magnitude Scale; Scientific Journals; Observations; Data; Epicenter; Comet; Outer Space; Meteor; Meteoroid; Meteorite; Sonic Boom; Speed of Sound; Seismic Activity; Fire Ball; Shooting Star; Earthquake Facts; Frenquency; Location; Intensity; California; Alaska; Weather; Seismograph; Inertia; Newton; Vertical Motion; Horizontal Motion; Seismology; Tremor; S Waves; P Waves; Sound Waves; Seismogram; Triangulation; Graph; Compass; World Map; Student Activity; Epicenter; Seismic Station;
Popularity (downloads): 2158
|NASASciFiles - The Case of the Galactic Vacation
NASA Sci Files video containing the following eleven segments. NASA Sci Files segment exploring the Arecibo Observatory, what it does, and where it is located. NASA Sci Files segment...
Keywords: NASA Sci Files; Arecibo Observatory; Telescope; Radio Telescope; Radio Waves; Signals; Universe; Pulsar; Quasar; Reflector; Receiver; Electrical Signal; Control Room; Scientists; Equator; Wavelength; Optical; Atmosphere; Solar System; Galaxy; Extraterrestrial Intelligence; Artificial Signal; Forces of Motion; Free Fall; Weightlessness; Inertia; Acceleration; Parabola; Accelerometer; Space Travel; Roller Coaster; Navigation and Vehicle Health Monitoring System; Modified Bathrooms; Gravity; Zero Gravity; Exercise Equipment; Kitchen; Starship 2040; Orbit; International Space Station; Living Environment; Earth; Commander; Mars; Tourist Attraction; Canyon; Crater; Solar System; Planet; Water; Liquid; Frozen; Seasons; Axis; Polar Ice Caps; Atmosphere; Gas; Carbon Dioxide; Oxygen; Nitrogen; Hydrogen; Temperature; Space Suit; Common Denomenator; Meteors; Astroids; Water; Earth; Moon; Moon Phases; Illuminations; Revolve; New Moon; Full Moon; First Quarter; Third Quarter; Sun; Lunar Phases; Axis; Apollo; Tides; Beach; Gravitational Pull; Oceans; Century; Space; Distances; Parallax; Experiment; Student Activity; Optics; Protractor; Vertex; Angle; Data; Propulsion System; Space Radiation; Bone Mass; Chemical Rockets; Spaceship; Gases; Plasma; Magnetic Field; Exhaust; Energy; Heat; Electricity; Nuclear Power; Fusion; Thermonuclear Reaction; Technology; Arecibo Telescope; Solar System; Extra-solar Planets; Stars; Planets; Lightyears; Reflecting Telescope; Light; Dim; Betelgeuse; Giant Star; Life; Colors; Red; Blue; Temperature; Yellow; Dwarf Star; Sun; Habitable Zone; Ultraviolet Radiation; Puerto Rico; Galaxy; Orion Nebula; Hydrogen Gas; Whirlpool Galaxy; Extreme Environment; Boiling Temperature; Air Pressure; Celcius; Oxygen; Gravitational Force; Jupiter; Kilometers; Inner Planets; Mercury; Venus; Lava Flows; Helium; Saturn; Uranus; Neptune; Pluto; Astronomer; Proxima Centauri;
Popularity (downloads): 1602
|NASAWhy?Files - The Case of the Inhabitable Habitat
NASA Why? Files video containing the following fifteen segments. NASA Why? Files segment explaining how astronauts adapt to a new environment like space. NASA Why? Files segment explaining how astronauts...
Keywords: NASA Why? Files; NASA Why? Files; Adaptation; Astronauts; Space; Altitude Sickness; Oxygen; Environment; Elevation; Sea Level; Training; Weightlessness; Free Fall; Parabola; Weightless Wonder; Airplane; Simulate; Zero Gravity; Vomit Comet; Trash Management; Module; Sunlight; Gravity; Equlilibrium; Balanced System; Mars; Habitat; Weather; Meteors; Plants; Algae; Algal Bloom; Fish; Atmosphere; Minerals; Water; Photosynthesis; Carbon Dioxide; Food Web; Consumers; Producers; Decomposers; Carnivores; Herbivores; Ominvore; Community; Survival; Bacteria; Fungi; Desert; Ocean; Food; Shelter; Reef; Lagoon; Forest; Pond; Animals; Rain Forest; Predators; Behaviors; Gravity; Outer Space; Microgravity; Earth; NASA; Gravitational Force; Boiling Point; Vacuum Pump; Martian Atmosphere; Boil; Density; Ice; Liquid Water; Water Vapor; Student Activity; Migration; Migratory Patterns; Turtles; Data; Coordinates; Food Source; Space Walk; International Space Station; Hubble Space Telescope; Neutral Bouyancy; Laboratory; Orbit; Space Suit; Radiation; Seeds; Plant Growth Chamber; Plant Reproduction; Germinate; Gases; Transpiration; Pores; Leaves; Evaporation; Condensation; Space Vehicles; Food; Nutrition; Space Seeds; Arabidopsis; Mustard Weed; Life Cycle; Control Group; Records; Reproduction; Normal Growth; Bioregenerative System; Extreme Temperature; Space Suit; Radiation; Protection; Outer Space; Air Pressure; Long Johns; Maximum Absorbency Garment; Iterative Process; Gloves; Space Station; Space Trash; Reduce; Reuse; Recycle; Trash Cans; Efficient Packaging; Progress; Hardware; Self-Sufficient; Soil; Nutrients; Terrarium; The Red Planet; Robotic Airplane; Winds; Iron; Lowlands; Highlands; Volcanoe; Canyon; Thin Atmosphere; Cold; Dry; Nitrogen; Argon;
Popularity (downloads): 2175
|
<urn:uuid:7a8e7385-a09c-4655-9cde-6384af66f95d>
|
CC-MAIN-2013-20
|
http://www.open-video.org/results.php?keyword_search=true&terms=+Meteors
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.672482
| 1,787
| 3.453125
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"adaptation",
"carbon dioxide"
],
"nature": [
"habitat"
]
}
|
{
"strong": 2,
"weak": 1,
"total": 3,
"decision": "accepted_strong"
}
|
Institute for the Study of Earth, Oceans, and Space at UNH
Scientists Say Developing Countries Will Be Hit Hard By Water Scarcity in the 21st Century
By Sharon Keeler
UNH News Bureau
July 11, 2001
DURHAM, N.H. -- The entire water cycle of the globe has been changed by human activities and even more dramatic changes lie ahead, said a group of experts at an international conference in Amsterdam on global change this week.
"Today, approximately 2 billion people are suffering from water stress, and models predict that this will increase to more than 3 billion (or about 40 percent of the population) in 2025," said Charles Vorosmarty, a research professor in the University of New Hampshire's Institute for the Study of Earth, Oceans, and Space.
There will be winners and losers in terms of access to safe water. The world's poor nations will be the biggest losers. Countries already suffering severe water shortages, such as Mexico, Pakistan, northern China, Poland and countries in the Middle East and sub-Saharan Africa will be hardest hit.
"Water scarcity means a growing number of public health, pollution and economic development problems," said Vorosmarty.
"To avoid major conflict through competition for water resources, we urgently need international water use plans," added Professor Hartmut Grassl from the Max-Planck-Institute for Meteorology in Germany. "I believe this should be mediated by an established intergovernmental body."
The water cycle is affected by climate change, population growth, increasing water demand, changes in vegetation cover and finally the El Nino Southern Oscillation, bringing drought to some areas and flooding to others. Surprisingly, at the global scale, population growth and increasing demand for water -- not climate change -- are the primary contributing factors in future water scarcity to the year 2025.
"But at the regional scale, which is where all the critical decisions are made, it is the combination of population growth, increasing demand for water, and climate change that is the main culprit," said Vorosmarty.
According to El Nino expert, Professor Antonio Busalacchi from the University of Maryland, the two major El Nino events of the century occurred in the last 15 years and there are signs that the frequency may increase due to human activities.
"In 1982-83, what was referred to as the "El Nino event of the century" occurred with global economic consequences totaling more than $13 billion," said Busalacchi. "The recently concluded 1997-1998 El Nino was the second El Nino event of the century with economic losses estimated to be upward of $89 billion."
|
<urn:uuid:62224b44-50bb-4f13-ba4c-993a1afd2ab3>
|
CC-MAIN-2013-20
|
http://www.unh.edu/news/news_releases/2001/july/sk_20010712vorosmarty.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.944163
| 548
| 3.140625
| 3
|
[
"climate"
] |
{
"climate": [
"climate change",
"drought"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Biological invasions: a growing threat to biodiversity
07 May 2012 | News story
Biological invasions: a growing threat to biodiversity, human health and food security. Policy recommendations for the Rio+20 process drafted by IUCN SSC Invasive Species Specialist Group and IUCN's Invasive Species Initiative.
Planet Under Pressure 2012 was the largest gathering of global change scientists leading up to the United Nations Conference on Sustainable Development (Rio+20) with a total of 3,018 delegates at the conference venue and over 3,500 that attended virtually via live webstreaming. The first State of the Planet Declaration was issued at the conference.
Following the conference and declaration, several ISSG members were concerned about the limited attention being paid to the issue of biological invasions and invasive alien species in the Rio+20 process. Members proposed the development and submission of a policy paper highlighting the growing threat of biological invasions to biodiversity, human health and food security for the Rio+20 process.
After extensive consultation with the membership, the ISSG and the IUCN's Invasive Species Initiative (ISI) developed and submitted a policy brief on biological invasions and invasive alien species to the IUCN. This brief will be included in the IUCN documentation for Rio+20, and its text will be reflected in the umbrella position paper (which will form the basis of IUCN’s statement to the Rio+20 conference).
The Rio+20 Conference will take place in Rio de Janeiro, Brazil, from June 20 to 22, 2012, to mark the 20th anniversary of the United Nations Conference on Environment and Development, also called the “Rio Earth Summit”. The conference will focus on two themes: 1) a Green Economy in the context of sustainable development and poverty eradication; and 2) the Institutional Framework for Sustainable Development.
|
<urn:uuid:12135c03-6130-4c1c-b83a-d6e03d43a44c>
|
CC-MAIN-2013-20
|
http://www.iucn.org/fr/nouvelles_homepage/nouvelles_par_theme/politique_mondiale_news/?9767/Biological-invasions-a-growing-threat-to-biodiversity
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.914797
| 387
| 2.734375
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"food security"
],
"nature": [
"biodiversity",
"invasive species"
]
}
|
{
"strong": 2,
"weak": 1,
"total": 3,
"decision": "accepted_strong"
}
|
Significance and Use
Sediment provides habitat for many aquatic organisms and is a major repository for many of the more persistent chemicals that are introduced into surface waters. In the aquatic environment, most anthropogenic chemicals and waste materials, including toxic organic and inorganic chemicals, eventually accumulate in sediment. Mounting evidence exists of environmental degradation in areas where USEPA Water Quality Criteria (WQC; Stephan et al. (67)) are not exceeded, yet organisms in or near sediments are adversely affected (Chapman, 1989 (68)). The WQC were developed to protect organisms in the water column and were not directed toward protecting organisms in sediment. Concentrations of contaminants in sediment may be several orders of magnitude higher than in the overlying water; however, whole-sediment concentrations have not been strongly correlated to bioavailability (Burton, 1991 (69)). Partitioning or sorption of a compound between water and sediment may depend on many factors including: aqueous solubility, pH, redox, affinity for sediment organic carbon and dissolved organic carbon, grain size of the sediment, sediment mineral constituents (oxides of iron, manganese, and aluminum), and the quantity of acid volatile sulfides in sediment (Di Toro et al., 1991 (70); Giesy et al., 1988 (71)). Although certain chemicals are highly sorbed to sediment, these compounds may still be available to the biota. Chemicals in sediments may be directly toxic to aquatic life or can be a source of chemicals for bioaccumulation in the food chain.
The objective of a sediment test is to determine whether chemicals in sediment are harmful to or are bioaccumulated by benthic organisms. The tests can be used to measure interactive toxic effects of complex chemical mixtures in sediment. Furthermore, knowledge of specific pathways of interactions among sediments and test organisms is not necessary to conduct the tests (Kemp et al., 1988 (72)). Sediment tests can be used to: (1) determine the relationship between toxic effects and bioavailability, (2) investigate interactions among chemicals, (3) compare the sensitivities of different organisms, (4) determine spatial and temporal distribution of contamination, (5) evaluate hazards of dredged material, (6) measure toxicity as part of product licensing or safety testing, (7) rank areas for clean up, and (8) estimate the effectiveness of remediation or management practices.
A variety of methods have been developed for assessing the toxicity of chemicals in sediments using amphipods, midges, polychaetes, oligochaetes, mayflies, or cladocerans (Test Method E 1706, Guide E 1525, Guide E 1850; Annex A1, Annex A2; USEPA, 2000 (73); EPA, 1994b (74); Environment Canada, 1997a (75); Environment Canada, 1997b (76)). Several endpoints are suggested in these methods to measure potential effects of contaminants in sediment, including survival, growth, behavior, or reproduction; however, survival of test organisms in 10-day exposures is the endpoint most commonly reported. These short-term exposures that only measure effects on survival can be used to identify high levels of contamination in sediments, but may not be able to identify moderate levels of contamination in sediments (USEPA, 2000 (73); Sibley et al., 1996 (77); Sibley et al., 1997a (78); Sibley et al., 1997b (79); Benoit et al., 1997 (80); Ingersoll et al., 1998 (81)). Sublethal endpoints in sediment tests might also prove to be better estimates of responses of benthic communities to contaminants in the field (Kemble et al., 1994 (82)). Insufficient information is available to determine if the long-term test conducted with Leptocheirus plumulosus (Annex A2) is more sensitive than 10-d toxicity tests conducted with this or other species.
The decision to conduct short-term or long-term toxicity tests depends on the goal of the assessment. In some instances, sufficient information may be gained by measuring sublethal endpoints in 10-day tests. In other instances, the 10-day tests could be used to screen samples for toxicity before long-term tests are conducted. While the long-term tests are needed to determine direct effects on reproduction, measurement of growth in these toxicity tests may serve as an indirect estimate of reproductive effects of contaminants associated with sediments (Annex A1).
Use of sublethal endpoints for assessment of contaminant risk is not unique to toxicity testing with sediments. Numerous regulatory programs require the use of sublethal endpoints in the decision-making process (Pittinger and Adams, 1997, (83)) including: (1) Water Quality Criteria (and State Standards); (2) National Pollution Discharge Elimination System (NPDES) effluent monitoring (including chemical-specific limits and sublethal endpoints in toxicity tests); (3) Federal Insecticide, Rodenticide and Fungicide Act (FIFRA) and the Toxic Substances Control Act (TSCA, tiered assessment includes several sublethal endpoints with fish and aquatic invertebrates); (4) Superfund (Comprehensive Environmental Responses, Compensation and Liability Act; CERCLA); (5) Organization of Economic Cooperation and Development (OECD, sublethal toxicity testing with fish and invertebrates); (6) European Economic Community (EC, sublethal toxicity testing with fish and invertebrates); and (7) the Paris Commission (behavioral endpoints).
Results of toxicity tests on sediments spiked at different concentrations of chemicals can be used to establish cause-and-effect relationships between chemicals and biological responses. Results of toxicity tests with test materials spiked into sediments at different concentrations may be reported in terms of an LC50 (median lethal concentration), an EC50 (median effect concentration), an IC50 (inhibition concentration), or as a NOEC (no observed effect concentration) or LOEC (lowest observed effect concentration). However, spiked sediment may not be representative of chemicals associated with sediment in the field. Mixing time (Stemmer et al., 1990b (84)), aging (Landrum et al., 1989 (85); Word et al., 1987 (86); Landrum et al., 1992 (87)), and the chemical form of the material can affect responses of test organisms in spiked sediment tests.
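As an illustration only (the standard itself does not prescribe an estimation method, and point estimates such as the LC50 are normally computed with probit, trimmed Spearman-Karber, or similar procedures), here is a minimal sketch of one simple approach: linear interpolation between the two bracketing treatments on a log-concentration scale. The concentration series and mortalities are invented.

```python
import math

def lc50_by_interpolation(concentrations, mortality):
    """Estimate the concentration causing 50% mortality by linear interpolation
    on log10(concentration) between the two treatments that bracket 50%.
    `concentrations` must be sorted ascending; `mortality` given as fractions."""
    pairs = list(zip(concentrations, mortality))
    for (c_lo, m_lo), (c_hi, m_hi) in zip(pairs, pairs[1:]):
        if m_lo <= 0.5 <= m_hi and m_hi > m_lo:
            frac = (0.5 - m_lo) / (m_hi - m_lo)
            log_lc50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_lc50
    raise ValueError("50% mortality is not bracketed by the tested concentrations")

# Hypothetical spiked-sediment series (mg/kg dry weight) and observed mortality fractions
print(lc50_by_interpolation([1, 3, 10, 30, 100], [0.05, 0.10, 0.35, 0.70, 0.95]))  # ~16 mg/kg
```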
Evaluating effect concentrations for chemicals in sediment requires knowledge of factors controlling their bioavailability. Similar concentrations of a chemical in units of mass of chemical per mass of sediment dry weight often exhibit a range in toxicity in different sediments (Di Toro et al., 1990 (88); Di Toro et al., 1991 (70)). Effect concentrations of chemicals in sediment have been correlated to interstitial water concentrations, and effect concentrations in interstitial water are often similar to effect concentrations in water-only exposures. The bioavailability of nonionic organic compounds in sediment is often inversely correlated with the organic carbon concentration. Whatever the route of exposure, these correlations of effect concentrations to interstitial water concentrations indicate that predicted or measured concentrations in interstitial water can be used to quantify the exposure concentration to an organism. Therefore, information on partitioning of chemicals between solid and liquid phases of sediment is useful for establishing effect concentrations (Di Toro et al., 1991 (70)).
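For nonionic organic chemicals, the partitioning relationship described above is commonly approximated with an organic-carbon-normalized partition coefficient (Koc): C_pore = C_sed / (f_oc * Koc). The sketch below is a minimal illustration of that equilibrium-partitioning estimate; the numerical values are invented, and the standard itself does not supply this formula or these coefficients.

```python
def porewater_concentration(c_sediment_mg_per_kg, f_oc, koc_l_per_kg_oc):
    """Estimate the interstitial (pore) water concentration, in mg/L, from a
    whole-sediment concentration, assuming equilibrium partitioning:
        C_pore = C_sed / (f_oc * Koc)
    where f_oc is the fraction of organic carbon in the sediment and Koc is
    the organic-carbon-normalized partition coefficient (L/kg organic carbon)."""
    return c_sediment_mg_per_kg / (f_oc * koc_l_per_kg_oc)

# Illustrative values: 5 mg/kg dry weight in sediment, 2% organic carbon, Koc = 10,000 L/kg OC
print(porewater_concentration(5.0, 0.02, 1.0e4))  # 0.025 mg/L
```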
Field surveys can be designed to provide either a qualitative reconnaissance of the distribution of sediment contamination or a quantitative statistical comparison of contamination among sites.
Surveys of sediment toxicity are usually part of more comprehensive analyses of biological, chemical, geological, and hydrographic data. Statistical correlations may be improved and sampling costs may be reduced if subsamples are taken simultaneously for sediment tests, chemical analyses, and benthic community structure.
Table 2 lists several approaches the USEPA has considered for the assessment of sediment quality (USEPA, 1992 (89)). These approaches include: (1) equilibrium partitioning, (2) tissue residues, (3) interstitial water toxicity, (4) whole-sediment toxicity and sediment-spiking tests, (5) benthic community structure, (6) effect ranges (for example, effect range median, ERM), and (7) sediment quality triad (see USEPA, 1989a, 1990a, 1990b, 1992b (90, 91, 92, 93) and Wenning and Ingersoll, 2002 (94) for a critique of these methods). The sediment assessment approaches listed in Table 2 can be classified as numeric (for example, equilibrium partitioning), descriptive (for example, whole-sediment toxicity tests), or a combination of numeric and descriptive approaches (for example, ERM; USEPA, 1992c (95)). Numeric methods can be used to derive chemical-specific sediment quality guidelines (SQGs). Descriptive methods such as toxicity tests with field-collected sediment cannot be used alone to develop numerical SQGs for individual chemicals. Although each approach can be used to make site-specific decisions, no single approach can adequately address sediment quality. Overall, an integration of several methods using the weight of evidence is the most desirable approach for assessing the effects of contaminants associated with sediment (Long et al., 1991 (96); MacDonald et al., 1996 (97); Ingersoll et al., 1996 (98); Ingersoll et al., 1997 (99); Wenning and Ingersoll, 2002 (94)). Hazard evaluations integrating data from laboratory exposures, chemical analyses, and benthic community assessments (the sediment quality triad) provide strong complementary evidence of the degree of pollution-induced degradation in aquatic communities (Burton, 1991 (69); Chapman, 1992, 1997 (100, 101)).
Regulatory Applications—Test Method E 1706 provides information on the regulatory applications of sediment toxicity tests.
The USEPA Environmental Monitoring Management Council (EMMC) recommended the use of performance-based methods in developing standards (Williams, 1993 (102)). Performance-based methods were defined by EMMC as a monitoring approach which permits the use of appropriate methods that meet preestablished demonstrated performance standards (11.2).
The USEPA Office of Water, Office of Science and Technology, and Office of Research and Development held a workshop to provide an opportunity for experts in the field of sediment toxicology and staff from the USEPA Regional and Headquarters Program offices to discuss the development of standard freshwater, estuarine, and marine sediment testing procedures (USEPA, 1992a, 1994a (89, 103)). Workgroup participants arrived at a consensus on several culturing and testing methods. In developing guidance for culturing test organisms to be included in the USEPA methods manual for sediment tests, it was agreed that no one method should be required to culture organisms. However, the consensus at the workshop was that success of a test depends on the health of the cultures. Therefore, having healthy test organisms of known quality and age for testing was determined to be the key consideration relative to culturing methods. A performance-based criteria approach was selected in USEPA, 2000 (73) as the preferred method through which individual laboratories could use unique culturing methods rather than requiring use of one culturing method.
This standard recommends the use of performance-based criteria to allow each laboratory to optimize culture methods and minimize effects of test organism health on the reliability and comparability of test results. See Annex A1 and Annex A2 for a listing of performance criteria for culturing or testing.
1.1 This test method covers procedures for testing estuarine or marine organisms in the laboratory to evaluate the toxicity of contaminants associated with whole sediments. Sediments may be collected from the field or spiked with compounds in the laboratory. General guidance is presented in Sections 1-15 for conducting sediment toxicity tests with estuarine or marine amphipods. Specific guidance for conducting 10-d sediment toxicity tests with estuarine or marine amphipods is outlined in Annex A1 and specific guidance for conducting 28-d sediment toxicity tests with Leptocheirus plumulosus is outlined in Annex A2.
1.2 Procedures are described for testing estuarine or marine amphipod crustaceans in 10-d laboratory exposures to evaluate the toxicity of contaminants associated with whole sediments (Annex A1; USEPA 1994a (1)). Sediments may be collected from the field or spiked with compounds in the laboratory. A toxicity method is outlined for four species of estuarine or marine sediment-burrowing amphipods found within United States coastal waters. The species are Ampelisca abdita, a marine species that inhabits marine and mesohaline portions of the Atlantic coast, the Gulf of Mexico, and San Francisco Bay; Eohaustorius estuarius, a Pacific coast estuarine species; Leptocheirus plumulosus, an Atlantic coast estuarine species; and Rhepoxynius abronius, a Pacific coast marine species. Generally, the method described may be applied to all four species, although acclimation procedures and some test conditions (that is, temperature and salinity) will be species-specific (Section 12 and Annex A1). The toxicity test is conducted in 1-L glass chambers containing 175 mL of sediment and 775 mL of overlying seawater. Exposure is static (that is, water is not renewed), and the animals are not fed over the 10-d exposure period. The endpoint in the toxicity test is survival, with reburial of surviving amphipods as an additional measurement that can be used as an endpoint for some of the test species (for R. abronius and E. estuarius). Performance criteria established for this test include a requirement that average survival of amphipods in the negative control treatment be greater than or equal to 90 %. Procedures are described for use with sediments with pore-water salinity ranging from >0 o/oo to fully marine.
1.3 A procedure is also described for determining the chronic toxicity of contaminants associated with whole sediments with the amphipod Leptocheirus plumulosus in laboratory exposures (Annex A2; USEPA-USACE 2001 (2)). The toxicity test is conducted for 28 d in 1-L glass chambers containing 175 mL of sediment and about 775 mL of overlying water. Test temperature is 25° ± 2°C, and the recommended overlying water salinity is 5 o/oo ± 2 o/oo (for test sediment with pore water at 1 o/oo to 10 o/oo) or 20 o/oo ± 2 o/oo (for test sediment with pore water >10 o/oo). Four hundred millilitres of overlying water is renewed three times per week, at which times test organisms are fed. The endpoints in the toxicity test are survival, growth, and reproduction of amphipods. Performance criteria established for this test include a requirement that average survival of amphipods in the negative control treatment be greater than or equal to 80 % and that there be measurable growth and reproduction in all replicates of the negative control treatment. This test is applicable for use with sediments from oligohaline to fully marine environments, with a silt content greater than 5 % and a clay content less than 85 %.
1.4 A salinity of 5 or 20 o/oo is recommended for routine application of the 28-d test with L. plumulosus (Annex A2; USEPA-USACE 2001 (2)) and a salinity of 20 o/oo is recommended for routine application of the 10-d test with E. estuarius or L. plumulosus (Annex A1). However, the salinity of the overlying water for tests with these two species can be adjusted to a specific salinity of interest (for example, a salinity representative of the site of interest, or the objective of the study may be to evaluate the influence of salinity on the bioavailability of chemicals in sediment). More importantly, the salinity tested must be within the tolerance range of the test organisms (as outlined in Annex A1 and Annex A2). If tests are conducted with procedures different from those described in 1.3 or in Table A1.1 (for example, different salinity, lighting, temperature, feeding conditions), additional tests are required to determine comparability of results (1.10). If there is not a need to make comparisons among studies, then the test could be conducted at a selected salinity for the sediment of interest.
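The numeric set-up and acceptability rules in 1.2-1.4 lend themselves to a simple pre-flight check. The sketch below encodes only the thresholds stated above (>= 90 % control survival for the 10-d test, >= 80 % for the 28-d test, and the 10 o/oo pore-water cutoff for choosing 5 versus 20 o/oo overlying water in the 28-d L. plumulosus test); the function names and interface are ours, and the 28-d check covers only survival, not the growth and reproduction requirements also listed above.

```python
def control_survival_acceptable(control_survival_fraction, test="10d"):
    """Negative-control survival criterion: >= 0.90 for the 10-d amphipod test
    (Annex A1) and >= 0.80 for the 28-d L. plumulosus test (Annex A2).
    Note: the 28-d test additionally requires measurable growth and
    reproduction in all negative-control replicates, which is not checked here."""
    threshold = 0.90 if test == "10d" else 0.80
    return control_survival_fraction >= threshold

def recommended_overlying_salinity(pore_water_salinity_ppt):
    """Recommended overlying-water salinity (o/oo) for the 28-d L. plumulosus
    test: 5 o/oo for pore-water salinity of 1-10 o/oo, 20 o/oo above 10 o/oo."""
    return 5 if pore_water_salinity_ppt <= 10 else 20

print(control_survival_acceptable(0.93, test="10d"))  # True
print(control_survival_acceptable(0.78, test="28d"))  # False
print(recommended_overlying_salinity(8))              # 5
print(recommended_overlying_salinity(25))             # 20
```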
1.5 Future revisions of this standard may include additional annexes describing whole-sediment toxicity tests with other groups of estuarine or marine invertebrates (for example, information presented in Guide E 1611 on sediment testing with polychaetes could be added as an annex to future revisions to this standard). Future editions to this standard may also include methods for conducting the toxicity tests in smaller chambers with less sediment (Ho et al. 2000 (3), Ferretti et al. 2002 (4)).
1.6 Procedures outlined in this standard are based primarily on procedures described in the USEPA (1994a (1)), USEPA-USACE (2001(2)), Test Method E 1706, and Guides E 1391, E 1525, E 1688, Environment Canada (1992 (5)), DeWitt et al. (1992a (6); 1997a (7)), Emery et al. (1997 (8)), and Emery and Moore (1996 (9)), Swartz et al. (1985 (10)), DeWitt et al. (1989 (11)), Scott and Redmond (1989 (12)), and Schlekat et al. (1992 (13)).
1.7 Additional sediment toxicity research and methods development are now in progress to (1) refine sediment spiking procedures, (2) refine sediment dilution procedures, (3) refine sediment Toxicity Identification Evaluation (TIE) procedures, (4) produce additional data on confirmation of responses in laboratory tests with natural populations of benthic organisms (that is, field validation studies), and (5) evaluate relative sensitivity of endpoints measured in 10- and 28-d toxicity tests using estuarine or marine amphipods. This information will be described in future editions of this standard.
1.8 Although standard procedures are described in Annex A2 of this standard for conducting chronic sediment tests with L. plumulosus, further investigation of certain issues could aid in the interpretation of test results. Some of these issues include further investigation to evaluate the relative toxicological sensitivity of the lethal and sublethal endpoints to a wide variety of chemicals spiked in sediment and to mixtures of chemicals in sediments from contamination gradients in the field (USEPA-USACE 2001 (2)). Additional research is needed to evaluate the ability of the lethal and sublethal endpoints to estimate the responses of populations and communities of benthic invertebrates to contaminated sediments. Research is also needed to link the toxicity test endpoints to a field-validated population model of L. plumulosus that would then generate estimates of population-level responses of the amphipod to test sediments and thereby provide additional ecologically relevant interpretive guidance for the laboratory toxicity test.
1.9 This standard outlines specific test methods for evaluating the toxicity of sediments with A. abdita, E. estuarius, L. plumulosus, and R. abronius. While standard procedures are described in this standard, further investigation of certain issues could aid in the interpretation of test results. Some of these issues include the effect of shipping on organism sensitivity, additional performance criteria for organism health, sensitivity of various populations of the same test species, and confirmation of responses in laboratory tests with natural benthos populations.
1.10 General procedures described in this standard might be useful for conducting tests with other estuarine or marine organisms (for example, Corophium spp., Grandidierella japonica, Lepidactylus dytiscus, Streblospio benedicti), although modifications may be necessary. Results of tests, even those with the same species, using procedures different from those described in the test method may not be comparable and using these different procedures may alter bioavailability. Comparison of results obtained using modified versions of these procedures might provide useful information concerning new concepts and procedures for conducting sediment tests with aquatic organisms. If tests are conducted with procedures different from those described in this test method, additional tests are required to determine comparability of results. General procedures described in this test method might be useful for conducting tests with other aquatic organisms; however, modifications may be necessary.
1.11 Selection of Toxicity Testing Organisms:
1.11.1 The choice of a test organism has a major influence on the relevance, success, and interpretation of a test. Furthermore, no one organism is best suited for all sediments. The following criteria were considered when selecting test organisms to be described in this standard (Table 1 and Guide E 1525). Ideally, a test organism should: (1) have a toxicological database demonstrating relative sensitivity to a range of contaminants of interest in sediment, (2) have a database for interlaboratory comparisons of procedures (for example, round-robin studies), (3) be in direct contact with sediment, (4) be readily available from culture or through field collection, (5) be easily maintained in the laboratory, (6) be easily identified, (7) be ecologically or economically important, (8) have a broad geographical distribution, be indigenous (either present or historical) to the site being evaluated, or have a niche similar to organisms of concern (for example, similar feeding guild or behavior to the indigenous organisms), (9) be tolerant of a broad range of sediment physico-chemical characteristics (for example, grain size), and (10) be compatible with selected exposure methods and endpoints (Guide E 1525). Methods utilizing selected organisms should also be (11) peer reviewed (for example, journal articles) and (12) confirmed with responses with natural populations of benthic organisms.
1.11.2 Of these criteria (Table 1), a database demonstrating relative sensitivity to contaminants, contact with sediment, ease of culture in the laboratory or availability for field-collection, ease of handling in the laboratory, tolerance to varying sediment physico-chemical characteristics, and confirmation with responses with natural benthic populations were the primary criteria used for selecting A. abdita, E. estuarius, L. plumulosus, and R. abronius for the current edition of this standard for 10-d sediment tests (Annex A1). The species chosen for this method are intimately associated with sediment, due to their tube- dwelling or free-burrowing, and sediment ingesting nature. Amphipods have been used extensively to test the toxicity of marine, estuarine, and freshwater sediments (Swartz et al., 1985 (10); DeWitt et al., 1989 (11); Scott and Redmond, 1989 (12); DeWitt et al., 1992a (6); Schlekat et al., 1992 (13)). The selection of test species for this standard followed the consensus of experts in the field of sediment toxicology who participated in a workshop entitled “Testing Issues for Freshwater and Marine Sediments”. The workshop was sponsored by USEPA Office of Water, Office of Science and Technology, and Office of Research and Development, and was held in Washington, D.C. from 16-18 September 1992 (USEPA, 1992 (14)). Of the candidate species discussed at the workshop, A. abdita, E. estuarius, L. plumulosus, and R. abronius best fulfilled the selection criteria, and presented the availability of a combination of one estuarine and one marine species each for both the Atlantic (the estuarine L. plumulosus and the marine A. abdita) and Pacific (the estuarine E. estuarius and the marine R. abronius) coasts. Ampelisca abdita is also native to portions of the Gulf of Mexico and San Francisco Bay. Many other organisms that might be appropriate for sediment testing do not now meet these selection criteria because little emphasis has been placed on developing standardized testing procedures for benthic organisms. For example, a fifth species, Grandidierella japonica was not selected because workshop participants felt that the use of this species was not sufficiently broad to warrant standardization of the method. Environment Canada (1992 (5)) has recommended the use of the following amphipod species for sediment toxicity testing: Amphiporeia virginiana, Corophium volutator, Eohaustorius washingtonianus, Foxiphalus xiximeus, and Leptocheirus pinguis. A database similar to those available for A. abdita, E. estuarius, L. plumulosus, and R. abronius must be developed in order for these and other organisms to be included in future editions of this standard.
1.11.3 The primary criterion used for selecting L. plumulosus for chronic testing of sediments was that this species is found in both oligohaline and mesohaline regions of estuaries on the East Coast of the United States and is tolerant to a wide range of sediment grain size distribution (USEPA-USACE 2001 (2), Annex A2). This species is easily cultured in the laboratory and has a relatively short generation time (that is, about 24 d at 23°C; DeWitt et al. 1992a (6)) that makes this species adaptable to chronic testing (Section 12).
1.11.4 An important consideration in the selection of specific species for test method development is the existence of information concerning relative sensitivity of the organisms both to single chemicals and complex mixtures. Several studies have evaluated the sensitivities of A. abdita, E. estuarius, L. plumulosus, or R. abronius, either relative to one another, or to other commonly tested estuarine or marine species. For example, the sensitivity of marine amphipods was compared to other species that were used in generating saltwater Water Quality Criteria. Seven amphipod genera, including Ampelisca abdita and Rhepoxynius abronius, were among the test species used to generate saltwater Water Quality Criteria for 12 chemicals. Acute amphipod toxicity data from 4-d water-only tests for each of the 12 chemicals was compared to data for (1) all other species, (2) other benthic species, and (3) other infaunal species. Amphipods were generally of median sensitivity for each comparison. The average percentile rank of amphipods among all species tested was 57 %; among all benthic species, 56 %; and, among all infaunal species, 54 %. Thus, amphipods are not uniquely sensitive relative to all species, benthic species, or even infaunal species (USEPA 1994a (1)). Additional research may be warranted to develop tests using species that are consistently more sensitive than amphipods, thereby offering protection to less sensitive groups.
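The percentile ranks quoted in 1.11.4 can be computed as the share of compared species that are less sensitive (that is, have a higher effect concentration). The sketch below uses one common convention and entirely invented LC50 values; the cited analysis may have used a different ranking convention.

```python
def sensitivity_percentile_rank(species_lc50, all_lc50s):
    """Percentile rank of one species' sensitivity within a comparison set:
    the percentage of values in `all_lc50s` at or above `species_lc50`
    (a lower LC50 means greater sensitivity, hence a higher rank)."""
    at_or_above = sum(1 for value in all_lc50s if value >= species_lc50)
    return 100.0 * at_or_above / len(all_lc50s)

# Invented 4-d water-only LC50s (mg/L) for a pool of species; 4.4 is "the amphipod"
pool = [0.5, 1.2, 3.0, 4.4, 8.9, 15.0, 22.0, 40.0]
print(sensitivity_percentile_rank(4.4, pool))  # 62.5
```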
1.11.5 Williams et al. (1986 (15)) compared the sensitivity of the R. abronius 10-d whole sediment test, the oyster embryo (Crassostrea gigas) 48-h abnormality test, and the bacterium (Vibrio fisheri) 1-h luminescence inhibition test (that is, the Microtox test) to sediments collected from 46 contaminated sites in Commencement Bay, WA. Rhepoxynius abronius were exposed to whole sediment, while the oyster and bacterium tests were conducted with sediment elutriates and extracts, respectively. Microtox was the most sensitive test, with 63 % of the sites eliciting significant inhibition of luminescence. Significant mortality of R. abronius was observed in 40 % of test sediments, and oyster abnormality occurred in 35 % of sediment elutriates. Complete concordance (that is, sediments that were either toxic or not toxic in all three tests) was observed in 41 % of the sediments. Possible sources for the lack of concordance at other sites include interspecific differences in sensitivity among test organisms, heterogeneity in contaminant types associated with test sediments, and differences in routes of exposure inherent in each toxicity test. These results highlight the importance of using multiple assays when performing sediment assessments.
1.11.6 Several studies have compared the sensitivity of combinations of the four amphipods to sediment contaminants. For example, there are several comparisons between A. abdita and R. abronius, between E. estuarius and R. abronius, and between A. abdita and L. plumulosus. There are fewer examples of direct comparisons between E. estuarius and L. plumulosus, and no examples comparing L. plumulosus and R. abronius. There is some overlap in relative sensitivity from comparison to comparison within each species combination, which appears to indicate that all four species are within the same range of relative sensitivity to contaminated sediments.
1.11.6.1 Word et al. (1989 (16)) compared the sensitivity of A. abdita and R. abronius to contaminated sediments in a series of experiments. Both species were tested at 15°C. Experiments were designed to compare the response of the organisms rather than to provide a comparison of the sensitivity of the methods (that is, Ampelisca abdita would normally be tested at 20°C). Sediments collected from Oakland Harbor, CA, were used for the comparisons. Twenty-six sediments were tested in one comparison, while 5 were tested in the other. Analysis of results using the Kruskal-Wallis rank sum test for both experiments demonstrated that R. abronius exhibited greater sensitivity to the sediments than A. abdita at 15°C. Long and Buchman (1989 (17)) also compared the sensitivity of A. abdita and R. abronius to sediments from Oakland Harbor, CA. They also determined that A. abdita showed less sensitivity than R. abronius, but they also showed that A. abdita was less sensitive to sediment grain size factors than R. abronius.
1.11.6.2 DeWitt et al. (1989 (11)) compared the sensitivity of E. estuarius and R. abronius to sediment spiked with fluoranthene and field-collected sediment from industrial waterways in Puget Sound, WA, in 10-d tests, and to aqueous cadmium (CdCl2) in a 4-d water-only test. Eohaustorius estuarius was from two (spiked sediment) to seven (one Puget Sound, WA, sediment) times less sensitive than R. abronius in the sediment tests, and ten times less sensitive to CdCl2 in the water-only test. These results are supported by the findings of Pastorok and Becker (1990 (18)), who found the acute sensitivities of E. estuarius and R. abronius to be generally comparable to each other, and both were more sensitive than Neanthes arenaceodentata (survival and biomass endpoints), Panope generosa (survival), and Dendraster excentricus (survival).
1.11.6.3 Leptocheirus plumulosus was as sensitive as the freshwater amphipod Hyalella azteca to an artificially created gradient of sediment contamination when the latter was acclimated to oligohaline salinity (that is, 6 o/oo; McGee et al., 1993 (19)). DeWitt et al. (1992b (20)) compared the sensitivity of L. plumulosus with three other amphipod species, two mollusks, and one polychaete to highly contaminated sediment collected from Baltimore Harbor, MD, that was serially diluted with clean sediment. Leptocheirus plumulosus was more sensitive than the amphipods Hyalella azteca and Lepidactylus dytiscus and exhibited equal sensitivity with E. estuarius. Schlekat et al. (1995 (21)) describe the results of an interlaboratory comparison of 10-d tests with A. abdita, L. plumulosus and E. estuarius using dilutions of sediments collected from Black Rock Harbor, CT. There was strong agreement among species and laboratories in the ranking of sediment toxicity and the ability to discriminate between toxic and non-toxic sediments.
1.11.6.4 Hartwell et al. (2000 (22)) compared the response of Leptocheirus plumulosus (10-d survival or growth) with the responses of the amphipod Lepidactylus dytiscus (10-d survival or growth), the polychaete Streblospio benedicti (10-d survival or growth), and lettuce germination (Lactuca sativa in a 3-d exposure) and observed that L. plumulosus was relatively insensitive compared with either L. dytiscus or S. benedicti in exposures to 4 sediments with elevated metal concentrations.
1.11.6.5 Ammonia is a naturally occurring compound in marine sediment that results from the degradation of organic debris. Interstitial ammonia concentrations in test sediment can range from <1 mg/L to in excess of 400 mg/L (Word et al., 1997 (23)). Some benthic infauna show toxicity to ammonia at concentrations of about 20 mg/L (Kohn et al., 1994 (24)). Based on water-only and spiked-sediment experiments with ammonia, threshold limits for test initiation and termination have been established for the L. plumulosus chronic test. Smaller (younger) individuals are more sensitive to ammonia than larger (older) individuals (DeWitt et al., 1997a (7), b (25)). Results of a 28-d test indicated that neonates can tolerate very high levels of pore-water ammonia (>300 mg/L total ammonia) for short periods of time with no apparent long-term effects (Moore et al., 1997 (26)). It is not surprising that L. plumulosus has a high tolerance for ammonia given that these amphipods are often found in organic-rich sediments in which diagenesis can result in elevated pore-water ammonia concentrations. Insensitivity to ammonia by L. plumulosus should not be construed as an indicator of the sensitivity of the L. plumulosus sediment toxicity test to other chemicals of concern.
1.11.7 Limited comparative data are available for concurrent water-only exposures of all four species in single-chemical tests. The studies that do exist generally show that no one species is consistently the most sensitive.
1.11.7.1 The relative sensitivity of the four amphipod species to ammonia was determined in 10-d water-only toxicity tests in order to aid interpretation of results of tests on sediments where this toxicant is present (USEPA 1994a (1)). These tests were static exposures that were generally conducted under conditions (for example, salinity, photoperiod) similar to those used for standard 10-d sediment tests. Departures from standard conditions included the absence of sediment and a test temperature of 20°C for L. plumulosus, rather than 25°C as dictated in this standard. Sensitivity to total ammonia increased with increasing pH for all four species. The rank sensitivity was R. abronius = A. abdita > E. estuarius > L. plumulosus. A similar study by Kohn et al. (1994 (24)) showed a slightly different relative sensitivity to ammonia, with A. abdita > R. abronius = L. plumulosus > E. estuarius.
1.11.7.2 Cadmium chloride has been a common reference toxicant for all four species in 4-d exposures. DeWitt et al. (1992a (6)) report the rank sensitivity as R. abronius > A. abdita > L. plumulosus > E. estuarius at a common temperature and salinity of 15°C and 28 o/oo. A series of 4-d exposures to cadmium conducted at species-specific temperatures and salinities showed the following rank sensitivity: A. abdita = L. plumulosus = R. abronius > E. estuarius (USEPA 1994a (1)).
1.11.7.3 Relative species sensitivity frequently varies among contaminants; consequently, a battery of tests including organisms representing different trophic levels may be needed to assess sediment quality (Craig, 1984 (27); Williams et al. 1986 (15); Long et al., 1990 (28); Ingersoll et al., 1990 (29); Burton and Ingersoll, 1994 (31)). For example, Reish (1988 (32)) reported the relative toxicity of six metals (arsenic, cadmium, chromium, copper, mercury, and zinc) to crustaceans, polychaetes, pelecypods, and fishes and concluded that no one species or group of test organisms was the most sensitive to all of the metals.
1.11.8 The sensitivity of an organism is related to route of exposure and biochemical response to contaminants. Sediment-dwelling organisms can receive exposure from three primary sources: interstitial water, sediment particles, and overlying water. Food type, feeding rate, assimilation efficiency, and clearance rate will control the dose of contaminants from sediment. Benthic invertebrates often selectively consume different particle sizes (Harkey et al. 1994 (33)) or particles with higher organic carbon concentrations which may have higher contaminant concentrations. Grazers and other collector-gatherers that feed on aufwuchs and detritus may receive most of their body burden directly from materials attached to sediment or from actual sediment ingestion. In some amphipods (Landrum, 1989 (34)) and clams (Boese et al., 1990 (35)) uptake through the gut can exceed uptake across the gills for certain hydrophobic compounds. Organisms in direct contact with sediment may also accumulate contaminants by direct adsorption to the body wall or by absorption through the integument (Knezovich et al. 1987 (36)).
1.11.9 Despite the potential complexities in estimating the dose that an animal receives from sediment, the toxicity and bioaccumulation of many contaminants in sediment such as Kepone®, fluoranthene, organochlorines, and metals have been correlated with either the concentration of these chemicals in interstitial water or in the case of non-ionic organic chemicals, concentrations in sediment on an organic carbon normalized basis (Di Toro et al. 1990 (37); Di Toro et al. 1991(38)). The relative importance of whole sediment and interstitial water routes of exposure depends on the test organism and the specific contaminant (Knezovich et al. 1987 (36)). Because benthic communities contain a diversity of organisms, many combinations of exposure routes may be important. Therefore, behavior and feeding habits of a test organism can influence its ability to accumulate contaminants from sediment and should be considered when selecting test organisms for sediment testing.
1.11.10 The use of A. abdita, E. estuarius, R. abronius, and L. plumulosus in laboratory toxicity studies has been field validated with natural populations of benthic organisms (Swartz et al. 1994 (39) and Anderson et al. 2001 (40) for E. estuarius; Swartz et al. 1982 (43) and Anderson et al. 2001 (40) for R. abronius; McGee et al. 1999 (41) and McGee and Fisher 1999 (42) for L. plumulosus).
1.11.10.1 Data from the USEPA Office of Research and Development's Environmental Monitoring and Assessment Program were examined to evaluate the relationship between survival of Ampelisca abdita in sediment toxicity tests and the presence of amphipods, particularly ampeliscids, in field samples. Over 200 sediment samples from two years of sampling in the Virginian Province (Cape Cod, MA, to Cape Henry, VA) were available for comparing synchronous measurements of A. abdita survival in toxicity tests to benthic community enumeration. Although species of this genus were among the more frequently occurring taxa in these samples, ampeliscids were totally absent from stations that exhibited A. abdita test survival <60 % of that in control samples. Additionally, ampeliscids were found in very low densities at stations with amphipod test survival between 60 and 80 % (USEPA 1994a (1)). These data indicate that tests with A. abdita reflect conditions that also limit ampeliscid populations in the field.
2. Referenced Documents (purchase separately) The documents listed below are referenced within the subject standard but are not provided as part of the standard.
D1129 Terminology Relating to Water
D4447 Guide for Disposal of Laboratory Chemicals and Samples
E29 Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications
E105 Practice for Probability Sampling of Materials
E122 Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process
E141 Practice for Acceptance of Evidence Based on the Results of Probability Sampling
E177 Practice for Use of the Terms Precision and Bias in ASTM Test Methods
E178 Practice for Dealing With Outlying Observations
E456 Terminology Relating to Quality and Statistics
E691 Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method
E729 Guide for Conducting Acute Toxicity Tests on Test Materials with Fishes, Macroinvertebrates, and Amphibians
E943 Terminology Relating to Biological Effects and Environmental Fate
E1241 Guide for Conducting Early Life-Stage Toxicity Tests with Fishes
E1325 Terminology Relating to Design of Experiments
E1391 Guide for Collection, Storage, Characterization, and Manipulation of Sediments for Toxicological Testing and for Selection of Samplers Used to Collect Benthic Invertebrates
E1402 Guide for Sampling Design
E1525 Guide for Designing Biological Tests with Sediments
E1611 Guide for Conducting Sediment Toxicity Tests with Polychaetous Annelids
E1688 Guide for Determination of the Bioaccumulation of Sediment-Associated Contaminants by Benthic Invertebrates
E1706 Test Method for Measuring the Toxicity of Sediment-Associated Contaminants with Freshwater Invertebrates
E1847 Practice for Statistical Analysis of Toxicity Tests Conducted Under ASTM Guidelines
E1850 Guide for Selection of Resident Species as Test Organisms for Aquatic and Sediment Toxicity Tests
Ampelisca abdita; amphipod; bioavailability; chronic; Eohaustorius estuarius; estuarine; invertebrates; Leptocheirus plumulosus; marine; Rhepoxynius abronius; sediment; toxicity; Acidity, alkalinity, pH--chemicals; Acute toxicity tests; Ampelisca abdita; Amphipods/Amphibia; Aqueous environments; Benthic macroinvertebrates (collecting); Biological data analysis--sediments; Bivalve molluscs; Chemical analysis--water applications; Contamination--environmental; Corophium; Crustacea; EC50 test; Eohaustorius estuarius; Estuarine environments; Field testing--environmental materials/applications; Geochemical characteristics; Grandidierella japonica; Leptocheirus plumulosus; Marine environments; Median lethal dose; Polychaetes; Reference toxicants; Rhepoxynius abronius; Saltwater; Seawater (natural/synthetic); Sediment toxicity testing; Static tests--environmental materials/applications; Ten-day testing; Toxicity/toxicology--water environments
|
<urn:uuid:5696801b-841c-4e58-ac11-6af8637c94c1>
|
CC-MAIN-2013-20
|
http://www.astm.org/Standards/E1367.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00008-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.889574
| 9,272
| 3.71875
| 4
|
[
"climate",
"nature"
] |
{
"climate": [
"2°c"
],
"nature": [
"habitat"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Water reuse can be defined as the use of reclaimed water for a direct beneficial purpose. The use of reclaimed water for irrigation and other purposes has been employed as a water conservation practice in Florida, California, Texas, Arizona, and other states for many years.
Reclaimed water, also known as recycled water, is water recovered from domestic, municipal, and industrial wastewater treatment plants that has been treated to standards that allow safe reuse. Properly reclaimed water is typically safe for most uses except human consumption.
Wastewater is not reclaimed water. Wastewater is untreated liquid industrial waste and/or domestic sewage from residential dwellings, commercial buildings, and industrial facilities. Gray water, or untreated wastewater from bathing or washing, is one form of wastewater. Wastewater may be land applied, but this is considered to be land treatment rather than water reuse.
The demand for fresh water in Virginia is growing as the state’s population increases. This demand can potentially exceed supply during times of even moderate drought. In recent years, the normal seasonal droughts that have occurred in Virginia have caused local and state government to enact water conservation ordinances. These ordinances limit the use of potable water (water suitable for human consumption) for such things as car washing and landscape irrigation. The potential for developing new sources of potable water is limited. Conservation measures, such as irrigating with reclaimed water, are one way to help ensure existing water supplies are utilized as efficiently as possible.
The environmental benefits of using reclaimed water include:
Reclaimed water typically comes from municipal wastewater treatment plants, although some industries (e.g., food processors) also generate water that may be suitable for nonpotable uses (Figure 1).
During primary treatment at a wastewater treatment plant, inorganic and organic suspended solids are removed from plant influent by screening and settling. The decanted effluent from the primary treatment process is then subjected to secondary treatment, which involves biological decomposition of organic material and settling to further separate water from solids. If a wastewater treatment plant is not equipped to perform advanced treatment, water is disinfected and discharged to natural water bodies following secondary treatment.
Advanced or tertiary treatment consists of further removal of suspended and dissolved solids, including nutrients, and disinfection. Advanced treatment can include:
Water that has undergone advanced treatment is disinfected prior to being released or reused. Reclaimed water often requires greater treatment than effluent that is discharged to local streams or rivers because users will typically have more direct contact with undiluted reclaimed water than undiluted effluent.
For an interactive diagram of a wastewater treatment system with more information on treatment processes, please see www.wef.org/apps/gowithflow/theflow.htm.
Although the primary focus of this publication is on the use of reclaimed water for agricultural, municipal, and residential irrigation, reclaimed water can be used for many other purposes. Non-irrigation uses for reclaimed water include:
Intentional indirect potable reuse means that reclaimed water is discharged to a water body where it is then purposefully used as a raw water supply for another water treatment plant. This occurs unintentionally in most rivers, since downstream water treatment plants use treated water discharged by upstream wastewater treatment plants.
Direct potable reuse refers to the use of reclaimed water for drinking directly after treatment, and, to date, has only been implemented in Africa (U.S. EPA, 2004).
Examples of non-irrigation permitted water reuse projects in Virginia are:
The turfgrass and ornamental horticulture industries have grown as Virginia becomes more urbanized. The acreage devoted to high-value specialty crops that benefit from irrigation, such as fruits and vegetables, is also increasing. As demand for potable water increases, maintaining turf, landscape plants, and crops will require the utilization of previously underutilized water sources.
The regulation of reclaimed water production and use encourages both the supply of and the demand for reclaimed water. The benefits to suppliers of reclaimed water include greater public awareness and demand for reclaimed water and clear guidelines for reclaimed water production. Benefits to end users include increased public acceptance of the use of reclaimed water and a subsequent decrease in the demand for fresh water.
There are no federal regulations governing reclaimed water use, but the U.S. EPA (2004) has established guidelines to encourage states to develop their own regulations. The primary purpose of federal guidelines and state regulations is to protect human health and water quality. To reduce disease risks to acceptable levels, reclaimed water must meet certain disinfection standards by either reducing the concentrations of constituents that may affect public health and/or limiting human contact with reclaimed water.
The U.S. EPA (2004) recommends that water intended for reuse should:
Biochemical oxygen demand (BOD) is an indicator of the presence of reactive organic matter in water. Total suspended solids (TSS) or turbidity (measured in nephelometric turbidity units, or NTUs) are measures of the amount of organic and inorganic particulate matter in water. Some other parameters often measured as indicators of disinfection efficiency include:
The recommended values for each of these indicators depend on the intended use of the reclaimed water (Table 1).
Table 1. Summary of U.S. EPA guidelines for water reuse for irrigation
(Adapted from U.S. EPA, 2004).
Monitoring for specific pathogens and microconstituents may become a part of the standard testing protocol as the use of reclaimed water for indirect potable reuse applications increases. Pathogens of particular concern include enteric viruses and the protozoan parasites Giardia and Cryptosporidium, whose monitoring is required by the state of Florida for water reuse projects.
Microconstituents include organic chemicals, such as pharmaceutically active substances, personal care products, endocrine disrupting compounds, and previously unregulated inorganic elements whose toxicity may be re-assessed or newly evaluated. Fish, amphibians, and birds have been found to develop reproductive system abnormalities upon direct or indirect exposure to a variety of endocrine disrupting compounds. Such microconstituents may have the potential to cause reproductive system abnormalities and immune system malfunctioning in other wildlife and humans at higher concentrations. The impacts of the extremely low concentrations of these compounds found in wastewater effluent or reclaimed water are unknown. To date, there is no evidence that microconstituents cause human health effects at environmentally relevant concentrations.
Some possible options for the removal of microconstituents from wastewater are treatment with ozone, hydrogen peroxide, and UV light. These methods can destroy some microconstituents via advanced oxidation, but the endocrine disruption activity of the by-products created during oxidation may also be of concern.
No illnesses have been directly associated with the use of properly treated reclaimed water in the U.S. (U.S. EPA, 2004). The U.S. EPA recommends, however, that ongoing research and additional monitoring for Giardia, Cryptosporidium, and microconstituents be conducted to understand changes in reclaimed water quality.
State regulations need not agree with U.S. EPA guidelines and are often more stringent. In Virginia, water reuse means direct beneficial reuse, indirect potable reuse, or a controlled use in accordance with the Water Reclamation and Reuse Regulation (9 VAC 25-740-10 et seq.; available at the Virginia Department of Environmental Quality website www.deq.virginia.gov/programs/homepage.html under Water Reuse and Reclamation.)
The Virginia Water Reclamation and Reuse Regulation establishes legal requirements for the reclamation and treatment of water that is to be reused. These requirements are designed to protect both water quality and public health, while encouraging the use of reclaimed water. The Virginia Department of Environmental Quality, Water Quality Division has oversight over the Virginia Water Reclamation and Reuse Regulation.
The primary determinants of how reclaimed water of varying quality can be used are based on treatment processes to which the water has been subjected and on quantitative chemical, physical, and biological standards. Reclaimed water suitable for reuse in Virginia is categorized as either Level 1 or Level 2 (Table 2). The minimum standard requirements for reclaimed water for specific uses are summarized in Table 3.
Table 2. Minimum standards for treatment of Level 1 and Level 2 reclaimed water.
(Summarized from Virginia Water Reclamation and Reuse Regulations: 9 VAC 25-740-10 et seq.)
Table 3. Minimum treatment requirements for irrigation and landscape-related reuse of reclaimed water in Virginia.
(Summarized from Virginia Water Reclamation and Reuse Regulations: 9 VAC 25-740-10 et seq.)
Water quality must be considered when using reclaimed water for irrigation. The following properties are critical to plant and soil health and environmental quality.
Salinity, or salt concentration, is probably the most important consideration in determining whether water is suitable for reuse (U.S. EPA, 2004). Water salinity is the total concentration of dissolved ions (e.g., sodium, calcium, chloride, boron, sulfate, nitrate) and is usually measured by determining the electrical conductivity (EC, units = dS/m) or total dissolved solids (TDS, units = mg/L) concentration of the water. Water with a TDS concentration of 640 mg/L will typically have an EC of approximately 1 dS/m.
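The rule of thumb relating TDS and EC quoted above can be written as a simple conversion. The sketch below assumes the approximate factor of 640 mg/L of TDS per dS/m of EC given in the text; the true ratio varies with the ionic composition of the water.

```python
# Sketch: approximate conversion between electrical conductivity (EC, dS/m)
# and total dissolved solids (TDS, mg/L) using the ~640 mg/L per dS/m rule
# of thumb quoted in the text. The factor is approximate and ion-dependent.
TDS_PER_EC = 640.0  # mg/L of TDS per dS/m of EC (approximate)

def ec_to_tds(ec_ds_per_m: float) -> float:
    """Estimate TDS (mg/L) from EC (dS/m)."""
    return ec_ds_per_m * TDS_PER_EC

def tds_to_ec(tds_mg_per_l: float) -> float:
    """Estimate EC (dS/m) from TDS (mg/L)."""
    return tds_mg_per_l / TDS_PER_EC

# Example: water at EC = 2 dS/m corresponds to roughly 1280 mg/L TDS,
# the "slightly saline" upper bound noted below for urban reclaimed water.
print(ec_to_tds(2.0))
```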
Salts in reclaimed water come from:
Most reclaimed water from urban areas is slightly saline (TDS ≤ 1280 mg/L or EC ≤ 2 dS/m). High salt concentrations reduce water uptake in plants by lowering the osmotic potential of the soil. For instance, residential use of water adds approximately 200 to 400 mg/L of dissolved salts (Lazarova et al., 2004a). Plants differ in their sensitivity to salt levels, so the salinity of the particular reclaimed water source should be measured so that appropriate crops and/or application rates can be selected. Most turfgrasses can tolerate water with 200-800 mg/L soluble salts, but salt levels above 2,000 mg/L may be toxic (Harivandi, 2004). For further information on managing turfgrasses when irrigating with saline water, see Carrow and Duncan (1998).
Many other crop and landscape plants are more sensitive to high soluble-salt levels than turfgrasses, and should be managed accordingly. See Wu and Dodge (2005) for a list of landscape plants with their relative salt tolerance and Maas (1987) for information on salt-tolerant crops.
Specific dissolved ions may also affect irrigation water quality. For example, irrigation water with a high concentration of sodium (Na) ions may cause dispersion of soil aggregates and sealing of soil pores. This is a particular problem in golf course irrigation (Sheikh, 2004) since soil compaction is already a concern due to persistent foot and vehicular traffic. The Sodium Adsorption Ratio (SAR), which measures the ratio of sodium to other ions, is used to evaluate the potential effect of irrigation water on soil structure. For more information on how to assess and interpret SAR levels, please see Harivandi (1999).
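For reference, the SAR mentioned above has a standard formula: sodium divided by the square root of half the sum of calcium and magnesium, all expressed in milliequivalents per litre. The sketch below is a minimal illustration with hypothetical values; interpretation thresholds should be taken from Harivandi (1999).

```python
# Sketch: Sodium Adsorption Ratio (SAR) from major cation concentrations.
# Concentrations must be in milliequivalents per litre (meq/L).
# Example values are hypothetical; see Harivandi (1999) for interpretation.
from math import sqrt

def sodium_adsorption_ratio(na_meq: float, ca_meq: float, mg_meq: float) -> float:
    """SAR = Na / sqrt((Ca + Mg) / 2), all concentrations in meq/L."""
    return na_meq / sqrt((ca_meq + mg_meq) / 2.0)

# Hypothetical reclaimed-water sample
print(round(sodium_adsorption_ratio(na_meq=6.0, ca_meq=2.5, mg_meq=1.5), 1))
```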
High levels of sodium can also be directly toxic to plants both through root uptake and by accumulation in plant leaves following sprinkler irrigation. The specific concentration of sodium that is considered to be toxic will vary with plant species and the type of irrigation system. Turfgrasses are generally more tolerant to sodium than most ornamental plant species.
Although boron (B) and chlorine (Cl) are necessary at low levels for plant growth, dissolved boron and chloride ions can cause toxicity problems at high concentrations. Specific toxic concentrations will vary depending on plant species and type of irrigation method used. Levels of boron as low as 1 to 2 mg/L in irrigation water can cause leaf burn on ornamental plants, but turfgrasses can often tolerate levels as high as 10 mg/L (Harivandi, 1999). Very salt-sensitive landscape plants such as crape myrtle (Lagerstroemia sp.), azalea (Rhododendron sp.), and Chinese privet (Ligustrum sinense) may be damaged by overhead irrigation with reclaimed water containing chloride levels over 100 mg/L, but most turfgrasses are relatively tolerant to chloride if they are mowed frequently (Harivandi, 1999; Crook, 2005).
Reclaimed water typically contains more nitrogen (N) and phosphorus (P) than drinking water. The amounts of N and P provided by the reclaimed water can be calculated as the product of the estimated irrigation volume and the N and P concentration in the water. To prevent N and P leaching into groundwater, the Virginia Water Reclamation and Reuse Regulation requires that a nutrient management plan be written for bulk use of reclaimed water not treated to achieve biological nutrient removal (BNR), which the regulation defines as treatment that achieves an annual average of 8.0 mg/L total N and 1.0 mg/L total P. Water that has been subjected to BNR treatment processes contains such low concentrations of N and P that the reclaimed water can be applied at rates sufficient to supply a crop’s water needs without risk of surface or ground water contamination.
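As a rough illustration of the volume-times-concentration calculation described above, the sketch below converts a hypothetical seasonal irrigation depth and the BNR annual-average concentrations cited in the regulation (8.0 mg/L total N and 1.0 mg/L total P) into areal nutrient loads.

```python
# Sketch: nutrient load applied with reclaimed-water irrigation, computed as
# irrigation volume x nutrient concentration. The irrigation depth below is
# hypothetical; 8.0 mg/L N and 1.0 mg/L P are the BNR averages cited above.
def nutrient_load_kg_per_ha(irrigation_mm: float, conc_mg_per_l: float) -> float:
    """kg/ha of nutrient applied for a given irrigation depth and concentration.

    1 mm of water over 1 ha is 10,000 L, so
    load (kg/ha) = depth (mm) x concentration (mg/L) / 100.
    """
    return irrigation_mm * conc_mg_per_l / 100.0

season_irrigation_mm = 300.0  # hypothetical seasonal application depth
print(nutrient_load_kg_per_ha(season_irrigation_mm, 8.0))  # N: 24 kg/ha
print(nutrient_load_kg_per_ha(season_irrigation_mm, 1.0))  # P: 3 kg/ha
```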
The Virginia Water Reclamation and Reuse Regulations require that irrigation with reclaimed water shall be limited to supplemental irrigation. Supplemental irrigation is defined as that amount of water which, in combination with rainfall, meets the water demands of the irrigated vegetation to maximize production or optimize growth.
Irrigation rates for reclaimed water are site- and crop-specific, and will depend on the following factors (U.S. EPA, 2004; Lazarova et al., 2004b).
1. First, seasonal irrigation demands must be determined. These can be predicted with:
• an evapotranspiration estimate for the particular crop being grown
• determination of the period of plant growth
• average annual precipitation data
• data for soil permeability and water holding capacity
Methods for calculating such irrigation requirements can be found in the U.S. Department of Agriculture’s National Engineering Handbook at www.info.usda.gov/CED/ftp/CED/neh-15.htm (USDA-NRCS, 2003) and in Reed et al. (1995). These calculations are more complicated for landscape plantings than for agricultural crops or turf because landscape plantings consist of many different species with different requirements. A simplified water-balance sketch is given after this list.
2. The properties of the specific reclaimed water to be used, as detailed in the section above, must be taken into account since these may limit the total amount of water that can be applied per season.
3. The availability of the reclaimed water should also be quantified, including:
• the total amount available
• the time of year when available
• availability of water storage facilities for the nongrowing season
• delivery rate and type
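As noted under item 1 above, a minimal water-balance sketch for estimating supplemental irrigation demand (evapotranspiration minus effective rainfall and stored soil water) follows; the monthly values are hypothetical, and actual designs should follow the USDA-NRCS (2003) procedures referenced above.

```python
# Sketch: supplemental irrigation demand as a simple monthly water balance.
# All numbers are hypothetical; real designs should follow USDA-NRCS (2003).
def supplemental_irrigation_mm(crop_et_mm: float,
                               effective_rain_mm: float,
                               soil_storage_mm: float = 0.0) -> float:
    """Irrigation needed = crop evapotranspiration - effective rainfall
    - plant-available water already stored in the root zone, floored at zero."""
    return max(crop_et_mm - effective_rain_mm - soil_storage_mm, 0.0)

# Hypothetical July values for a turf site
et_july = 150.0   # mm, crop evapotranspiration estimate
rain_july = 90.0  # mm, average effective precipitation
stored = 10.0     # mm, plant-available water carried over in the soil
print(supplemental_irrigation_mm(et_july, rain_july, stored))  # 50.0 mm
```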
Water reuse is actively promoted by the Florida Department of Environmental Protection since Florida law requires that the use of potable water for irrigation be limited. In 2005, 462 Florida golf courses, covering over 56,000 acres of land, were irrigated with reclaimed water. Reclaimed water was also used to irrigate 201,465 residences, 572 parks, and 251 schools. St. Petersburg is home to one of the largest dual distribution systems in the world. (A dual distribution system is one where pipes carrying reclaimed water are separate from those carrying potable water.) In existence since the 1970s, this network provides reclaimed water to residences, golf courses, parks, schools, and commercial areas for landscape irrigation, and to commercial and industrial customers for cooling and other applications.
For more information, see Crook (2005) and Florida Department of Environmental Protection (2006).
The town of Cary is the first city in the state of North Carolina to institute a dual distribution system. The system has been in operation since 2001 and can provide up to 1 million gallons of reclaimed water daily for irrigating and cooling. The reclaimed water has undergone advanced treatment and meets North Carolina water quality rules. To date, there are over 400 residential and industrial users.
For more information, see www.townofcary.org/depts/pwdept/reclaimhome.htm.
The Bayberry Hills Golf Course expansion is one of numerous water reuse projects in Massachusetts. It was initiated in 2001 as an addition to an existing golf course of seven holes irrigated with reclaimed water. These seven holes use approximately 18 million gallons of water per year, and water reuse was necessary since Yarmouth’s water supply was already operating at capacity during summer months. The reused water undergoes secondary treatment followed by ozone treatment, filtration, and UV disinfection. There are provisions for water storage during the nongrowing season. The water reuse project has reduced the nitrogen needed for golf course fertilization.
For more information on this and other reuse projects in the state of Massachusetts, see www.mapc.org/regional_planning/MAPC_Water_Reuse_Report_2005.pdf. For further information on irrigation of golf courses with reclaimed water, see United States Golf Association (1994).
The Southeast Farm in Tallahassee, Florida, has been irrigating with reclaimed water since 1966. The farm is a cooperative between the city of Tallahassee, which supplies water, and farmers who contract acreage. Until 1980, the farm was limited to 20 acres of land for hay production, but has expanded since then to 2,163 acres. The irrigation water receives secondary treatment. The crops grown are corn (Zea mays L. subsp. mays), soybeans [Glycine max (L.) Merr.], bermudagrass [Cynodon dactylon (L.) Pers.], and rye (Secale cereale L.).
In recent years, however, elevated nitrate levels have been found in the waters of Wakulla Springs State Park south of Tallahassee, which is one of the largest and deepest freshwater springs in the world. This has apparently resulted in excessive growth of algae and exotic aquatic plant species, causing reduced clarity and changes in the spring’s ecosystem. Dye studies have confirmed that at least a portion of the nitrate comes from the Southeast Farm’s irrigated fields, although studies are on-going. As a result, in June 2006, the city of Tallahassee removed all cattle from Southeast Farm, eliminated regular use of nitrogen fertilizer on the farm, and implemented a comprehensive nutrient management plan for the farm.
For more information, see www.talgov.com/you/water/pdf/sefarm.pdf or U.S. EPA (2004).
Water Conserv II has been in existence since 1986, and is the first project permitted by the Florida Department of Environmental Protection for crops for human consumption. Over 3,000 acres of citrus groves are irrigated with reclaimed water, in addition to nurseries, residential landscaping, a sand mine, and the Orange County National Golf Center. No problems have resulted from the irrigation. The reclaimed water provides adequate boron and phosphorus and maintains soil at correct pH for citrus growth. The adequate supply of water permits citrus growers to maintain optimum moisture levels for high yields and ample water for freeze protection, which requires more than eight times as much water as normal irrigation.
Although Water Conserv II had historically provided reclaimed water to citrus growers for no charge, the project recently began charging for water. It is unclear whether citrus growers will continue to irrigate with reclaimed water, or whether Water Conserv II’s emphasis will change to providing reclaimed water for residential, industrial, and landscape customers.
For more information, see www.waterconservii.com/ or U.S. EPA (2004).
This publication was reviewed by Adria Bordas, Bobby Clark, Erik Ervin, and Gary Felton. A draft version was reviewed by Bob Angelotti, Marcia Degen, Karen Harr, George Kennedy, Valerie Rourke, and Terry Wagner. Any opinions, conclusions, or recommendations expressed in this publication are those of the authors.
www.watereuse.org/: WateReuse Association. “The WateReuse Association is a non-profit organization whose mission is to advance the beneficial and efficient use of water resources through education, sound science, and technology using reclamation, recycling, reuse, and desalination for the benefit of our members, the public, and the environment.” Page contains links to water reuse projects (mostly in the western U.S.), and other useful links.
www.cvco.org/science/vwea/navbuttons/Glossary-11-01.pdf: Virginia Water Environment Association’s Virginia Water Reuse Glossary.
www.hrsd.com/waterreuse.htm: Hampton Roads (Virginia) Sanitation District water reuse page. Description of industrial water reuse project, research reports, FAQ’s, and glossary of water reuse jargon.
www.floridadep.org/water/reuse/index.htm: Florida Department of Environmental Protection water re-use page. Links to many water reuse-related resources on site, including general education/information materials, and Florida-specific links on water reuse policy, regulations, and projects.
www.gaepd.org/Files_PDF/techguide/wpb/reuse.pdf: Georgia Department of Natural Resources Environmental Protection Division’s “Guidelines for Water Reclamation and Urban Water Re-Use (2002).
www.mass.gov/dep/water/wastewater/wrfaqs.htm: Massachusetts Department of Environmental Protection FAQ on water reuse.
www.bcua.org/WPC_VT_WasteWaterReUse.htm: Bergen County (New Jersey) Utilities Authority. Describes reuse of wastewater effluent in cooling towers and for sewer cleaning.
www.owasa.org/pages/WaterReuse/questionsandanswers.html: FAQ about Orange Water and Sewer Authority’s (Carrboro, NC) water reuse project for the University of North Carolina at Chapel Hill.
Carrow, R.N. and R.R. Duncan. 1998. Salt-affected turfgrass sites: Assessment and management. John Wiley & Sons, Inc., New York, N.Y.
Crook, James. 2005. St. Petersburg, Florida, dual water system: A case study. Water conservation, reuse, and recycling: Proceedings of an Iranian-American workshop. The National Academies Press, Washington, D.C.
Florida Department of Environmental Protection. 2006. 2005 reuse inventory. FDEP, Tallahassee, FL. Available on-line at www.floridadep.org/water/reuse/inventory.htm.
Harivandi, M. Ali. 1999. Interpreting turfgrass irrigation water test results. Publication 8009. University of California Division of Agriculture and Natural Resources, Oakland, Calif. Available on-line at anrcatalog.ucdavis.edu/pdf/8009.pdf.
Harivandi, M. Ali. 2004. Evaluating recycled waters for golf course irrigation. U.S. Golf Association Green Section Record 42(6): 25-29. Available on-line at turf.lib.msu.edu/2000s/2004/041125.pdf.
Landschoot, Peter. 2007. Irrigation water quality guidelines for turfgrass sites. Department of Crop and Soil Sciences, Cooperative Extension. Penn State University, State College, Pa. Available on-line at turfgrassmanagement.psu.edu/irrigation_water_quality_for_turfgrass_sites.cfm.
Lazarova, Valentina and Takashi Asano. 2004. Challenges of sustainable irrigation with recycled water. p. 1-30. In Valentina Lazarova and Akica Bahri (ed.). Water reuse for irrigation: agriculture, landscapes, and turf grass. CRC Press, Boca Raton, Fla.
Lazarova, Valentina, Herman Bouwer, and Akica Bahri. 2004a. Water quality considerations. p. 31-60. In Valentina Lazarova and Akica Bahri (ed.). Water reuse for irrigation: agriculture, landscapes, and turf grass. CRC Press, Boca Raton, Fla.
Lazarova, Valentina, Ioannis Papadopoulous, and Akica Bahri. 2004b. Code of successful agronomic practices. p. 103-150. In Valentina Lazarova and Akica Bahri (ed.). Water reuse for irrigation: agriculture, landscapes, and turf grass. CRC Press, Boca Raton, Fla.
Maas, E.V. 1987. Salt tolerance of plants. p. 57–75. In B.R. Christie (ed.) CRC handbook of plant science in agriculture, Vol. II. CRC Press, Boca Raton, Fla.
Metropolitan Area Planning Council. 2005. Once is not enough: A guide to water reuse in Massachusetts. MAPC, Boston, Mass. Available on-line at www.mapc.org/regional_planning/MAPC_Water_Reuse_Report_2005.pdf.
Reed, Sherwood C., Ronald W. Crites, and E. Joe Middlebrooks. 1995. Natural systems for waste management and treatment. 2nd edition. McGraw-Hill, Inc. New York, N.Y.
Sheikh, Bahman. 2004. Code of practices for landscape and golf course irrigation. In Valentina Lazarova and Akica Bahri (ed.). Water reuse for irrigation: agriculture, landscapes, and turf grass. CRC Press, Boca Raton, Fla.
USDA-NRCS. 2003. Irrigation water requirements. Section 15, Chapter 2. p. 2-i-2-284. In Part 623 National Engineering Handbook. U.S. Dept. of Agriculture Natural Resources Conservation Service, Washington, D.C. Available on-line at www.info.usda.gov/CED/ftp/CED/neh-15.htm.
U.S. EPA. 2003. National primary drinking water standards. EPA 816-F-03-016. U.S. Environmental Protection Agency, Washington, D.C.
U.S. EPA. 2004. Guidelines for water reuse. EPA 645-R-04-108. U.S. Environmental Protection Agency, Washington, D.C. Available on-line at www.epa.gov/ORD/NRMRL/pubs/625r04108/625r04108.pdf.
United States Golf Association. 1994. Wastewater reuse for golf course irrigation. Lewis Publishers, Chelsea, Mich. 294 p.
VAAWW-VWEA. 2000. A Virginia water reuse glossary. Virginia Section, American Water Works Association and Virginia Water Environment Federation. Available on-line at www.cvco.org/science/vwea/navbuttons/Glossary-11-01.pdf.
Wu, Lin, and Linda Dodge. 2005. Landscape plant salt tolerance guide for recycled water irrigation. Slosson Research Endowment for Ornamental Horticulture, Department of Plant Sciences, University of California, Davis, Calif. Available on-line at ucce.ucdavis.edu/files/filelibrary/5505/20091.pdf.
Reviewed by Greg Evanylo, Extension Specialist, Crop and Soil Environmental Sciences
Virginia Cooperative Extension materials are available for public use, re-print, or citation without further permission, provided the use includes credit to the author and to Virginia Cooperative Extension, Virginia Tech, and Virginia State University.
Issued in furtherance of Cooperative Extension work, Virginia Polytechnic Institute and State University, Virginia State University, and the U.S. Department of Agriculture cooperating. Alan L. Grant, Dean, College of Agriculture and Life Sciences; Edwin J. Jones, Director, Virginia Cooperative Extension, Virginia Tech, Blacksburg; Jewel E. Hairston, Administrator, 1890 Extension Program, Virginia State, Petersburg.
May 1, 2009
|
<urn:uuid:bf39c2b1-ba73-4b48-8a84-7796ac1b235e>
|
CC-MAIN-2013-20
|
http://www.pubs.ext.vt.edu/452/452-014/452-014.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00008-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.901107
| 5,860
| 3.890625
| 4
|
[
"climate",
"nature"
] |
{
"climate": [
"drought"
],
"nature": [
"conservation",
"ecosystem",
"soil health"
]
}
|
{
"strong": 4,
"weak": 0,
"total": 4,
"decision": "accepted_strong"
}
|
Basindra Village, Ratlam District, Madhya Pradesh, is watered by the perennial Jhamand River. The river flows about 100 ft below the village and was once its only source of water. The 2-3 handpumps installed here in the 1970’s by the Public Health and Engineering Department (PHED) used to run dry in the summer. When the river would shrink in the summer, people would dig holes in the riverbed to procure water for their daily needs.
Today the village has 13 handpumps and 2 tubewells. Groundwater, therefore, is heavily relied upon for meeting daily water needs. This, however, is not a daunting prospect for the village, at least in the current scenario, for recharge measures have been taken in the form of an earthen dam, a solid weir and a check dam, all built on the Jhamand. “These structures don’t just recharge groundwater in this area, but also ensure that the river never dries up,” informs Bhandari, a PHED engineer. PHED credits the dam with making the river perennial, even as the river has visibly shrunk because of it. The Dholavar earthen dam is over a kilometre long and amasses a large amount of water in the reservoir it creates. The dam has a live storage capacity of about 50 million cubic meters, submerging 600 ha of land. The riverbed on the other side of the dam is totally dry for a distance, till it is revived again by groundwater accrual. The multi-purpose dam has been supplying water to Ratlam city since 1984 (about 5 mld) and canal irrigation to neighbouring villages.
While currently seeming like a solution, one wonders what the adverse effects of this structure could be in the long run on the riverine ecology. The impact that canal irrigation and year-round agriculture are having on the soil is already visible: “we never needed artificial fertilisers before, now they are a necessity, the soil seems tired,” observed Juvan Singh, an elderly citizen of the village. Other than impacts in the immediate environment, dams of this size typically have severe downstream impacts as well.
Basindra has piped water supply today, brought to the village from 2 tubewells. The dugwell the village possessed has, typically, fallen out of use with the introduction of handpumps and tubewells. With an electricity-run motor now providing water in people’s houses, consumption has obviously increased. This, they say, is not a problem as the reservoir has enough water. Since only about 35 families have opted for individual tap connections, the panchayat is not able to collect enough funds for the operation and maintenance of the scheme. In fact, the funds collected through community contribution (Rs.30/month/family) are not enough even to pay the electricity charges of the motor. “With the river and the handpumps close by, people don’t feel the need to spend money on piped water supply,” says Anankuar, an elderly woman of the village. “Due to constant power cuts, we never have water in our taps anyway,” she mournfully adds.
Rowty village, close by, is plagued with similar problems. Although electricity problems persist, here 80% of the people have individual tap connections. The only ones who don’t are the tribals, typically living in hamlets in the outskirts of the village, where pipes don’t reach. The charges for this facility are Rs.40/month/family, with an initial contribution of Rs.500.
Till the 1980’s, Rowty was sufficiently watered by 3 dugwells, 1 baori (stepwell) and a seasonal stream. Gradually, with climate change, deforestation, etc., rainfall began to decline, but the population pressure kept mounting. “We used to have a good monsoon every year back in the day; the dugwell and baori used to last us the entire year,” reminisced a group of villagers. “In 1978 PHED started a pipeline system from the baori, connected to public standposts, but in 1984-85 water scarcity became acute; that’s when we built a stop-dam on the seasonal stream and an overhead tank along with individual tap connections,” explained Bhandari. While these measures took care of the problem temporarily, in the 1990’s scarcity reared its ugly head once again as the sources kept drying up. The PHED then decided to build dykes and check dams in the watershed of the village for recharging groundwater. Tubewells were also drilled, but the water they yielded was of poor quality. The handpumps not only yielded poor-quality water but also ran dry in the lean season. It was in the late 1990’s that the panchayat demanded water from the Jhamand through long-distance pipes, and only in 2006 was this scheme launched. It is managed by the panchayat, which, apart from community contributions, also uses its own funds from other sources to run the system.
With the introduction of tap water, the baori and dugwells have been rendered useless and fallen into a dilapidated condition. While the PHED boasts of these two villages as success stories, it is essential to identify the loopholes. It is commendable that piped water supply has been taken seriously here, as opposed to most other villages, where water isn’t available even in public standposts and handpumps. However, the engineers themselves acknowledge that they do not factor in the electricity problem and therefore end up designing unrealistic schemes which are successful only on paper. They assert that designing schemes with lower consumption of electricity is possible, but they never consider it. Technically therefore, they have 24 x 7 water supply, but the ground reality speaks a different language.
The other point to ponder is the environmental short-sightedness displayed by their schemes. PHED needs a drastic shift in its approach to designing drinking water schemes. The focus remains on groundwater extraction by way of tubewells, ignoring dugwells and throwing traditional systems into disuse. The sustainability of the source or of the system is almost never considered. Even though they have now begun to build recharge structures to insure against falling water tables, simpler and less energy-intensive techniques are not considered. Often, their recharge structures involve dams, solid weirs, etc., which are ecologically myopic in nature and prove disastrous for the environment in the long run. It is imperative that the PHED try to revive traditional and indigenous wisdom that is culture- and geography-specific, and apply technology to that, so as to make it relevant to modern times.
Moreover, PHED has not bothered to involve the community in either of these cases. Villagers are hardly ever consulted before designing a scheme. Their opinions or needs simply don’t matter. If at all any interaction takes place between the village community and the PHED prior to the installation of a water supply scheme, it is one-sided, in the form of information-education-communication (IEC). In these two villages, no IEC activity was undertaken, neither were the locals consulted at any stage of planning or implementation. For this reason, the Village Water and Sanitation Committee (VWSC) continues to lie defunct and it is the panchayat that runs the show. In the former example of Basindra, the scheme was not demand-driven and is running into losses.
“PHED schemes are not demand driven but politics driven,” the engineers themselves claim. “Where water will be supplied and where it won’t is not a matter of need at all; it is based on politics between panchayats, the PHED and MLAs,” they discuss amongst themselves. When villagers refuse to pay for an installed scheme, PHED engineers lash out at them with hostility, failing to connect the villagers’ insistence that they do not need the scheme with their refusal to pay for it.
The point this drives home squarely is the compelling need for structural and systemic change within the governmental edifice and, more specifically, within the PHED. Under the new guidelines for provision of drinking water in villages, issued by the Department of Drinking Water and Sanitation (DDWS), PHED engineers are expected to involve communities in their schemes right from the planning stage. While very much in line with the rhetoric of participatory governance, this idea has few takers, as it is designed by those sitting in Delhi, far removed from ground realities. Policies such as these are issued in a top-down manner by central departments, in contradiction to what they ask of the engineers at the village level.
Community participation is a bottom-up process that involves consistent investment of time and effort by the engineers. It is not a one-day event or a one-visit job. Involving villagers in the planning, implementation and operation processes of a drinking water scheme entails gaining ground within all sections of the community, winning their trust, dealing with caste and gender issues, local politics and becoming aware of all the minute details of the problems they face. Mobilising the people and sustaining their confidence is a long drawn process that does not terminate once the scheme has been installed, for that is just the beginning, and running the scheme successfully henceforth requires much cooperation from the villagers.
However, this is not something the PHED engineers feel they are equipped to do. “We are expected to do IEC activities among other aspects of community participation, but neither do we have the skill nor the time for such activities” they claim. “I have over 600 villages under me, how can I undertake a two-year process of community mobilisation for each one of them?” exclaims a sub-engineer from Ratlam district. Another sub-engineer threw light on the loopholes in their planning process, “we are asked to design schemes overnight, how can we ensure people’s involvement in this manner? Our schemes therefore do not factor in local issues – geographical or social, and are unrealistic, thereby causing their own failure.” What these statements elucidate is a desperate need to bring in structural changes so as to enable PHED to respond to these local problems, which the central department often turns a blind eye to.
It is clear that many of the engineers are well aware of their limitations and weaknesses, one of them being their inability to involve communities for the reasons stated above. Lack of skill, time and manpower, as well as bureaucratic procedures, are only some of the barriers. The need of the hour isn’t just to convey the importance of bottom-up planning and participatory governance, but to bring about systemic changes within government bodies to make them more responsive to current realities. Government bodies, whether the PHED, the municipalities, the development authorities or any other, cannot consist solely of engineers, but will have to have a wing of social scientists with the skills that engineers lack to work at the village level, with the people.
Whether socially blind or ecologically short-sighted, the PHED interventions require a massive shift from being top-down, technocratic schemes to demand-driven, sustainable and people-managed system. What is needed is not a just a scheme but an entire system.
|
<urn:uuid:3276f3bc-879a-469b-91eb-5c403333123c>
|
CC-MAIN-2013-20
|
http://cseindia.org/node/4016
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00008-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.966815
| 2,351
| 3.1875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"monsoon"
],
"nature": [
"deforestation"
]
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|