The March/April 2016 issue of the beautiful environmental magazine, Orion, features an article about “a revolution in the science of dirt,” which — it claims — “is transforming American agriculture.” The article is called “Dirt First” and is written by Kristin Ohlson. Like most good stories, this one has a hero, in this case Rick Haney, a USDA soil scientist.
“Our entire agriculture industry is based on chemical inputs, but soil is not a chemistry set,” Haney explains. “It’s a biological system. We’ve treated it like a chemistry set because the chemistry is easier to measure than the soil biology.”
In nature, of course, plants grow like mad without added synthetic fertilizer, thanks to a multimillion-year-old partnership with soil microorganisms. Plants pull carbon dioxide from the atmosphere through photosynthesis and create a carbon syrup. About 60 percent of this fuels the plant’s growth, with the remaining exuded through the roots to soil microorganisms, which trade mineral nutrients they’ve liberated from rocks, sand, silt, and clay—in other words, fertilizer—for their share of the carbon bounty. Haney insists that ag scientists are remiss if they don’t pay more attention to this natural partnership.
“I’ve had scientific colleagues tell me they raised 300 bushels of corn [per acre] with an application of fertilizer, and I ask how the control plots, the ones without the fertilizer, did,” Haney says. “They tell me 220 bushels of corn. How is that not the story? How is raising 220 bushels of corn without fertilizer not the story?” If the natural processes at work in even the tired soil of a test plot can produce 220 bushels of corn, he argues, the yields of farmers consciously building soil health can be much higher.
Less than 50 percent of the synthetic fertilizer that farmers apply to most crops is actually used by plants, with much of the rest running off into drainage ditches and streams and, later, concentrating with disastrous effects in lakes and oceans. Witness the oxygen-free dead zone in the Gulf of Mexico or tap water tainted by neurotoxin-producing algae in Ohio: both phenomena are tied to fertilizer runoff. Farmers often apply fertilizer based on advice from manufacturers and university extension agents who are faithful to the agrochemical mindset, using formulas that tie X amount of desired yield to Y pounds of fertilizer applied per acre. Or they apply fertilizer based on a standard test that gauges the amount of inorganic nitrogen, potassium, and phosphorus—the basic ingredients of chemical fertilizers, often referred to as NPK—in a soil sample. Or they apply what they put on the year before, or what their neighbor applied, and then maybe a little bit more, hoping for a jackpot combination of rain, sunshine, and a good market.
The standard soil test, developed some sixty years ago, focuses only on the chemical properties of soil. Haney began developing his test in the early 1990s to focus instead on the soil’s biology. Based on the vigor of the microscopic community in a farmer’s soil, his recommendations usually call for far less than what the farmer hears elsewhere. The yields of those who heed his advice often remain the same, or rise.
Read the full article here: “Dirt First”
Scientists believe that simple land management techniques can increase the rate at which carbon is absorbed from the atmosphere and stored in soils.
For many climate change activists, the latest rallying cry has been, “Keep it in the ground,” a call to slow and stop drilling for fossil fuels. But for a new generation of land stewards, the cry is becoming, “Put it back in the ground!”
As an avid gardener and former organic farmer, I know the promise that soil holds: Every ounce supports a plethora of life. Now, evidence suggests that soil may also be a key to slowing and reversing climate change.
“I think the future is really bright,” said Loren Poncia, an energetic Northern Californian cattle rancher. Poncia’s optimism stems from the hope he sees in carbon farming, which he has implemented on his ranch. Carbon farming uses land management techniques that increase the rate at which carbon is absorbed from the atmosphere and stored in soils. Scientists, policy makers, and land stewards alike are hopeful about its potential to mitigate climate change.
Carbon is the key ingredient to all life. It is absorbed by plants from the atmosphere as carbon dioxide and, with the energy of sunlight, converted into simple sugars that build more plant matter. Some of this carbon is consumed by animals and cycled through the food chain, but much of it is held in soil as roots or decaying plant matter. Historically, soil has been a carbon sink, a place of long-term carbon storage.
But many modern land management techniques, including deforestation and frequent tilling, expose soil-bound carbon to oxygen, limiting the soil’s absorption and storage potential. In fact, carbon released from soil is estimated to contribute one-third of global greenhouse gas emissions, according to the Food and Agriculture Organization of the United Nations.
Ranchers and farmers have the power to address that issue. Pastures make up 3.3 billion hectares, or 67 percent, of the world’s farmland. Carbon farming techniques can sequester up to 50 tons of carbon per hectare over a pasture’s lifetime. This motivates some ranchers and farmers to do things a little differently.
“It’s what we think about all day, every day,” said Sallie Calhoun of Paicines Ranch on California’s central coast. “Sequestering soil carbon is essentially creating more life in the soil, since it’s all fed by photosynthesis. It essentially means more plants into every inch of soil.”
Calhoun’s ranch sits in fertile, rolling California pastureland about an hour’s drive east of Monterey Bay. She intensively manages her cattle’s grazing, moving them every few days across 7,000 acres. This avoids compaction, which decreases soil productivity, and also allows perennial grasses to grow back between grazing periods. Perennial grasses, like sorghum and bluestems, have long root systems that sequester far more carbon than their annual cousins.
By starting with a layer of compost, Calhoun has also turned her new vineyard into an effective carbon sink. Compost is potent for carbon sequestration because of how it enhances otherwise unhealthy soil, enriching it with nutrients and microbes that increase its capacity to harbor plant growth. Compost also increases water-holding capacity, which helps plants thrive even in times of drought. She plans to till the land only once, when she plants the grapes, to avoid releasing stored carbon back into the atmosphere.
Managed grazing and compost application are just two of the 35 practices that the Natural Resources Conservation Service recommends for carbon sequestration. All 35 methods have been proven to sequester carbon, though some are better documented than others.
David Lewis, director of the University of California Cooperative Extension, says the techniques Calhoun uses, as well as stream restoration, are some of the most common. Lewis has worked with the Marin Carbon Project, a collaboration of researchers, ranchers, and policy makers, to study and implement carbon farming in Marin County, California. The research has been promising: They found that one application of compost doubled the production of grass and increased carbon sequestration by up to 70 percent. Similarly, stream and river ecosystems, which harbor lots of dense, woody vegetation, can sequester up to one ton of carbon, or as much as a car emits in a year, in just a few feet along their beds.
On his ranch, Poncia has replanted five miles of streams with native shrubs and trees, and has applied compost to all of his 800 acres of pasture. The compost-fortified grasses are more productive and have allowed him to double the number of cattle his land supports. This has had financial benefits. Ten years ago, Poncia was selling veterinary pharmaceuticals to subsidize his ranch. But, with the increase in cattle, he has been able to take up ranching full time. Plus, his ranch sequesters the same amount of carbon each year as is emitted by 81 cars.
Much of the research on carbon farming focuses on rangelands, which are open grasslands, because they make up such a large portion of ecosystems across the planet. They are also, after all, where we grow a vast majority of our food.
“Many of the skeptics of carbon farming think we should be planting forests instead,” Poncia said. “I think forests are a no-brainer, but there are millions of acres of rangelands across the globe and they are not sequestering as much carbon as they could be.”
The potential of carbon farming lies in wide-scale implementation. The Carbon Cycle Institute, which grew out of the Marin Carbon Project with the ambition of applying the research and lessons to other communities in California and nationally, is taking up that task.
“It really all comes back to this,” said Torri Estrada, pointing to a messy white board with the words SOIL CARBON scrawled in big letters. Estrada is managing director of the Carbon Cycle Institute, where he is working to attract more ranchers and farmers to carbon farming. The white board maps the intricate web of organizations and strategies the institute works with. They provide technical assistance and resources to support land stewards in making the transition.
For interested stewards, implementation and its associated costs vary. It could be as simple as a one-time compost application or as intensive as a lifetime of managing different techniques. But for all, the process starts by first assessing a land’s sequestration potential and deciding which techniques fit a steward’s budget and goals. COMET-Farm, an online tool produced by the U.S. Department of Agriculture, can help estimate a ranch’s carbon input and output.
The institute also works with state and national policy makers to provide economic incentives for these practices. “If the U.S. government would buy carbon credits from farmers, we would produce them,” Poncia said. These credits are one way the government could pay farmers to mitigate climate change. “Farmers overproduce everything. So, if they can fund that, we will produce them,” he said. While he is already sequestering carbon, Poncia says that he could do more, given the funding.
Estrada sees the bigger potential of carbon farming to help spur a more fundamental conversation about how we relate to the land. “We’re sitting down with ranchers and having a conversation, and carbon is just the medium for that,” he said. Through this work, Estrada has watched ranchers take a more holistic approach to their management.
On his ranch, Poncia has shifted from thinking about himself as a grass farmer growing feed for his cattle to a soil farmer with the goal of increasing the amount of life in every inch of soil.
Sunday, April 17th was the designated moment. The world’s leading oil producers were expected to bring fresh discipline to the chaotic petroleum market and spark a return to high prices. Meeting in Doha, the glittering capital of petroleum-rich Qatar, the oil ministers of the Organization of the Petroleum Exporting Countries (OPEC), along with such key non-OPEC producers as Russia and Mexico, were scheduled to ratify a draft agreement obliging them to freeze their oil output at current levels. In anticipation of such a deal, oil prices had begun to creep inexorably upward, from $30 per barrel in mid-January to $43 on the eve of the gathering. But far from restoring the old oil order, the meeting ended in discord, driving prices down again and revealing deep cracks in the ranks of global energy producers.
It is hard to overstate the significance of the Doha debacle. At the very least, it will perpetuate the low oil prices that have plagued the industry for the past two years, forcing smaller firms into bankruptcy and erasing hundreds of billions of dollars of investments in new production capacity. It may also have obliterated any future prospects for cooperation between OPEC and non-OPEC producers in regulating the market. Most of all, however, it demonstrated that the petroleum-fueled world we’ve known these last decades — with oil demand always thrusting ahead of supply, ensuring steady profits for all major producers — is no more. Replacing it is an anemic, possibly even declining, demand for oil that is likely to force suppliers to fight one another for ever-diminishing market shares.
The Road to Doha
Before the Doha gathering, the leaders of the major producing countries expressed confidence that a production freeze would finally halt the devastating slump in oil prices that began in mid-2014. Most of them are heavily dependent on petroleum exports to finance their governments and keep restiveness among their populaces at bay. Both Russia and Venezuela, for instance, rely on energy exports for approximately 50% of government income, while for Nigeria it’s more like 75%. So the plunge in prices had already cut deep into government spending around the world, causing civil unrest and even in some cases political turmoil.
No one expected the April 17th meeting to result in an immediate, dramatic price upturn, but everyone hoped that it would lay the foundation for a steady rise in the coming months. The leaders of these countries were well aware of one thing: to achieve such progress, unity was crucial. Otherwise they were not likely to overcome the various factors that had caused the price collapse in the first place. Some of these were structural and embedded deep in the way the industry had been organized; some were the product of their own feckless responses to the crisis.
On the structural side, global demand for energy had, in recent years, ceased to rise quickly enough to soak up all the crude oil pouring onto the market, thanks in part to new supplies from Iraq and especially from the expanding shale fields of the United States. This oversupply triggered the initial 2014 price drop when Brent crude — the international benchmark blend — went from a high of $115 on June 19th to $77 on November 26th, the day before a fateful OPEC meeting in Vienna. The next day, OPEC members, led by Saudi Arabia, failed to agree on either production cuts or a freeze, and the price of oil went into freefall.
The failure of that November meeting has been widely attributed to the Saudis’ desire to kill off new output elsewhere — especially shale production in the United States — and to restore their historic dominance of the global oil market. Many analysts were also convinced that Riyadh was seeking to punish regional rivals Iran and Russia for their support of the Assad regime in Syria (which the Saudis seek to topple).
The rejection, in other words, was meant to fulfill two tasks at the same time: blunt or wipe out the challenge posed by North American shale producers and undermine two economically shaky energy powers that opposed Saudi goals in the Middle East by depriving them of much needed oil revenues. Because Saudi Arabia could produce oil so much more cheaply than other countries — for as little as $3 per barrel — and because it could draw upon hundreds of billions of dollars in sovereign wealth funds to meet any budget shortfalls of its own, its leaders believed it more capable of weathering any price downturn than its rivals. Today, however, that rosy prediction is looking grimmer as the Saudi royals begin to feel the pinch of low oil prices, and find themselves cutting back on the benefits they had been passing on to an ever-growing, potentially restive population while still financing a costly, inconclusive, and increasingly disastrous war in Yemen.
Many energy analysts became convinced that Doha would prove the decisive moment when Riyadh would finally be amenable to a production freeze. Just days before the conference, participants expressed growing confidence that such a plan would indeed be adopted. After all, preliminary negotiations between Russia, Venezuela, Qatar, and Saudi Arabia had produced a draft document that most participants assumed was essentially ready for signature. The only sticking point: the nature of Iran’s participation.
The Iranians were, in fact, agreeable to such a freeze, but only after they were allowed to raise their relatively modest daily output to levels achieved in 2012 before the West imposed sanctions in an effort to force Tehran to agree to dismantle its nuclear enrichment program. Now that those sanctions were, in fact, being lifted as a result of the recently concluded nuclear deal, Tehran was determined to restore the status quo ante. On this, the Saudis balked, having no wish to see their arch-rival obtain added oil revenues. Still, most observers assumed that, in the end, Riyadh would agree to a formula allowing Iran some increase before a freeze. “There are positive indications an agreement will be reached during this meeting… an initial agreement on freezing production,” said Nawal Al-Fuzaia, Kuwait’s OPEC representative, echoing the views of other Doha participants.
But then something happened. According to people familiar with the sequence of events, Saudi Arabia’s Deputy Crown Prince and key oil strategist, Mohammed bin Salman, called the Saudi delegation in Doha at 3:00 a.m. on April 17th and instructed them to spurn a deal that provided leeway of any sort for Iran. When the Iranians — who chose not to attend the meeting — signaled that they had no intention of freezing their output to satisfy their rivals, the Saudis rejected the draft agreement they had helped negotiate and the assembly ended in disarray.
Geopolitics to the Fore
Most analysts have since suggested that the Saudi royals simply considered punishing Iran more important than raising oil prices. No matter the cost to them, in other words, they could not bring themselves to help Iran pursue its geopolitical objectives, including giving yet more support to Shiite forces in Iraq, Syria, Yemen, and Lebanon. Already feeling pressured by Tehran and ever less confident of Washington’s support, they were ready to use any means available to weaken the Iranians, whatever the danger to themselves.
“The failure to reach an agreement in Doha is a reminder that Saudi Arabia is in no mood to do Iran any favors right now and that their ongoing geopolitical conflict cannot be discounted as an element of the current Saudi oil policy,” said Jason Bordoff of the Center on Global Energy Policy at Columbia University.
Many analysts also pointed to the rising influence of Deputy Crown Prince Mohammed bin Salman, entrusted with near-total control of the economy and the military by his aging father, King Salman. As Minister of Defense, the prince has spearheaded the Saudi drive to counter the Iranians in a regional struggle for dominance. Most significantly, he is the main force behind Saudi Arabia’s ongoing intervention in Yemen, aimed at defeating the Houthi rebels, a largely Shia group with loose ties to Iran, and restoring deposed former president Abd Rabbuh Mansur Hadi. After a year of relentless U.S.-backed airstrikes (including the use of cluster bombs), the Saudi intervention has, in fact, failed to achieve its intended objectives, though it has produced thousands of civilian casualties, provoking fierce condemnation from U.N. officials, and created space for the rise of al-Qaeda in the Arabian Peninsula. Nevertheless, the prince seems determined to keep the conflict going and to counter Iranian influence across the region.
For Prince Mohammed, the oil market has evidently become just another arena for this ongoing struggle. “Under his guidance,” the Financial Times noted in April, “Saudi Arabia’s oil policy appears to be less driven by the price of crude than global politics, particularly Riyadh’s bitter rivalry with post-sanctions Tehran.” This seems to have been the backstory for Riyadh’s last-minute decision to scuttle the talks in Doha. On April 16th, for instance, Prince Mohammed couldn’t have been blunter to Bloomberg, even if he didn’t mention the Iranians by name: “If all major producers don’t freeze production, we will not freeze production.”
With the proposed agreement in tatters, Saudi Arabia is now expected to boost its own output, ensuring that prices will remain bargain-basement low and so deprive Iran of any windfall from its expected increase in exports. The kingdom, Prince Mohammed told Bloomberg, was prepared to immediately raise production from its current 10.2 million barrels per day to 11.5 million barrels and could add another million barrels “if we wanted to” in the next six to nine months. With Iranian and Iraqi oil heading for market in larger quantities, that’s the definition of oversupply. It would certainly ensure Saudi Arabia’s continued dominance of the market, but it might also wound the kingdom in a major way, if not fatally.
A New Global Reality
No doubt geopolitics played a significant role in the Saudi decision, but that’s hardly the whole story. Overshadowing discussions about a possible production freeze was a new fact of life for the oil industry: the past would be no predictor of the future when it came to global oil demand. Whatever the Saudis think of the Iranians or vice versa, their industry is being fundamentally transformed, altering relationships among the major producers and eroding their inclination to cooperate.
Until very recently, it was assumed that the demand for oil would continue to expand indefinitely, creating space for multiple producers to enter the market, and for ones already in it to increase their output. Even when supply outran demand and drove prices down, as has periodically occurred, producers could always take solace in the knowledge that, as in the past, demand would eventually rebound, jacking prices up again. Under such circumstances and at such a moment, it was just good sense for individual producers to cooperate in lowering output, knowing that everyone would benefit sooner or later from the inevitable price increase.
But what happens if confidence in the eventual resurgence of demand begins to wither? Then the incentives to cooperate begin to evaporate, too, and it’s every producer for itself in a mad scramble to protect market share. This new reality — a world in which “peak oil demand,” rather than “peak oil,” will shape the consciousness of major players — is what the Doha catastrophe foreshadowed.
At the beginning of this century, many energy analysts were convinced that we were on the verge of “peak oil”: a peak, that is, in the output of petroleum, with planetary reserves exhausted long before the demand for oil disappeared, triggering a global economic crisis. As a result of advances in drilling technology, however, the supply of oil has continued to grow, while demand has unexpectedly begun to stall. This can be traced both to slowing economic growth globally and to an accelerating “green revolution” in which the planet transitions to non-carbon fuel sources. With most nations now committed to measures aimed at reducing emissions of greenhouse gases under the just-signed Paris climate accord, the demand for oil is likely to experience significant declines in the years ahead. In other words, global oil demand will peak long before supplies begin to run low, creating a monumental challenge for the oil-producing countries.
This is no theoretical construct. It’s reality itself. Net consumption of oil in the advanced industrialized nations has already dropped from 50 million barrels per day in 2005 to 45 million barrels in 2014. Further declines are in store as strict fuel efficiency standards for the production of new vehicles and other climate-related measures take effect, the price of solar and wind power continues to fall, and other alternative energy sources come on line. While the demand for oil does continue to rise in the developing world, even there it’s not climbing at rates previously taken for granted. With such countries also beginning to impose tougher constraints on carbon emissions, global consumption is expected to reach a peak and begin an inexorable decline. According to experts Thijs Van de Graaf and Aviel Verbruggen, overall world peak demand could be reached as early as 2020.
In such a world, high-cost oil producers will be driven out of the market and the advantage — such as it is — will lie with the lowest-cost ones. Countries that depend on petroleum exports for a large share of their revenues will come under increasing pressure to move away from excessive reliance on oil. This may have been another consideration in the Saudi decision at Doha. In the months leading up to the April meeting, senior Saudi officials dropped hints that they were beginning to plan for a post-petroleum era and that Deputy Crown Prince bin Salman would play a key role in overseeing the transition.
On April 1st, the prince himself indicated that steps were underway to begin this process. As part of the effort, he announced, he was planning an initial public offering of shares in state-owned Saudi Aramco, the world’s number one oil producer, and would transfer the proceeds, an estimated $2 trillion, to its Public Investment Fund (PIF). “IPOing Aramco and transferring its shares to PIF will technically make investments the source of Saudi government revenue, not oil,” the prince pointed out. “What is left now is to diversify investments. So within 20 years, we will be an economy or state that doesn’t depend mainly on oil.”
For a country that more than any other has rested its claim to wealth and power on the production and sale of petroleum, this is a revolutionary statement. If Saudi Arabia says it is ready to begin a move away from reliance on petroleum, we are indeed entering a new world in which, among other things, the titans of oil production will no longer hold sway over our lives as they have in the past.
This, in fact, appears to be the outlook adopted by Prince Mohammed in the wake of the Doha debacle. In announcing the kingdom’s new economic blueprint on April 25th, he vowed to liberate the country from its “addiction” to oil. This will not, of course, be easy to achieve, given the kingdom’s heavy reliance on oil revenues and lack of plausible alternatives. The 30-year-old prince could also face opposition from within the royal family to his audacious moves (as well as his blundering ones in Yemen and possibly elsewhere). Whatever the fate of the Saudi royals, however, if predictions of a future peak in world oil demand prove accurate, the debacle in Doha will be seen as marking the beginning of the end of the old oil order.
Michael T. Klare, a TomDispatch regular, is a professor of peace and world security studies at Hampshire College and the author, most recently, of The Race for What’s Left. A documentary movie version of his book Blood and Oil is available from the Media Education Foundation. Follow him on Twitter at @mklare1.
Copyright 2016 Michael T. Klare
Near Misses, Radioactive Leaks, and Flooding
In both Chernobyl and Fukushima, areas around the devastated plants were made uninhabitable for the foreseeable future. In neither place, before disaster began to unfold, was anyone expecting it and few imagined that such a catastrophe was possible. In the United States, too, despite the knowledge since 1945 that nuclear power, at war or in peacetime, holds dangers of a stunning sort, the general attitude remains: it can’t happen here — nowhere more dangerously in recent years than on the banks of New York’s Hudson River, an area that could face a nuclear peril endangering a population of nearly 20 million.
As the Fukushima tragedy struck, President Obama assured Americans that U.S. nuclear plants were closely monitored and built to withstand earthquakes. That statement covered one of the oldest plants in the country, the Indian Point Energy Center (IPEC) in Westchester, New York, first opened in 1962. One of 61 commercial nuclear plants in the country, it has two reactors that generate electricity for homes across New York City and Westchester County. It is located in the sixth most densely populated urban area in the world, the New York metropolitan region, just 30 miles north of Manhattan Island and the planet’s most economically powerful city.
The plant sits astride two seismic faults, which has prompted those opposing its continued operation to call for a detailed analysis of its capacity to resist an earthquake. In addition, a long series of accidents and ongoing hazards has only increased the potential for catastrophe. According to a report by the Natural Resources Defense Council (NRDC), if a nuclear disaster of a Fukushima magnitude were to strike Indian Point, it would necessitate the evacuation of at least 5.6 million people. In 2003, the existing evacuation plan for the area was deemed inadequate in a report by James Lee Witt, former head of the Federal Emergency Management Agency.
American officials have urged U.S. citizens to stay 50 miles away from the Fukushima plant. Such a 50-mile circle around IPEC would stretch past Kingston in Ulster County to the north, past Bayonne and Jersey City to the south, almost to New Haven, Connecticut, to the east, and into Pennsylvania to the west. It would include all of New York City except for Staten Island and all of Fairfield, Connecticut. “Many scholars have already argued that any evacuation plans shouldn’t be called plans, but rather ‘fantasy documents,’” Daniel Aldrich, a professor of political science at Purdue University, told the New York Times.
Paul Blanch, a nuclear engineer who worked in the industry for 40 years as well as with the Nuclear Regulatory Commission (NRC), thinks a worst-case accident at Indian Point could make the region, including parts of Connecticut, uninhabitable for generations.
According to a report from the Indian Point Safe Energy Coalition, there were 23 reported problems at the plant from its inception to 2005, including steam generator tube ruptures, reactor containment flooding, transformer fires, the failure of backup power for emergency sirens, and leaks of radioactive water laced with tritium. In the latest tritium leak, reported only last month, an outflow of the radioactive isotope from the plant has infused both local groundwater and the Hudson River. (Other U.S. nuclear plants have had their share of tritium leaks as well, including Turkey Point nuclear plant in Florida where such a leak is at the moment threatening drinking water wells.)
Experts agree that although present levels of tritium in groundwater near the plant are “alarming,” the tritium in the river will not be considered harmful until it reaches a far greater concentration of 120,000 picocuries per liter of water. (A picocurie is a standard unit of measurement for radioactivity.) Tritium is the lightest radioactive substance to leak from Indian Point, but according to an assessment by the New York Department of State, other potentially more dangerous radioactive elements like strontium-90, cesium-137, cobalt-60, and nickel-63 are also escaping the plant and entering both the groundwater and the river.
Representatives of Entergy Corporation, which owns the Indian Point plant, report that they don’t know when the present leak began or what its source might be. “No one has made a statement as to when the leak started,” wrote Paul Blanch in an email to us. “It could have started two years ago.” Nor does anyone seem to know where the leak is, how much radioactive matter is leaking, or how it can be stopped. The longer the leak persists, the greater the likelihood of isotopes more potent than tritium contaminating local drinking water.
According to David Lochbaum, director of the Nuclear Safety Project for the Union of Concerned Scientists (UCS) and once a trainer for NRC inspectors, the danger of flooding at the reactor should be an even greater focus of concern than radioactive substance outflows, since it could result in a reactor core meltdown. Yet despite repeated calls for Indian Point’s shutdown from the early 1970s on, it keeps operating.
On April 2, 2000, the NRC rated one of Indian Point’s two reactors the most troubled in the country, and it has been closed for lengthy periods because of system failures of various sorts. This, it turns out, is typical of Entergy-owned reactors. There were 10 “near-miss” incidents at U.S. nuclear reactors last year, a majority of them at three Entergy plants, according to a UCS report on nuclear plant safety. A near-miss incident is an event or condition that could increase the chance of reactor core damage by a factor of 10 or more; in response to each such incident, the NRC must send an inspection team to investigate.
The number of such incidents has declined since UCS initiated its annual review in 2010, “overall, a positive trend,” according to report author Lochbaum. “Five years ago, there were nearly twice as many near misses. That said, the nuclear industry is only as good as its worst plant owner. The NRC needs to find out why Entergy plants are experiencing so many potentially serious problems.” Upstate New York’s Ginna plant, he adds, has been operating as long as Indian Point, but with only two “events” in its history. At Indian Point “there’s a major event every two to three years.”
What troubles Lochbaum more than anything else is Indian Point’s vulnerability to flooding. “There was a problem in May 2015 where a transformer exploded,” he told us. “There was an automatic fire sprinkler system installed to put this out. But it ended up flooding the building adjacent to where the explosion had taken place. Fortunately a worker noticed that an inch or two of water had accumulated. If the room had flooded up to five inches, all the power in the plant would have been lost. It would have plunged unit 3 into a ‘station blackout.’”
This might indeed have led to some kind of Fukushima-on-the-Hudson situation. In Fukushima, after the earthquake wiped out the normal power supply and tsunami floodwaters took away the backup supply, workers were unable to get cooling water into the reactor cores and three of the plant’s six reactors melted down.
In 2007, when Indian Point’s plant owner applied to the NRC for a 20-year extension of the plant’s operating license, it was found that a flood alarm could be installed in the room in question for about $200,000. As Lochbaum explains, “The owner determined it was cost-beneficial, that if they installed this flood alarm… it [would reduce] the risk of core meltdown by 20%, and [reduce] the amount of radiation that people on the plant could be exposed to by about 40%, at a cost of about two cents per person for the 20 million people living within 50 miles of the plant.” But nine years later, he told us, that flood alarm has still not been installed.
Potential Pipeline Explosions
As if none of this were enough, a new set of dangers to Indian Point has arisen in recent years due to a high-pressure natural gas pipeline currently being built by Spectra Energy. Dubbed the Algonquin Incremental Market (AIM) pipeline, it is to carry fracked natural gas from the Marcellus Shale formation underlying New York and adjacent states to the Canadian border. At 42 inches in diameter, this pipeline is the biggest that can at present be built — and here’s the catch: AIM is slated to pass within 150 feet of the plant’s reactors.
A former Spectra worker hired to help oversee safety during the pipeline’s construction told a reporter that the company had taken dangerous shortcuts in its rush to begin the project. He had witnessed, he said, “at least two dozen” serious safety violations and transgressions.
Taking shortcuts in pipeline construction could, in the end, prove a risky business. Pipeline ruptures are the commonest cause of gas explosions like the one that, in March 2014 in Manhattan’s East Harlem, killed eight, injured 70, and leveled two apartment buildings. Robert Miller, chairman of the National Association of Pipeline Safety Representatives, attributed the rising rates of such incidents in newly constructed pipelines to “poor construction practices or maybe not enough quality control, quality assurance programs out there to catch these problems before those pipelines go into service.”
In January 2015, the National Transportation Safety Board published a study documenting that gas accidents in “high-consequence” areas (where there are a lot of people and buildings) have been on the rise. With the New York metropolitan area so close to Indian Point, it seems odd indeed to independent experts that the nuclear plant with the sorriest safety history in the country has been judged safe enough for a high-pressure gas pipeline to be run right by it.
A hazards assessment replete with errors was the basis for the go-ahead. Richard B. Kuprewicz, a pipeline infrastructure expert and incident investigator with more than 40 years of energy industry experience, has called that risk assessment “seriously deficient and inadequate.”
At another nuclear plant subsequently shut down, as David Lochbaum points out, a rigorous risk analysis was conducted for possible explosions based on a worst-case scenario. (“I couldn’t think of any scenario that would be worse than what they presumed.”) At Indian Point, the risk analysis was, however, done on a best-case basis. Among other things, it assumed that any pipeline leak around the plant could be stopped in less than three minutes — an unlikelihood at best. “It’s night and day. They did a very conservative analysis for [the other plant] and a very cavalier best-case scenario for Indian Point… I don’t know why they opted for [this] drive-by analysis.”
Of all the contaminants released in this industrial world, radioactivity may, in a sense, be the least visible and least imaginable, even if the most potentially devastating, were something to go wrong. As a result, the dangers of the “peaceful” atom have often proved hard to absorb before disaster strikes — as at the Three Mile Island reactor near Middletown, Pennsylvania, on March 28, 1979. Even when such a power plant sits near a highway or a community, it’s usually a reality to which people pay scant attention, in part because nuclear science is alien territory. This is why safety at nuclear power plants has been something citizens have relied on the government for.
The history of Indian Point, however, offers a grim reminder that the government agencies expected to protect citizens from disaster aren’t doing a particularly good job of it. Over the past several years, for instance, residents in the path of the AIM pipeline project have begun accusing the Federal Energy Regulatory Commission (FERC) of overwhelming bias in the industry’s favor. As FERC has a corner on oversight and approval of all pipeline construction, this is alarming. Its stamp of approval on a pipeline can only be contested via appeals that lead directly back to FERC itself, as the Natural Gas Act of 1938 gave the agency sole discretion over pipeline construction in the U.S. Ever since then, its officials have approved pipelines of every sort almost without exception. Worse yet, at Indian Point, the Nuclear Regulatory Commission joined FERC in green-lighting AIM.
During the two-and-a-half-year period in which the pipeline was approved and construction began, the mainstream media virtually ignored the project and its potential dangers. Only this February, when New York Governor Andrew Cuomo, who has been opposed to the relicensing of Indian Point, first raised concerns about the dangers of the pipeline, did the New York Times, the paper of record for the New York metropolitan area, finally publish a piece on AIM. So it fell to a grassroots movement of local activists to bring AIM’s dangers to public attention. Its growing resistance to a pipeline that could precipitate just about anything up to a Fukushima-on-the-Hudson-style event evidently led Governor Cuomo to urge FERC to postpone construction until a safety review could be completed, a request that the agency rejected. In February, alarmed by reports of tritium leaking from the plant, the governor also directed the state’s departments of environmental conservation and health to investigate the likely duration and consequences of such a leak and its potential impacts on public health.
According to Paul Blanch, the risk of a pipeline explosion in proximity to Indian Point is one in 1,000, odds he believes are too high given what’s potentially at stake. (He considers a one-in-a-million chance acceptable.) “I’ve had over 45 years of nuclear experience and [experience in] safety issues. I have never seen [a situation] that essentially puts 20 million residents at risk, plus the entire economics of the United States by making a large area surrounding Indian Point uninhabitable for generations. I’m not an alarmist and haven’t been known as an alarmist, but the possibility of a gas line interacting with a plant could easily cause a Fukushima type of release.”
According to Blanch, attempts to regulate nuclear plants after a Fukushima- or Chernobyl-type catastrophe are known in the trade as “tombstone regulation.” Nobody, of course, should ever want to experience such a situation on the Hudson, or have America’s own mini-Hiroshima seven decades late, or find literal tombstones cropping up in the New York metropolitan area due to a nuclear disaster. One hope for preventing all of this and ensuring protection for New York’s citizenry: the continuing growth of impressive citizen pressure and increasing public alarm around both the pipeline and Indian Point. It gives new meaning to the phrase “power to the people.”
TomDispatch regular Ellen Cantarow reported on Israel and the West Bank from 1979 to 2009 for the Village Voice, Mother Jones, Inquiry, and Grand Street, among other publications. For the past five years she has been writing about the environmental ravages of the oil and gas industries.
Alison Rose Levy is a New York-based journalist who covers the nexus of health, science, the environment, and public policy. She has reported on fracking, pipelines, the Trans-Pacific Partnership, chemical pollution, and the health impacts of industrial activity for the Huffington Post, Alternet, Truthdig, and EcoWatch.
Copyright 2016 Ellen Cantarow and Alison Rose Levy
By Thomas Frank, author of the just-published Listen, Liberal, or What Ever Happened to the Party of the People? (Metropolitan Books) from which this essay is adapted. He has also written Pity the Billionaire, The Wrecking Crew, and What’s the Matter With Kansas? among other works. He is the founding editor of The Baffler. Reprinted with permission from Tomdispatch.com
When you press Democrats on their uninspiring deeds — their lousy free trade deals, for example, or their flaccid response to Wall Street misbehavior — when you press them on any of these things, they automatically reply that this is the best anyone could have done. After all, they had to deal with those awful Republicans, and those awful Republicans wouldn’t let the really good stuff get through. They filibustered in the Senate. They gerrymandered the congressional districts. And besides, change takes a long time. Surely you don’t think the tepid-to-lukewarm things Bill Clinton and Barack Obama have done in Washington really represent the fiery Democratic soul.
So let’s go to a place that does. Let’s choose a locale where Democratic rule is virtually unopposed, a place where Republican obstruction and sabotage can’t taint the experiment.
Let’s go to Boston, Massachusetts, the spiritual homeland of the professional class and a place where the ideology of modern liberalism has been permitted to grow and flourish without challenge or restraint. As the seat of American higher learning, Boston unsurprisingly anchors one of the most Democratic of states, a place where elected Republicans (like the new governor) are highly unusual. This is the city that virtually invented the blue-state economic model, in which prosperity arises from higher education and the knowledge-based industries that surround it.
The coming of post-industrial society has treated this most ancient of American cities extremely well. Massachusetts routinely occupies the number one spot on the State New Economy Index, a measure of how “knowledge-based, globalized, entrepreneurial, IT-driven, and innovation-based” a place happens to be. Boston ranks high on many of Richard Florida’s statistical indices of approbation — in 2003, it was number one on the “creative class index,” number three in innovation and in high tech — and his many books marvel at the city’s concentration of venture capital, its allure to young people, or the time it enticed some firm away from some unenlightened locale in the hinterlands.
Boston’s knowledge economy is the best, and it is the oldest. Boston’s metro area encompasses some 85 private colleges and universities, the greatest concentration of higher-ed institutions in the country — probably in the world. The region has all the ancillary advantages to show for this: a highly educated population, an unusually large number of patents, and more Nobel laureates than any other city in the country.
The city’s Route 128 corridor was the original model for a suburban tech district, lined ever since it was built with defense contractors and computer manufacturers. The suburbs situated along this golden thoroughfare are among the wealthiest municipalities in the nation, populated by engineers, lawyers, and aerospace workers. Their public schools are excellent, their downtowns are cute, and back in the seventies their socially enlightened residents were the prototype for the figure of the “suburban liberal.”
Another prototype: the Massachusetts Institute of Technology, situated in Cambridge, is where our modern conception of the university as an incubator for business enterprises began. According to a report on MIT’s achievements in this category, the school’s alumni have started nearly 26,000 companies over the years, including Intel, Hewlett Packard, and Qualcomm. If you were to take those 26,000 companies as a separate nation, the report tells us, its economy would be one of the most productive in the world.
Then there are Boston’s many biotech and pharmaceutical concerns, grouped together in what is known as the “life sciences super cluster,” which, properly understood, is part of an “ecosystem” in which PhDs can “partner” with venture capitalists and in which big pharmaceutical firms can acquire small ones. While other industries shrivel, the Boston super cluster grows, with the life-sciences professionals of the world lighting out for the Athens of America and the massive new “innovation centers” shoehorning themselves one after the other into the crowded academic suburb of Cambridge.
To think about it slightly more critically, Boston is the headquarters for two industries that are steadily bankrupting middle America: big learning and big medicine, both of them imposing costs that everyone else is basically required to pay and which increase at a far more rapid pace than wages or inflation. A thousand dollars a pill, 30 grand a semester: the debts that are gradually choking the life out of people where you live are what has made this city so very rich.
Perhaps it makes sense, then, that another category in which Massachusetts ranks highly is inequality. Once the visitor leaves the brainy bustle of Boston, he discovers that this state is filled with wreckage — with former manufacturing towns in which workers watch their way of life draining away, and with cities that are little more than warehouses for people on Medicare. According to one survey, Massachusetts has the eighth-worst rate of income inequality among the states; by another metric it ranks fourth. However you choose to measure the diverging fortunes of the country’s top 10% and the rest, Massachusetts always seems to finish among the nation’s most unequal places.
Seething City on a Cliff
You can see what I mean when you visit Fall River, an old mill town 50 miles south of Boston. Median household income in that city is $33,000, among the lowest in the state; unemployment is among the highest, 15% in March 2014, nearly five years after the recession ended. Twenty-three percent of Fall River’s inhabitants live in poverty. The city lost its many fabric-making concerns decades ago and with them it lost its reason for being. People have been deserting the place for decades.
Many of the empty factories in which their ancestors worked are still standing, however. Solid nineteenth-century structures of granite or brick, these huge boxes dominate the city visually — there always seems to be one or two of them in the vista, contrasting painfully with whatever colorful plastic fast-food joint has been slapped up next door.
Most of the old factories are boarded up, unmistakable emblems of hopelessness right up to the roof. But the ones that have been successfully repurposed are in some ways even worse, filled as they often are with enterprises offering cheap suits or help with drug addiction. A clinic in the hulk of one abandoned mill has a sign on the window reading simply “Cancer & Blood.”
The effect of all this is to remind you with every prospect that this is a place and a way of life from which the politicians have withdrawn their blessing. Like so many other American scenes, this one is the product of decades of deindustrialization, engineered by Republicans and rationalized by Democrats. This is a place where affluence never returns — not because affluence for Fall River is impossible or unimaginable, but because our country’s leaders have blandly accepted a social order that constantly bids down the wages of people like these while bidding up the rewards for innovators, creatives, and professionals.
Even the city’s one real hope for new employment opportunities — an Amazon warehouse that is now in the planning stages — will serve to lock in this relationship. If all goes according to plan, and if Amazon sticks to the practices it has pioneered elsewhere, people from Fall River will one day get to do exhausting work with few benefits while being electronically monitored for efficiency, in order to save the affluent customers of nearby Boston a few pennies when they buy books or electronics.
But that is all in the future. These days, the local newspaper publishes an endless stream of stories about drug arrests, shootings, drunk-driving crashes, the stupidity of local politicians, and the lamentable surplus of “affordable housing.” The town is up to its eyeballs in wrathful bitterness against public workers. As in: Why do they deserve a decent life when the rest of us have no chance at all? It’s every man for himself here in a “competition for crumbs,” as a Fall River friend puts it.
The Great Entrepreneurial Awakening
If Fall River is pocked with empty mills, the streets of Boston are dotted with facilities intended to make innovation and entrepreneurship easy and convenient. I was surprised to discover, during the time I spent exploring the city’s political landscape, that Boston boasts a full-blown Innovation District, a disused industrial neighborhood that has actually been zoned creative — a projection of the post-industrial blue-state ideal onto the urban grid itself. The heart of the neighborhood is a building called “District Hall” — “Boston’s New Home for Innovation” — which appeared to me to be a glorified multipurpose room, enclosed in a sharply angular façade, and sharing a roof with a restaurant that offers “inventive cuisine for innovative people.” The Wi-Fi was free, the screens on the walls displayed famous quotations about creativity, and the walls themselves were covered with a high-gloss finish meant to be written on with dry-erase markers; but otherwise it was not much different from an ordinary public library. Aside from not having anything to read, that is.
This was my introduction to the innovation infrastructure of the city, much of it built up by entrepreneurs shrewdly angling to grab a piece of the entrepreneur craze. There are “co-working” spaces, shared offices for startups that can’t afford the real thing. There are startup “incubators” and startup “accelerators,” which aim to ease the innovator’s eternal struggle with an uncaring public: the Startup Institute, for example, and the famous MassChallenge, the “World’s Largest Startup Accelerator,” which runs an annual competition for new companies and hands out prizes at the end.
And then there are the innovation Democrats, led by former Governor Deval Patrick, who presided over the Massachusetts government from 2007 to 2015. He is typical of liberal-class leaders; you might even say he is their most successful exemplar. Everyone seems to like him, even his opponents. He is a witty and affable public speaker as well as a man of competence, a highly educated technocrat who is comfortable in corporate surroundings. Thanks to his upbringing in a Chicago housing project, he also understands the plight of the poor, and (perhaps best of all) he is an honest politician in a state accustomed to wide-open corruption. Patrick was also the first black governor of Massachusetts and, in some ways, an ideal Democrat for the era of Barack Obama — who, as it happens, is one of his closest political allies.
As governor, Patrick became a kind of missionary for the innovation cult. “The Massachusetts economy is an innovation economy,” he liked to declare, and he made similar comments countless times, slightly varying the order of the optimistic keywords: “Innovation is a centerpiece of the Massachusetts economy,” et cetera. The governor opened “innovation schools,” a species of ramped-up charter school. He signed the “Social Innovation Compact,” which had something to do with meeting “the private sector’s need for skilled entry-level professional talent.” In a 2009 speech called “The Innovation Economy,” Patrick elaborated the political theory of innovation in greater detail, telling an audience of corporate types in Silicon Valley about Massachusetts’s “high concentration of brainpower” and “world-class” universities, and how “we in government are actively partnering with the private sector and the universities, to strengthen our innovation industries.”
What did all of this inno-talk mean? Much of the time, it was pure applesauce — standard-issue platitudes to be rolled out every time some pharmaceutical company opened an office building somewhere in the state.
On some occasions, Patrick’s favorite buzzword came with a gigantic price tag, like the billion dollars in subsidies and tax breaks that the governor authorized in 2008 to encourage pharmaceutical and biotech companies to do business in Massachusetts. On still other occasions, favoring inno has meant bulldozing the people in its path — for instance, the taxi drivers whose livelihoods are being usurped by ridesharing apps like Uber. When these workers staged a variety of protests in the Boston area, Patrick intervened decisively on the side of the distant software company. Apparently convenience for the people who ride in taxis was more important than good pay for people who drive those taxis. It probably didn’t hurt that Uber had hired a former Patrick aide as a lobbyist, but the real point was, of course, innovation: Uber was the future, the taxi drivers were the past, and the path for Massachusetts was obvious.
A short while later, Patrick became something of an innovator himself. After his time as governor came to an end last year, he won a job as a managing director of Bain Capital, the private equity firm that was founded by his predecessor Mitt Romney — and that had been so powerfully denounced by Democrats during the 2012 election. Patrick spoke about the job like it was just another startup: “It was a happy and timely coincidence I was interested in building a business that Bain was also interested in building,” he told the Wall Street Journal. Romney reportedly phoned him with congratulations.
At a 2014 celebration of Governor Patrick’s innovation leadership, Google’s Eric Schmidt announced that “if you want to solve the economic problems of the U.S., create more entrepreneurs.” That sort of sums up the ideology in this corporate commonwealth: Entrepreneurs first. But how has such a doctrine become holy writ in a party dedicated to the welfare of the common man? And how has all this come to pass in the liberal state of Massachusetts?
The answer is that I’ve got the wrong liberalism. The kind of liberalism that has dominated Massachusetts for the last few decades isn’t the stuff of Franklin Roosevelt or the United Auto Workers; it’s the Route 128/suburban-professionals variety. (Senator Elizabeth Warren is the great exception to this rule.) Professional-class liberals aren’t really alarmed by oversized rewards for society’s winners. On the contrary, this seems natural to them — because they are society’s winners. The liberalism of professionals just does not extend to matters of inequality; this is the area where soft hearts abruptly turn hard.
Innovation liberalism is “a liberalism of the rich,” to use the straightforward phrase of local labor leader Harris Gruman. This doctrine has no patience with the idea that everyone should share in society’s wealth. What Massachusetts liberals pine for, by and large, is a more perfect meritocracy — a system where the essential thing is to ensure that the truly talented get into the right schools and then get to rise through the ranks of society. Unfortunately, however, as the blue-state model makes painfully clear, there is no solidarity in a meritocracy. The ideology of educational achievement conveniently negates any esteem we might feel for the poorly graduated.
This is a curious phenomenon, is it not? A blue state where the Democrats maintain transparent connections to high finance and big pharma; where they have deliberately chosen distant software barons over working-class members of their own society; and where their chief economic proposals have to do with promoting “innovation,” a grand and promising idea that remains suspiciously vague. Nor can these innovation Democrats claim that their hands were forced by Republicans. They came up with this program all on their own.
Copyright 2016 Thomas Frank
The refuge’s creation helped support nearby ranchers.
Nancy Langston Feb. 2, 2016
Reprinted with permission from High Country News
National wildlife refuges such as the one at Malheur near Burns, Oregon, have importance far beyond the current furor over who manages our public lands. Such refuges are becoming increasingly critical habitat for migratory birds because 95 percent of the wetlands along the Pacific Flyway have already been lost to development.
In some years, 25 million birds visit Malheur, and if the refuge were drained and converted to intensive cattle grazing – which is something the “occupiers” threatened to do – entire populations of ducks, sandhill cranes, and shorebirds would suffer. With their long-distance flights and distinctive songs, the migratory birds visiting Malheur’s wetlands now help to tie the continent together.
This was not always the case. By the 1930s, three decades of drainage, reclamation, and drought had decimated high-desert wetlands and the birds that depended upon them. Out of the hundreds of thousands of egrets that once nested on Malheur Lake, only 121 remained. The American population of the birds had dropped by 95 percent. It took the federal government to restore Malheur’s wetlands and recover waterbird populations, bringing back healthy populations of egrets and many other species.
Yet despite the importance of wildlife refuges to America’s birds, not everyone appreciates them. At one recent news conference, Ammon Bundy called the creation of Malheur National Wildlife Refuge “an unconstitutional act” that removed ranchers from their lands and plunged the county into an economic depression. This is not a new complaint. Since the Sagebrush Rebellion of the 1980s, rural communities in the West have blamed their poverty on the 640 million acres of federal public lands, which make up 52 percent of the land in Western states.
Rural Western communities are indeed suffering, but the cause is not the wildlife refuge system. Conservation of bird habitat did not lead to economic devastation, nor were refuge lands “stolen” from ranchers. If any group has prior claims to Malheur refuge, it is the Paiute Indian Tribe.
For at least 6,000 years, Malheur was the Paiutes’ home. It took a brutal Army campaign to force the people from their reservation, marching them through the snow to the state of Washington in 1879. Homesteaders and cattle barons then moved onto Paiute lands, squeezing as much livestock as possible onto dwindling pastures, and warring with each other over whose land was whose. Scars from this era persist more than a century later.
In 1908, President Theodore Roosevelt established the Malheur Lake Bird Reservation on the lands of the former Malheur Indian Reservation. But the refuge included only the lake itself, not the rivers that fed into it. Deprived of water, the lake shrank during droughts, and squatters moved onto the drying lakebed. Conservationists, realizing they needed to protect the Blitzen River that fed the lake, began a campaign to expand the refuge.
That expansion came through purchase, not seizure: the federal government never forced the ranchers to sell, as the occupiers at Malheur claimed, and the sale did not impoverish the community. In fact, it was just the opposite. During the Depression years of the 1930s, the federal government paid the Swift Corp. $675,000 for its ruined grazing lands, and impoverished homesteaders who had squatted on refuge lands eventually received payments substantial enough to set them up as cattle ranchers nearby.
John Scharff, Malheur’s manager from 1935 to 1971, sought to transform local suspicion into acceptance by allowing local ranchers to graze cattle on the refuge. Yet some tension persisted. In the 1970s, when concern about overgrazing reduced – but did not eliminate – refuge grazing, violence erupted again. Some environmentalists denounced ranchers as parasites who destroyed wildlife habitat. A few ranchers responded with death threats against environmentalists and federal employees.
But violence is not the basin’s most important historical legacy. Through the decades, community members have come together to negotiate a better future. In the 1920s, poor homesteaders worked with conservationists to save the refuge from irrigation drainage. In the 1990s, Paiute tribal members, ranchers, environmentalists and federal agencies collaborated on innovative grazing plans to restore bird habitat while also giving ranchers more flexibility. In 2013, such efforts resulted in a landmark collaborative conservation plan for the refuge, and it offers great hope for the local economy and for wildlife.
The poet Gary Snyder wrote, “We must learn to know, love, and join our place even more than we love our own ideas. People who can agree that they share a commitment to the landscape – even if they are otherwise locked in struggle with each other – have at least one deep thing to share.”
Collaborative processes are difficult and time-consuming. Yet they have proven their potential to sustain both human and wildlife communities peacefully.
Nancy Langston is a contributor to Writers on the Range, the opinion service of High Country News. She is a professor of environmental history at Michigan Technological University, and the author of a history of Malheur Refuge, Where Land and Water Meet: A Western Landscape Transformed.
When our kids were little, we lived in an Eichler suburb in south Palo Alto. Every house on the block had a 6-to-7-foot fence around it. In the year that we lived there we rarely saw a neighbor. It was eerie.
One day, while mowing our lawn, I had a revelation: Our market system has a vested interest in our individual isolation, because this way — rather than sharing, say, lawnmowers among all the neighbors — we each buy our own lawnmower. Consumption is maximized by the destruction of community. In some weird way our market system depends on our isolation from one another, on the weakness of community.
Notice that this is — maybe — starting to change a bit now with the “sharing economy” … Airbnb, Uber, the mesh, waste as food, access not ownership, etc. But does the “sharing economy” really increase community, or merely find a new way to profit from the lack of it?
Various personal and civic pathologies are associated with the breakdown of communities … crime, mental illness, etc.
In the following article from Huffington Post, human isolation is now found to be at the root of addiction, and human connection — community — the key to healing it.
The experiment is simple. Put a rat in a cage, alone, with two water bottles. One is just water. The other is water laced with heroin or cocaine. Almost every time you run this experiment, the rat will become obsessed with the drugged water, and keep coming back for more and more, until it kills itself.
This experiment was the basis of a famous anti-drug advert from the 1980s. The advert explains: “Only one drug is so addictive, nine out of ten laboratory rats will use it. And use it. And use it. Until dead. It’s called cocaine. And it can do the same thing to you.”
But in the 1970s, a professor of psychology in Vancouver called Bruce Alexander noticed something odd about this experiment. The rat is put in the cage all alone. It has nothing to do but take the drugs. What would happen, he wondered, if we tried this differently? So Professor Alexander built Rat Park. It is a lush cage where the rats would have colored balls and the best rat food and tunnels to scamper down and plenty of friends: everything a rat about town could want. What, Alexander wanted to know, will happen then?
In Rat Park, all the rats obviously tried both water bottles, because they didn’t know what was in them. But what happened next was startling.
The rats with good lives didn’t like the drugged water. They mostly shunned it, consuming less than a quarter of the drugs the isolated rats used. None of them died. While all the rats who were alone and unhappy became heavy users, none of the rats who had a happy environment did.
“Decades ago, the majority of the Arctic’s winter ice pack was made up of thick, perennial ice. Today, very old ice is extremely rare. This animation tracks the relative amount of ice of different ages from 1987 through early November 2014. Video produced by the Climate.gov team, based on data provided by Mark Tschudi.”
Carl Sagan’s beautiful riff on our “Pale Blue Dot” is an incredible amalgam of science, philosophy and some kind of word jazz, and has inspired more than one video treatment (just search for “pale blue dot” in YouTube and you’ll see what I mean).
Here’s the best treatment I’ve found. I particularly love the momentary image flashing by of Atticus Finch sitting with his daughter Scout on his lap on their front porch just as we hear Sagan’s voice saying “every teacher of morality.”
I still miss Sagan’s sane voice. We need it now more than ever.
Stanford research reaffirms that right-to-carry gun laws are connected with an increase in violent crime. This debunks – with the latest empirical evidence – earlier claims that more guns actually lead to less crime.
New Stanford research confirms that right-to-carry gun laws are linked to an increase in violent crime.
Right-to-carry or concealed-carry laws have generated much debate in the past two decades – do they make society safer or more dangerous?
While there is no federal law on concealed-carry permits, all 50 states have passed laws allowing citizens to carry certain concealed firearms in public, either without a permit or after obtaining a permit from local government or law enforcement.
Recently published scholarship updates the empirical evidence on this issue. Stanford law Professor John J. Donohue III, Stanford law student Abhay Aneja and doctoral student Alexandria Zhang from Johns Hopkins University were the co-authors of the study.
“Trying to estimate the impact of right-to-carry laws has been a vexing task over the last two decades,” said Donohue, the C. Wendell and Edith M. Carlsmith Professor of Law, in an interview.
He explained that prior research based on data through 1992 indicated that the laws decreased violent crime. But in 2004, he noted, the National Research Council issued a report that found that even extending this data through 2000 revealed no credible statistical evidence these particular laws reduced crime.
‘Totality of the evidence’
Now, Donohue and his colleagues have shown that extending the data yet another decade (1999-2010) provides the most convincing evidence to date that right-to-carry laws are associated with an increase in violent crime.
“The totality of the evidence based on educated judgments about the best statistical models suggests that right-to-carry laws are associated with substantially higher rates” of aggravated assault, rape, robbery and murder, said Donohue.
The strongest evidence was for aggravated assault, with data suggesting that right-to-carry (RTC) laws increase this crime by an estimated 8 percent – and this may actually be understated, according to the researchers.
“Our analysis of the year-by-year impact of RTC laws also suggests that RTC laws increase aggravated assaults,” they wrote.
The evidence is less strong on rape and robbery, Donohue noted. Still, the data from 1979 to 2010 suggest that the laws are associated with an increase in both crimes.
The murder rate increased in the states with existing right-to-carry laws for the period 1999-2010 when the “confounding influence” of the crack cocaine epidemic is controlled for. The study found that homicides increased in eight states that adopted right-to-carry laws during 1999-2010.
Research obstacles, next step
“Different statistical models can yield different estimated effects, and our ability to ascertain the best model is imperfect,” Donohue said, describing this as the most surprising aspect of the study.
He said that many scholars struggle with the issue of methodology in researching the effects of right-to-carry laws. But overall, he said, the study benefits from the additional decade of recent data.
Donohue suggested it is worth exploring other methodological approaches as well. “Sensitive results and anomalies – such as the occasional estimates that right-to-carry laws lead to higher rates of property crime – have plagued this inquiry for over a decade,” he said.
John J. Donohue III, Stanford Law School: (650) 721-6339
Clifton B. Parker, Stanford News Service: (650) 725-0224