Dirt First

The March/April 2016 issue of the beautiful environmental magazine Orion features an article about “a revolution in the science of dirt,” which — it claims — “is transforming American agriculture.” The article is called “Dirt First” and is written by Kristin Ohlson. Like most good stories, this one has a hero, in this case Rick Haney, a USDA soil scientist.

“Our entire agriculture industry is based on chemical inputs, but soil is not a chemistry set,” Haney explains. “It’s a biological system. We’ve treated it like a chemistry set because the chemistry is easier to measure than the soil biology.”

In nature, of course, plants grow like mad without added synthetic fertilizer, thanks to a multimillion-year-old partnership with soil microorganisms. Plants pull carbon dioxide from the atmosphere through photosynthesis and create a carbon syrup. About 60 percent of this fuels the plant’s growth, with the remainder exuded through the roots to soil microorganisms, which trade mineral nutrients they’ve liberated from rocks, sand, silt, and clay—in other words, fertilizer—for their share of the carbon bounty. Haney insists that ag scientists are remiss if they don’t pay more attention to this natural partnership.

“I’ve had scientific colleagues tell me they raised 300 bushels of corn [per acre] with an application of fertilizer, and I ask how the control plots, the ones without the fertilizer, did,” Haney says. “They tell me 220 bushels of corn. How is that not the story? How is raising 220 bushels of corn without fertilizer not the story?” If the natural processes at work in even the tired soil of a test plot can produce 220 bushels of corn, he argues, the yields of farmers consciously building soil health can be much higher.

Less than 50 percent of the synthetic fertilizer that farmers apply to most crops is actually used by plants, with much of the rest running off into drainage ditches and streams and, later, concentrating with disastrous effects in lakes and oceans. Witness the oxygen-free dead zone in the Gulf of Mexico or tap water tainted by neurotoxin-producing algae in Ohio: both phenomena are tied to fertilizer runoff. Farmers often apply fertilizer based on advice from manufacturers and university extension agents who are faithful to the agrochemical mindset, using formulas that tie X amount of desired yield to Y pounds of fertilizer applied per acre. Or they apply fertilizer based on a standard test that gauges the amount of inorganic nitrogen, potassium, and phosphorus—the basic ingredients of chemical fertilizers, often referred to as NPK—in a soil sample. Or they apply what they put on the year before, or what their neighbor applied, and then maybe a little bit more, hoping for a jackpot combination of rain, sunshine, and a good market.

[…]

The standard soil test, developed some sixty years ago, focuses only on the chemical properties of soil. Haney began developing his test in the early 1990s to focus instead on the soil’s biology. Based on the vigor of the microscopic community in a farmer’s soil, his recommendations usually call for far less fertilizer than what the farmer hears elsewhere. The yields of those who heed his advice often remain the same, or rise.

Read the full article here: “Dirt First.”


What’s a Carbon Farmer? How California Ranchers Use Dirt to Tackle Climate Change

Scientists believe that simple land management techniques can increase the rate at which carbon is absorbed from the atmosphere and stored in soils.

By Sally Neas



For many climate change activists, the latest rallying cry has been, “Keep it in the ground,” a call to slow and stop drilling for fossil fuels. But for a new generation of land stewards, the cry is becoming, “Put it back in the ground!”

As an avid gardener and former organic farmer, I know the promise that soil holds: Every ounce supports a plethora of life. Now, evidence suggests that soil may also be a key to slowing and reversing climate change.


“I think the future is really bright,” said Loren Poncia, an energetic Northern Californian cattle rancher. Poncia’s optimism stems from the hope he sees in carbon farming, which he has implemented on his ranch. Carbon farming uses land management techniques that increase the rate at which carbon is absorbed from the atmosphere and stored in soils. Scientists, policy makers, and land stewards alike are hopeful about its potential to mitigate climate change.

Carbon is the key ingredient to all life. It is absorbed by plants from the atmosphere as carbon dioxide and, with the energy of sunlight, converted into simple sugars that build more plant matter. Some of this carbon is consumed by animals and cycled through the food chain, but much of it is held in soil as roots or decaying plant matter. Historically, soil has been a carbon sink, a place of long-term carbon storage.

But many modern land management techniques, including deforestation and frequent tilling, expose soil-bound carbon to oxygen, limiting the soil’s absorption and storage potential. In fact, carbon released from soil is estimated to contribute one-third of global greenhouse gas emissions, according to the Food and Agriculture Organization of the United Nations.

Ranchers and farmers have the power to address that issue. Pastures make up 3.3 billion hectares, or 67 percent, of the world’s farmland. Carbon farming techniques can sequester up to 50 tons of carbon per hectare over a pasture’s lifetime. This motivates some ranchers and farmers to do things a little differently.
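The article’s global figures imply a simple back-of-envelope ceiling, which can be sketched as follows. The 3.3 billion hectares and the up-to-50-tons-per-hectare figure come from the text; the multiplication is only an illustrative upper bound, not a claim from the article.

```python
# Back-of-envelope ceiling on pasture carbon sequestration,
# using the article's figures: 3.3 billion hectares of pasture,
# up to 50 tons of carbon per hectare over a pasture's lifetime.
pasture_hectares = 3.3e9
max_tons_carbon_per_hectare = 50

max_total_tons = pasture_hectares * max_tons_carbon_per_hectare
print(f"Upper bound: {max_total_tons / 1e9:.0f} billion tons of carbon")
```

That works out to roughly 165 billion tons of carbon as a theoretical maximum, which is why researchers describe pasture management as a meaningful lever rather than a rounding error.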

“It’s what we think about all day, every day,” said Sallie Calhoun of Paicines Ranch on California’s central coast. “Sequestering soil carbon is essentially creating more life in the soil, since it’s all fed by photosynthesis. It essentially means more plants into every inch of soil.”


Calhoun’s ranch sits in fertile, rolling California pastureland about an hour’s drive east of Monterey Bay. She intensively manages her cattle’s grazing, moving them every few days across 7,000 acres. This avoids compaction, which decreases soil productivity, and also allows perennial grasses to grow back between grazing periods. Perennial grasses, like sorghum and bluestems, have long root systems that sequester far more carbon than their annual cousins.

By starting with a layer of compost, Calhoun has also turned her new vineyard into an effective carbon sink. Compost is potent for carbon sequestration because of how it enhances otherwise unhealthy soil, enriching it with nutrients and microbes that increase its capacity to harbor plant growth. Compost also increases water-holding capacity, which helps plants thrive even in times of drought. She plans to till the land only once, when she plants the grapes, to avoid releasing stored carbon back into the atmosphere.

Managed grazing and compost application are just two of the 35 practices that the Natural Resources Conservation Service recommends for carbon sequestration. All 35 methods have been proven to sequester carbon, though some are better documented than others.

 Photo courtesy of Paicines Ranch.

David Lewis, director of the University of California Cooperative Extension, says the techniques Calhoun uses, as well as stream restoration, are some of the most common. Lewis has worked with the Marin Carbon Project, a collaboration of researchers, ranchers, and policy makers, to study and implement carbon farming in Marin County, California. The research has been promising: they found that one application of compost doubled the production of grass and increased carbon sequestration by up to 70 percent. Similarly, stream and river ecosystems, which harbor lots of dense, woody vegetation, can sequester up to one ton of carbon, or as much as a car emits in a year, in just a few feet along their beds.


On his ranch, Poncia has replanted five miles of streams with native shrubs and trees, and has applied compost to all of his 800 acres of pasture. The compost-fortified grasses are more productive and have allowed him to double the number of cattle his land supports. This has had financial benefits. Ten years ago, Poncia was selling veterinary pharmaceuticals to subsidize his ranch. But, with the increase in cattle, he has been able to take up ranching full time. Plus, his ranch sequesters the same amount of carbon each year as is emitted by 81 cars.
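The “81 cars” equivalence can be rough-checked with figures that are not in the article: the EPA’s widely cited estimate of about 4.6 metric tons of CO2 per typical passenger vehicle per year, and the fact that carbon makes up 12/44 of the mass of CO2. Both inputs here are assumptions layered on top of the article’s claim.

```python
# Rough check of the "81 cars" equivalence. Assumptions (not from
# the article): a typical passenger car emits ~4.6 metric tons of
# CO2 per year (a widely cited EPA estimate); carbon is 12/44 of
# CO2 by mass.
cars = 81
co2_per_car_tons = 4.6
carbon_fraction = 12 / 44

total_co2 = cars * co2_per_car_tons          # tons of CO2 per year
total_carbon = total_co2 * carbon_fraction   # tons of elemental carbon
print(f"{total_co2:.0f} t CO2/yr, about {total_carbon:.0f} t of carbon")
```

Under those assumptions, 81 cars correspond to roughly 370 tons of CO2, or on the order of 100 tons of carbon stored per year across Poncia’s 800 acres — a plausible magnitude for composted, restored pasture.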

Much of the research on carbon farming focuses on rangelands, which are open grasslands, because they make up such a large portion of ecosystems across the planet. They are also, after all, where we grow a vast majority of our food.

“Many of the skeptics of carbon farming think we should be planting forests instead,” Poncia said. “I think forests are a no-brainer, but there are millions of acres of rangelands across the globe and they are not sequestering as much carbon as they could be.”

The potential of carbon farming lies in wide-scale implementation. The Carbon Cycle Institute, which grew out of the Marin Carbon Project with the ambition of applying the research and lessons to other communities in California and nationally, is taking up that task.

“It really all comes back to this,” said Torri Estrada, pointing to a messy white board with the words SOIL CARBON scrawled in big letters. Estrada is managing director of the Carbon Cycle Institute, where he is working to attract more ranchers and farmers to carbon farming. The white board maps the intricate web of organizations and strategies the institute works with. They provide technical assistance and resources to support land stewards in making the transition.


For interested stewards, implementation and its associated costs vary widely. It could be as simple as a one-time compost application or as intensive as a lifetime of managing different techniques. But for all, the process starts with assessing a land’s sequestration potential and deciding which techniques fit a steward’s budget and goals. COMET-Farm, an online tool produced by the U.S. Department of Agriculture, can help estimate a ranch’s carbon inputs and outputs.

The institute also works with state and national policy makers to provide economic incentives for these practices. “If the U.S. government would buy carbon credits from farmers, we would produce them,” Poncia said. These credits are one way the government could pay farmers to mitigate climate change. “Farmers overproduce everything. So, if they can fund that, we will produce them,” he said. While he is already sequestering carbon, Poncia says that he could do more, given the funding.

Estrada sees the bigger potential of carbon farming to help spur a more fundamental conversation about how we relate to the land. “We’re sitting down with ranchers and having a conversation, and carbon is just the medium for that,” he said. Through this work, Estrada has watched ranchers take a more holistic approach to their management.

On his ranch, Poncia has shifted from thinking about himself as a grass farmer growing feed for his cattle to a soil farmer with the goal of increasing the amount of life in every inch of soil.


 


Sally Neas wrote this article for YES! Magazine. Sally is a freelance writer and community educator based in Santa Cruz, California. She has a background in permaculture, sustainable agriculture, and community development, and she covers social and environmental issues. She blogs at www.voicesfromthegreatturning.com.

Debacle at Doha: The Collapse of the Old Oil Order

By Michael T. Klare
Reprinted with permission from Tomdispatch.com

Sunday, April 17th was the designated moment.  The world’s leading oil producers were expected to bring fresh discipline to the chaotic petroleum market and spark a return to high prices. Meeting in Doha, the glittering capital of petroleum-rich Qatar, the oil ministers of the Organization of the Petroleum Exporting Countries (OPEC), along with such key non-OPEC producers as Russia and Mexico, were scheduled to ratify a draft agreement obliging them to freeze their oil output at current levels. In anticipation of such a deal, oil prices had begun to creep inexorably upward, from $30 per barrel in mid-January to $43 on the eve of the gathering. But far from restoring the old oil order, the meeting ended in discord, driving prices down again and revealing deep cracks in the ranks of global energy producers.

It is hard to overstate the significance of the Doha debacle. At the very least, it will perpetuate the low oil prices that have plagued the industry for the past two years, forcing smaller firms into bankruptcy and erasing hundreds of billions of dollars of investments in new production capacity. It may also have obliterated any future prospects for cooperation between OPEC and non-OPEC producers in regulating the market. Most of all, however, it demonstrated that the petroleum-fueled world we’ve known these last decades — with oil demand always thrusting ahead of supply, ensuring steady profits for all major producers — is no more.  Replacing it is an anemic, possibly even declining, demand for oil that is likely to force suppliers to fight one another for ever-diminishing market shares.

The Road to Doha

Before the Doha gathering, the leaders of the major producing countries expressed confidence that a production freeze would finally halt the devastating slump in oil prices that began in mid-2014. Most of them are heavily dependent on petroleum exports to finance their governments and keep restiveness among their populaces at bay.  Both Russia and Venezuela, for instance, rely on energy exports for approximately 50% of government income, while for Nigeria it’s more like 75%.  So the plunge in prices had already cut deep into government spending around the world, causing civil unrest and even in some cases political turmoil.

No one expected the April 17th meeting to result in an immediate, dramatic price upturn, but everyone hoped that it would lay the foundation for a steady rise in the coming months. The leaders of these countries were well aware of one thing: to achieve such progress, unity was crucial. Otherwise they were not likely to overcome the various factors that had caused the price collapse in the first place.  Some of these were structural and embedded deep in the way the industry had been organized; some were the product of their own feckless responses to the crisis.

On the structural side, global demand for energy had, in recent years, ceased to rise quickly enough to soak up all the crude oil pouring onto the market, thanks in part to new supplies from Iraq and especially from the expanding shale fields of the United States. This oversupply triggered the initial 2014 price drop when Brent crude — the international benchmark blend — went from a high of $115 on June 19th to $77 on November 26th, the day before a fateful OPEC meeting in Vienna. The next day, OPEC members, led by Saudi Arabia, failed to agree on either production cuts or a freeze, and the price of oil went into freefall.

The failure of that November meeting has been widely attributed to the Saudis’ desire to kill off new output elsewhere — especially shale production in the United States — and to restore their historic dominance of the global oil market. Many analysts were also convinced that Riyadh was seeking to punish regional rivals Iran and Russia for their support of the Assad regime in Syria (which the Saudis seek to topple).

The rejection, in other words, was meant to fulfill two tasks at the same time: blunt or wipe out the challenge posed by North American shale producers and undermine two economically shaky energy powers that opposed Saudi goals in the Middle East by depriving them of much needed oil revenues. Because Saudi Arabia could produce oil so much more cheaply than other countries — for as little as $3 per barrel — and because it could draw upon hundreds of billions of dollars in sovereign wealth funds to meet any budget shortfalls of its own, its leaders believed it more capable of weathering any price downturn than its rivals. Today, however, that rosy prediction is looking grimmer as the Saudi royals begin to feel the pinch of low oil prices, and find themselves cutting back on the benefits they had been passing on to an ever-growing, potentially restive population while still financing a costly, inconclusive, and increasingly disastrous war in Yemen.

Many energy analysts became convinced that Doha would prove the decisive moment when Riyadh would finally be amenable to a production freeze.  Just days before the conference, participants expressed growing confidence that such a plan would indeed be adopted. After all, preliminary negotiations between Russia, Venezuela, Qatar, and Saudi Arabia had produced a draft document that most participants assumed was essentially ready for signature. The only sticking point: the nature of Iran’s participation.

The Iranians were, in fact, agreeable to such a freeze, but only after they were allowed to raise their relatively modest daily output to levels achieved in 2012 before the West imposed sanctions in an effort to force Tehran to agree to dismantle its nuclear enrichment program.  Now that those sanctions were, in fact, being lifted as a result of the recently concluded nuclear deal, Tehran was determined to restore the status quo ante. On this, the Saudis balked, having no wish to see their arch-rival obtain added oil revenues.  Still, most observers assumed that, in the end, Riyadh would agree to a formula allowing Iran some increase before a freeze. “There are positive indications an agreement will be reached during this meeting… an initial agreement on freezing production,” said Nawal Al-Fuzaia, Kuwait’s OPEC representative, echoing the views of other Doha participants.

But then something happened. According to people familiar with the sequence of events, Saudi Arabia’s Deputy Crown Prince and key oil strategist, Mohammed bin Salman, called the Saudi delegation in Doha at 3:00 a.m. on April 17th and instructed them to spurn a deal that provided leeway of any sort for Iran. When the Iranians — who chose not to attend the meeting — signaled that they had no intention of freezing their output to satisfy their rivals, the Saudis rejected the draft agreement they had helped negotiate and the assembly ended in disarray.

Geopolitics to the Fore

Most analysts have since suggested that the Saudi royals simply considered punishing Iran more important than raising oil prices.  No matter the cost to them, in other words, they could not bring themselves to help Iran pursue its geopolitical objectives, including giving yet more support to Shiite forces in Iraq, Syria, Yemen, and Lebanon.  Already feeling pressured by Tehran and ever less confident of Washington’s support, they were ready to use any means available to weaken the Iranians, whatever the danger to themselves.

“The failure to reach an agreement in Doha is a reminder that Saudi Arabia is in no mood to do Iran any favors right now and that their ongoing geopolitical conflict cannot be discounted as an element of the current Saudi oil policy,” said Jason Bordoff of the Center on Global Energy Policy at Columbia University.

Many analysts also pointed to the rising influence of Deputy Crown Prince Mohammed bin Salman, entrusted with near-total control of the economy and the military by his aging father, King Salman. As Minister of Defense, the prince has spearheaded the Saudi drive to counter the Iranians in a regional struggle for dominance. Most significantly, he is the main force behind Saudi Arabia’s ongoing intervention in Yemen, aimed at defeating the Houthi rebels, a largely Shia group with loose ties to Iran, and restoring deposed former president Abd Rabbuh Mansur Hadi. After a year of relentless U.S.-backed airstrikes (including the use of cluster bombs), the Saudi intervention has, in fact, failed to achieve its intended objectives, though it has produced thousands of civilian casualties, provoking fierce condemnation from U.N. officials, and created space for the rise of al-Qaeda in the Arabian Peninsula. Nevertheless, the prince seems determined to keep the conflict going and to counter Iranian influence across the region.

For Prince Mohammed, the oil market has evidently become just another arena for this ongoing struggle. “Under his guidance,” the Financial Times noted in April, “Saudi Arabia’s oil policy appears to be less driven by the price of crude than global politics, particularly Riyadh’s bitter rivalry with post-sanctions Tehran.” This seems to have been the backstory for Riyadh’s last-minute decision to scuttle the talks in Doha. On April 16th, for instance, Prince Mohammed couldn’t have been blunter to Bloomberg, even if he didn’t mention the Iranians by name: “If all major producers don’t freeze production, we will not freeze production.”

With the proposed agreement in tatters, Saudi Arabia is now expected to boost its own output, ensuring that prices will remain bargain-basement low and so deprive Iran of any windfall from its expected increase in exports. The kingdom, Prince Mohammed told Bloomberg, was prepared to immediately raise production from its current 10.2 million barrels per day to 11.5 million barrels and could add another million barrels “if we wanted to” in the next six to nine months. With Iranian and Iraqi oil heading for market in larger quantities, that’s the definition of oversupply.  It would certainly ensure Saudi Arabia’s continued dominance of the market, but it might also wound the kingdom in a major way, if not fatally.

A New Global Reality

No doubt geopolitics played a significant role in the Saudi decision, but that’s hardly the whole story. Overshadowing discussions about a possible production freeze was a new fact of life for the oil industry: the past would be no predictor of the future when it came to global oil demand.  Whatever the Saudis think of the Iranians or vice versa, their industry is being fundamentally transformed, altering relationships among the major producers and eroding their inclination to cooperate.

Until very recently, it was assumed that the demand for oil would continue to expand indefinitely, creating space for multiple producers to enter the market, and for ones already in it to increase their output. Even when supply outran demand and drove prices down, as has periodically occurred, producers could always take solace in the knowledge that, as in the past, demand would eventually rebound, jacking prices up again. Under such circumstances and at such a moment, it was just good sense for individual producers to cooperate in lowering output, knowing that everyone would benefit sooner or later from the inevitable price increase.

But what happens if confidence in the eventual resurgence of demand begins to wither? Then the incentives to cooperate begin to evaporate, too, and it’s every producer for itself in a mad scramble to protect market share. This new reality — a world in which “peak oil demand,” rather than “peak oil,” will shape the consciousness of major players — is what the Doha catastrophe foreshadowed.

At the beginning of this century, many energy analysts were convinced that we were on the verge of “peak oil”: a peak, that is, in the output of petroleum, with planetary reserves becoming exhausted long before the demand for oil disappeared, triggering a global economic crisis. As a result of advances in drilling technology, however, the supply of oil has continued to grow, while demand has unexpectedly begun to stall.  This can be traced both to slowing economic growth globally and to an accelerating “green revolution” in which the planet will be transitioning to non-carbon fuel sources. With most nations now committed to measures aimed at reducing emissions of greenhouse gases under the just-signed Paris climate accord, the demand for oil is likely to experience significant declines in the years ahead. In other words, global oil demand will peak long before supplies begin to run low, creating a monumental challenge for the oil-producing countries.

This is no theoretical construct.  It’s reality itself.  Net consumption of oil in the advanced industrialized nations has already dropped from 50 million barrels per day in 2005 to 45 million barrels in 2014. Further declines are in store as strict fuel efficiency standards for the production of new vehicles and other climate-related measures take effect, the price of solar and wind power continues to fall, and other alternative energy sources come on line. While the demand for oil does continue to rise in the developing world, even there it’s not climbing at rates previously taken for granted. With such countries also beginning to impose tougher constraints on carbon emissions, global consumption is expected to reach a peak and begin an inexorable decline. According to experts Thijs Van de Graaf and Aviel Verbruggen, overall world peak demand could be reached as early as 2020.

In such a world, high-cost oil producers will be driven out of the market and the advantage — such as it is — will lie with the lowest-cost ones. Countries that depend on petroleum exports for a large share of their revenues will come under increasing pressure to move away from excessive reliance on oil. This may have been another consideration in the Saudi decision at Doha. In the months leading up to the April meeting, senior Saudi officials dropped hints that they were beginning to plan for a post-petroleum era and that Deputy Crown Prince bin Salman would play a key role in overseeing the transition.

On April 1st, the prince himself indicated that steps were underway to begin this process. As part of the effort, he announced, he was planning an initial public offering of shares in state-owned Saudi Aramco, the world’s number one oil producer, and would transfer the proceeds, an estimated $2 trillion, to its Public Investment Fund (PIF). “IPOing Aramco and transferring its shares to PIF will technically make investments the source of Saudi government revenue, not oil,” the prince pointed out. “What is left now is to diversify investments. So within 20 years, we will be an economy or state that doesn’t depend mainly on oil.”

For a country that more than any other has rested its claim to wealth and power on the production and sale of petroleum, this is a revolutionary statement. If Saudi Arabia says it is ready to begin a move away from reliance on petroleum, we are indeed entering a new world in which, among other things, the titans of oil production will no longer hold sway over our lives as they have in the past.

This, in fact, appears to be the outlook adopted by Prince Mohammed in the wake of the Doha debacle.  In announcing the kingdom’s new economic blueprint on April 25th, he vowed to liberate the country from its “addiction” to oil.  This will not, of course, be easy to achieve, given the kingdom’s heavy reliance on oil revenues and lack of plausible alternatives.  The 30-year-old prince could also face opposition from within the royal family to his audacious moves (as well as his blundering ones in Yemen and possibly elsewhere).  Whatever the fate of the Saudi royals, however, if predictions of a future peak in world oil demand prove accurate, the debacle in Doha will be seen as marking the beginning of the end of the old oil order.


Michael T. Klare, a TomDispatch regular, is a professor of peace and world security studies at Hampshire College and the author, most recently, of The Race for What’s Left. A documentary movie version of his book Blood and Oil is available from the Media Education Foundation. Follow him on Twitter at @mklare1.

Follow TomDispatch on Twitter and join us on Facebook. Check out the newest Dispatch Book, Nick Turse’s Tomorrow’s Battlefield: U.S. Proxy Wars and Secret Ops in Africa, and Tom Engelhardt’s latest book, Shadow Government: Surveillance, Secret Wars, and a Global Security State in a Single-Superpower World.

Copyright 2016 Michael T. Klare


What We Know About The Progressive Future, Part I: Here Come The Millennials

By Sara Robinson
    
Originally published in Sara Robinson’s blog, February 2, 2016. Reprinted with the permission of the author.

In January 2011, I presented a futures research project to the Progressive Caucus in Congress, then the largest of all the caucuses in that body. The report, Progressives 2040 — which was sponsored by ProgressiveCongress.org and published by Demos — analyzed a large set of major trends that would shape the future of the progressive movement for the next three decades, and offered a set of scenarios that illustrated how these trends might work together to create a range of possible futures that the movement will need to be prepared for.

This is the first post in “What We Know About The Progressive Future” — a series that I imagine will be a long (probably 10-12 post) look at that research five years on, updating my conclusions and taking a fresh look at the big drivers and high leverage points that will determine the future of our movement. 


For most pundits, the most striking thing about the Iowa Caucus was the virtual tie between the two Democratic candidates (which portends a longer and perhaps more exciting election season and higher ratings for those in the media to look forward to), and the surprising 1-2-and-3 order of Cruz, Trump, and Rubio. I’m writing this less than 24 hours after the caucuses ended, and more than enough on both these topics has already been written by others (for God’s sake, people, it’s just Iowa), so I’m going to spare you another analysis ex cathedra from my belly button as to What It All Means For November.

I’m far more interested in another trend that emerged last night — a small detail that will almost certainly have a much longer historical tail than anything else that might happen between now and Election Day. This trend was crystallized by the stunning fact that Bernie Sanders got 85% of the votes of caucus-goers under 30.

That’s not a typo. Eighty-five percent.

That’s a number that strategists from every end of the political spectrum need to be paying attention to, because it is heralding the arrival of the Millennial Generation as a political force to be reckoned with.

My report saw this coming. Back in January 2011, I wrote this about them:

The Millennial generation (born 1980-2000) is the largest and most ethnically diverse generation in American history, with 44% identifying as members of a racial minority. They are the most globally connected generation to date: they travel more, speak more languages, and have friends all over the world. They are more progressive in their core values and attitudes than any cohort we’ve seen in at least a century. And they are rising fast: by 2020, they will be outvoting their elders, dominating elections and bringing their own priorities to the table. We can expect the Millennials to launch their first serious presidential candidate in 2020, and elect their first president probably no later than 2024.

Perhaps the most important fact about the Millennials is the sheer size of this generation. They’re the first cohort we’ve seen in the past 40 years that’s actually big enough to swamp their Boomer parents, whose interests and worldviews have dominated American politics ever since the youngest of them hit voting age in the late 1970s.  The Boom was the biggest generation in American history, to the point where their sheer size itself was transformative (as they say: quantity has a quality all its own). But the Millennials are even bigger. And between now and 2020, the youngest edge of this generation will finally turn 18 and register to vote. The results stand to be at least as transformative for us as a nation as the moment when the Boomers themselves arrived.

Conservative Millennials? Don’t hold your breath
Any number of GOP pundits have written thumb-sucking articles explaining how this cohort is going to become more conservative as it ages (because every generation does, right?). Feel free to rip those up: it’s not likely to happen, for several reasons — starting with the fact that no, not every generation does. The Boomers did, because from left to right, and from youth through their approaching old age, they’ve shared a belief in radical individualism — the primacy of the individual over any claims made by society — that fed everything from Evangelicalism and free-market fundamentalism on the right to New Age religions and social experimentation on the left. That individualism is the one shining through-line that defines everything that generation has ever embraced. It made them hippies. And it also made them vote for Reagan.

The Millennials are their historical opposite number — a generation raised from babyhood to cooperate, share, include, network, and self-organize. They value conformity (Boomers and Xers are horrified by the “calling out” ritual that Millennials run on each other constantly as they vigilantly police each other’s behavior; we’d have choked on our own spit before telling each other what to say, think, or do, and would rightly have expected to be told to fuck off if we tried it), and as this pervades their politics in the coming decades, it’s going to involve a lot of telling other people how they should live. That’s how their GI grandparents created and enforced the great American Consensus of the ’40s, ’50s, and ’60s, and it’s how they’re going to re-create a new consensus about the Next America they’re going to build.

That bred-in-the-bone collectivism is likely to be as durable a lifelong feature as Boomer individualism has been; but it stands in stark opposition to conservatism as it’s currently constituted. It’s possible to imagine another, distinctively Millennial form of conservatism emerging in time — but it would have to be rooted in the idea of a strong social contract, one that obligates individuals to cede some of their desires to the greater good, represented by trusted authorities — and is willing to use social shame as an enforcement mechanism. The GOP is a long way from offering any narratives along these lines now. If they do emerge, it could take another 20 years or more, becoming something today’s Millennials embrace as they age on through their 40s, 50s and 60s.

Other conservatives hold out hope that the all-time-high number of Millennials from immigrant families will benefit them in time, since the usual pattern has been for second-generation immigrants (the first generation born here) to do very well educationally and economically, and to vote more conservatively than either their parents or their third-gen kids. That might be a plausible scenario — except for the nasty fact that Millennials have already grown up scarred and terrorized by a GOP that has never been able to lay off immigrant-bashing. Again, it’s going to take a radical change within that party — plus another 15 years of over-the-top effort — before winning even the grudging trust of a generation that’s already marinated in decades of conservative anti-immigrant hysteria becomes even remotely likely.

In any event: anybody waiting for the Great Millennial Conservative Revival probably shouldn’t hold their breath. If it comes at all, it’s going to be a very long while indeed. In the meantime, these young adults have a revolution to pull off.  And that moment is coming — much sooner than anybody seems ready to think.

Millennials and Elections
Obama, to his credit, was the first candidate to recognize the raw political power and profound unrest of this rising cohort in 2008. Even though fully half of the Millennial generation was still too young to vote, his overt efforts to capture the energy and attention of the half that could were a conscious strategy. The Millennials ended up supplying him with the margin that put him over the top in the election — support he later rewarded by bringing home the troops (most of whom were Millennials) and restructuring the federal student loan program to make over $30 billion more in Pell Grants available and reduce the loan burden on new graduates (both of which were policies I pointed to in my original 2011 report).

But the Millennials want more. They’re looking into a future that most of them understand is a fatal dead end without a radical, rip-up-the-floorboards restructuring of how the entire planet works — how we do everything from energy and money to community and education to transportation and agriculture. This yearning for a different kind of world even has the potential to upend our current understandings of “right” and “left,” as I wrote in my report:

Some research suggests that this generation’s politics lean toward the “independent” and the “centrist.” However, those words don’t mean the same thing to under-30 Americans that they do to older ones. The self-described “independents” also express core values that are deeply collectivist and inclusive, which gives them a strong affinity for progressive ideas and solutions. (Studies by Pew and Barna have even found these same affinities among self-identified conservatives in this cohort.) Likewise, these “centrists” see their generation’s communal focus on a shared future and shared prosperity as a matter of plain common sense. To them, “we’re in this together” is not a radical idea; indeed, it stands at the center of their politics.

The Millennials spurned Hillary in 2008 because they were craving a true change candidate — and Obama promised to be that. But in the end, the change he could deliver wasn’t enough. And that’s why this generation is going, overwhelmingly, for Bernie Sanders, whom they see as sitting entirely outside the corrupt party system that made it impossible for Obama to give them the goods, unbeholden to Wall Street, uncontaminated by party cronyism, unfiltered in the media — someone who seems to be entirely their own.  This is what their candidates look like — and are going to continue to look like for the next several election cycles.

Given that the youngest 15% or so of the Millennial cohort is still too young to vote, it’s not clear that the Millennials will get their revolution this year. My prediction above that they’d dominate our elections by 2020 was based on the fact that that’s when the very tail end of the cohort — the ones born in 2000 — will all have reached adulthood, putting them finally at their full political strength. Whether or not they show up for 2016 is also complicated by a few other factors, including:

  • How disillusioned the older ones are following their experience with Obama, whom many of them feel very disappointed by — a real problem that surfaced in 2012, when many of them didn’t return to the polls.
  • The general tendency of young adults in their 20s to not vote. Voting is a behavior that becomes more reliable with age. By 2020, the oldest Millennials will be 40, and half will be over 30 — which means they should start showing up far more regularly.
  • Persistent efforts on the part of the GOP to disenfranchise students, which have large effects in some parts of the country.
  • How well Sanders survives the onslaught of conservative attacks that we all know are coming.


It’s safe to say that the Millennials will be a vastly bigger factor in 2016 than they were in either 2008 or 2012 — and that Sanders’ success to date can and should be interpreted as this generation’s announcement of its growing political presence with far louder and more insistent authority than we’ve ever heard from them before.

However, in this election cycle, it’s not at all clear that it will be enough to get them what they want. We are tantalizingly close to a generational tipping point, but have not quite arrived at it yet. By the next cycle, though, that point will almost certainly be well behind us — and from then on, for the next 40 years, our politics will be pretty much entirely dominated, owned, and determined by the Millennials’ collectivist worldviews, interests, desires, and priorities. They will, this time or next, succeed in voting themselves the transformation they seek. It’s not a question of if, but when.

What we’re seeing when we look at the Bernie Sanders phenomenon is a direct window into our own political future. When will it emerge? Maybe not today, and maybe not this November — but it’s coming soon, and it or something like it will be the dominant political reality for the rest of our lives.

Photo: Ian Buck via Flickr


Sara Robinson is a Seattle-based futurist and veteran blogger on culture, politics, and religion. Since 2006, her work (gathered in the Archive section of her blog) has appeared regularly at Orcinus, Our Future, Group News Blog, and Alternet. She’s also written for Salon, Huffington Post, Grist, the New Republic, New York Magazine, Firedoglake, and many other sites.

Robinson holds an MS in Futures Studies from the University of Houston, and a BA in Journalism from the USC Annenberg School of Communication. She was a Schumann Fellow, and also held senior fellowships at the Campaign for America’s Future and the Commonweal Foundation. She currently serves on the national board of NARAL Pro-Choice America.




A Nuclear Armageddon in the Making in South Asia

By Dilip Hiro
Reprinted from TomDispatch.com

Undoubtedly, for nearly two decades, the most dangerous place on Earth has been the Indian-Pakistani border in Kashmir. It’s possible that a small spark from artillery and rocket exchanges across that border might — given the known military doctrines of the two nuclear-armed neighbors — lead inexorably to an all-out nuclear conflagration.  In that case the result would be catastrophic. Besides causing the deaths of millions of Indians and Pakistanis, such a war might bring on “nuclear winter” on a planetary scale, leading to levels of suffering and death that would be beyond our comprehension.

Alarmingly, the nuclear competition between India and Pakistan has now entered a spine-chilling phase. That danger stems from Islamabad’s decision to deploy low-yield tactical nuclear arms at its forward operating military bases along its entire frontier with India to deter possible aggression by tank-led invading forces. Most ominously, the decision to fire such a nuclear-armed missile with a range of 35 to 60 miles is to rest with local commanders. This is a perilous departure from the universal practice of investing such authority in the highest official of the nation. Such a situation has no parallel in the Washington-Moscow nuclear arms race of the Cold War era.

When it comes to Pakistan’s strategic nuclear weapons, their parts are stored in different locations to be assembled only upon an order from the country’s leader. By contrast, tactical nukes are pre-assembled at a nuclear facility and shipped to a forward base for instant use. In addition to the perils inherent in this policy, such weapons would be vulnerable to misuse by a rogue base commander or theft by one of the many militant groups in the country.

In the nuclear standoff between the two neighbors, the stakes are constantly rising as Aizaz Chaudhry, the highest bureaucrat in Pakistan’s foreign ministry, recently made clear. The deployment of tactical nukes, he explained, was meant to act as a form of “deterrence,” given India’s “Cold Start” military doctrine — a reputed contingency plan aimed at punishing Pakistan in a major way for any unacceptable provocations like a mass-casualty terrorist strike against India.

New Delhi refuses to acknowledge the existence of Cold Start, but its denials are hollow. As early as 2004, it was discussing this doctrine, which involved the formation of eight division-size Integrated Battle Groups (IBGs). These were to consist of infantry, artillery, armor, and air support, and each would be able to operate independently on the battlefield. In the case of major terrorist attacks by any Pakistan-based group, these IBGs would evidently respond by rapidly penetrating Pakistani territory at unexpected points along the border and advancing no more than 30 miles inland, disrupting military command and control networks while endeavoring to stay away from locations likely to trigger nuclear retaliation. In other words, India has long been planning to respond to major terror attacks with a swift conventional military action calibrated to inflict only limited damage and so — in a best-case scenario — deny Pakistan justification for a nuclear response.

Islamabad, in turn, has been planning ways to deter the Indians from implementing a Cold-Start-style blitzkrieg on Pakistani territory. After much internal debate, its top officials opted for tactical nukes. In 2011, the Pakistanis tested one successfully. Since then, according to Rajesh Rajagopalan, the New Delhi-based co-author of Nuclear South Asia: Keywords and Concepts, Pakistan seems to have been assembling four to five of these annually.

All of this has been happening in the context of populations that view each other unfavorably. A survey in this period by the Pew Research Center found that 72% of Pakistanis had an unfavorable view of India, with 57% considering it a serious threat, while on the other side 59% of Indians saw Pakistan in an unfavorable light.

This is the background against which Indian leaders have said that a tactical nuclear attack on their forces, even on Pakistani territory, would be treated as a full-scale nuclear attack on India, and that they reserved the right to respond accordingly. Since India does not have tactical nukes, it could only retaliate with far more devastating strategic nuclear arms, possibly targeting Pakistani cities.

According to a 2002 estimate by the U.S. Defense Intelligence Agency (DIA), a worst-case scenario in an Indo-Pakistani nuclear war could result in eight to 12 million fatalities initially, followed by many millions later from radiation poisoning.  More recent studies have shown that up to a billion people worldwide might be put in danger of famine and starvation by the smoke and soot thrown into the troposphere in a major nuclear exchange in South Asia. The resulting “nuclear winter” and ensuing crop loss would functionally add up to a slowly developing global nuclear holocaust.

Last November, to reduce the chances of such a catastrophic exchange happening, senior Obama administration officials met in Washington with Pakistan’s army chief, General Raheel Sharif, the final arbiter of that country’s national security policies, and urged him to stop the production of tactical nuclear arms. In return, they offered a pledge to end Islamabad’s pariah status in the nuclear field by supporting its entry into the 48-member Nuclear Suppliers Group to which India already belongs. Although no formal communiqué was issued after Sharif’s trip, it became widely known that he had rejected the offer.

This failure was implicit in the testimony that DIA Director Lieutenant General Vincent Stewart gave to the Armed Services Committee this February. “Pakistan’s nuclear weapons continue to grow,” he said. “We are concerned that this growth, as well as the evolving doctrine associated with tactical [nuclear] weapons, increases the risk of an incident or accident.”

Strategic Nuclear Warheads

Since that DIA estimate of human fatalities in a South Asian nuclear war, the strategic nuclear arsenals of India and Pakistan have continued to grow. In January 2016, according to a U.S. congressional report, Pakistan’s arsenal probably consisted of 110 to 130 nuclear warheads. According to the Stockholm International Peace Research Institute, India has 90 to 110 of these. (China, the other regional actor, has approximately 260 warheads.)

As the 1990s ended, with both India and Pakistan testing their new weaponry, their governments made public their nuclear doctrines. The National Security Advisory Board on Indian Nuclear Doctrine, for example, stated in August 1999 that “India will not be the first to initiate a nuclear strike, but will respond with punitive retaliation should deterrence fail.” India’s foreign minister explained at the time that the “minimum credible deterrence” mentioned in the doctrine was a question of “adequacy,” not numbers of warheads. In subsequent years, however, that yardstick of “minimum credible deterrence” has been regularly recalibrated as India’s policymakers went on to commit themselves to upgrade the country’s nuclear arms program with a new generation of more powerful hydrogen bombs designed to be city-busters.

In Pakistan in February 2000, President General Pervez Musharraf, who was also the army chief, established the Strategic Plan Division in the National Command Authority, appointing Lieutenant General Khalid Kidwai as its director general. In October 2001, Kidwai offered an outline of the country’s updated nuclear doctrine in relation to its far more militarily and economically powerful neighbor, saying, “It is well known that Pakistan does not have a ‘no-first-use policy.’” He then laid out the “thresholds” for the use of nukes.  The country’s nuclear weapons, he pointed out, were aimed solely at India and would be available for use not just in response to a nuclear attack from that country, but should it conquer a large part of Pakistan’s territory (the space threshold), or destroy a significant part of its land or air forces (the military threshold), or start to strangle Pakistan economically (the economic threshold), or politically destabilize the country through large-scale internal subversion (the domestic destabilization threshold).

Of these, the space threshold was the most likely trigger. New Delhi as well as Washington speculated as to where the red line for this threshold might lie, though there was no unanimity among defense experts. Many surmised that it would be the impending loss of Lahore, the capital of Punjab, only 15 miles from the Indian border. Others put the red line at Pakistan’s sprawling Indus River basin.

Within seven months of this debate, Indian-Pakistani tensions escalated steeply in the wake of an attack on an Indian military base in Kashmir by Pakistani terrorists in May 2002. At that time, Musharraf reiterated that he would not renounce his country’s right to use nuclear weapons first. The prospect of New Delhi being hit by an atom bomb became so plausible that U.S. Ambassador Robert Blackwill investigated building a hardened bunker in the Embassy compound to survive a nuclear strike. Only when he and his staff realized that those in the bunker would be killed by the aftereffects of the nuclear blast did they abandon the idea.

Unsurprisingly, the leaders of the two countries found themselves staring into the nuclear abyss because of a violent act in Kashmir, a disputed territory that had already led to three conventional wars between the South Asian neighbors since 1947, the founding year of an independent India and Pakistan. As a result of the first of these, in 1947 and 1948, India acquired about half of Kashmir, with Pakistan getting a third and the rest occupied later by China.

Kashmir, the Root Cause of Enduring Enmity

The Kashmir dispute dates back to the time when the British-ruled Indian subcontinent was divided into Hindu-majority India and Muslim-majority Pakistan, and indirectly ruled princely states were given the option of joining either one. In October 1947, the Hindu maharaja of Muslim-majority Kashmir signed an “instrument of accession” with India after Muslim tribal raiders from Pakistan invaded his realm. The speedy arrival of Indian troops deprived the invaders of the capital city, Srinagar. Later, they battled regular Pakistani troops until a United Nations-brokered ceasefire on January 1, 1949. The accession document required that Kashmiris be given an opportunity to choose between India and Pakistan once peace was restored. This has not happened yet, and there is no credible prospect of it taking place.

Fearing a defeat in such a plebiscite, given the pro-Pakistani sentiments prevalent among the territory’s majority Muslims, India found several ways of blocking U.N. attempts to hold one. New Delhi then conferred a special status on the part of Kashmir it controlled and held elections for its legislature, while Pakistan watched with trepidation.

In September 1965, when its verbal protests proved futile, Pakistan attempted to change the status quo through military force. It launched a war that once again ended in stalemate and another U.N.-sponsored truce, which required the warring parties to return to the 1949 ceasefire line.

A third armed conflict between the two neighbors followed in December 1971, resulting in Pakistan’s loss of its eastern wing, which became an independent Bangladesh. Soon after, Indian Prime Minister Indira Gandhi tried to convince Pakistani President Zulfikar Ali Bhutto to agree to transform the 460-mile-long ceasefire line in Kashmir (renamed the “Line of Control”) into an international border. Unwilling to give up his country’s demand for a plebiscite in all of pre-1947 Kashmir, Bhutto refused. So the stalemate continued.

During the military rule of General Zia al Haq (1977-1988), Pakistan initiated a policy of bleeding India with a thousand cuts by sponsoring terrorist actions both inside Indian Kashmir and elsewhere in the country. Delhi responded by bolstering its military presence in Kashmir and brutally repressing those of its inhabitants demanding a plebiscite or advocating separation from India, committing in the process large-scale human rights violations.

In order to stop infiltration by militants from Pakistani Kashmir, India built a double barrier of fencing, 12 feet high, with the space between planted with hundreds of land mines. Later, that barrier would be equipped as well with thermal imaging devices and motion sensors to help detect infiltrators. By the late 1990s, 400,000 Indian soldiers stood on one side of the Line of Control and 300,000 Pakistani troops on the other. No wonder President Bill Clinton called that border “the most dangerous place in the world.” Today, with the addition of tactical nuclear weapons to the mix, it is far more so.

Kashmir, the Toxic Bone of Contention

Even before Pakistan’s introduction of tactical nukes, tensions between the two neighbors were perilously high.  Then suddenly, at the end of 2015, a flicker of a chance for the normalization of relations appeared. Indian Prime Minister Narendra Modi had a cordial meeting with his Pakistani counterpart, Nawaz Sharif, on the latter’s birthday, December 25th, in Lahore. But that hope was dashed when, in the early hours of January 2nd, four heavily armed Pakistani terrorists managed to cross the international border in Punjab, wearing Indian Army fatigues, and attacked an air force base in Pathankot. A daylong gun battle followed. By the time order was restored on January 5th, all the terrorists were dead, but so were seven Indian security personnel and one civilian. The United Jihad Council, an umbrella organization of separatist militant groups in Kashmir, claimed credit for the attack. The Indian government, however, insisted that the operation had been masterminded by Masood Azhar, leader of the Pakistan-based Jaish-e Muhammad (Army of Muhammad).

As before, Kashmir was the motivating force for the anti-India militants. Mercifully, the attack in Pathankot turned out to be a minor event, insufficient to heighten the prospect of war, though it dissipated any goodwill generated by the Modi-Sharif meeting.

There is little doubt, however, that a repeat of the atrocity committed by Pakistani infiltrators in Mumbai in November 2008, leading to the death of 166 people and the burning of that city’s landmark Taj Mahal Hotel, could have consequences that would be dire indeed. The Indian doctrine calling for massive retaliation in response to a successful terrorist strike on that scale could mean the almost instantaneous implementation of its Cold Start strategy. That, in turn, would likely lead to Pakistan’s use of tactical nuclear weapons, thus opening up the real possibility of a full-blown nuclear holocaust with global consequences.

Beyond the long-running Kashmiri conundrum lies Pakistan’s primal fear of the much larger and more powerful India, and its loathing of India’s ambition to become the hegemonic power in South Asia. Irrespective of party labels, governments in New Delhi have pursued a muscular path on national security aimed at bolstering the country’s defense profile.

Overall, Indian leaders are resolved to prove that their country is entering what they fondly call “the age of aspiration.” When, in July 2009, Prime Minister Manmohan Singh officially launched a domestically built nuclear-powered ballistic missile submarine, the INS Arihant, it was hailed as a dramatic step in that direction.  According to defense experts, that vessel was the first of its kind not to be built by one of the five recognized nuclear powers: the United States, Britain, China, France, and Russia.

India’s Two Secret Nuclear Sites

On the nuclear front in India, there was more to come. Last December, an investigation by the Washington-based Center for Public Integrity revealed that the Indian government was investing $100 million to build a top secret nuclear city spread over 13 square miles near the village of Challakere, 160 miles north of the southern city of Mysore. When completed, possibly as early as 2017, it will be “the subcontinent’s largest military-run complex of nuclear centrifuges, atomic-research laboratories, and weapons- and aircraft-testing facilities.” Among the project’s aims is to expand the government’s nuclear research, to produce fuel for the country’s nuclear reactors, and to help power its expanding fleet of nuclear submarines. It will be protected by a ring of garrisons, making the site a virtual military facility.

Another secret project, the Indian Rare Materials Plant, near Mysore is already in operation. It is a new nuclear enrichment complex that is feeding the country’s nuclear weapons programs, while laying the foundation for an ambitious project to create an arsenal of hydrogen (thermonuclear) bombs.

The overarching aim of these projects is to give India an extra stockpile of enriched uranium fuel that could be used in such future bombs. As a military site, the project at Challakere will not be open to inspection by the International Atomic Energy Agency or by Washington, since India’s 2008 nuclear agreement with the U.S. excludes access to military-related facilities. These enterprises are directed by the office of the prime minister, who is charged with overseeing all atomic energy projects. India’s Atomic Energy Act and its Official Secrets Act place everything connected to the country’s nuclear program under wraps. In the past, those who tried to obtain a fuller picture of the Indian arsenal and the facilities that feed it have been bludgeoned into silence.

Little wonder then that a senior White House official was recently quoted as saying, “Even for us, details of the Indian program are always sketchy and hard facts thin on the ground.” He added, “Mysore is being constantly monitored, and we are constantly monitoring progress in Challakere.” However, according to Gary Samore, a former Obama administration coordinator for arms control and weapons of mass destruction, “India intends to build thermonuclear weapons as part of its strategic deterrent against China. It is unclear when India will realize this goal of a larger and more powerful arsenal, but they will.”

Once manufactured, there is nothing to stop India from deploying such weapons against Pakistan. “India is now developing very big bombs, hydrogen bombs that are city-busters,” said Pervez Hoodbhoy, a leading Pakistani nuclear and national security analyst. “It is not interested in… nuclear weapons for use on the battlefield; it is developing nuclear weapons for eliminating population centers.”

In other words, as the Kashmir dispute continues to fester, inducing periodic terrorist attacks on India and fueling the competition between New Delhi and Islamabad to outpace each other in the variety and size of their nuclear arsenals, the peril to South Asia in particular and the world at large only grows.


Dilip Hiro, a TomDispatch regular, is the author, among many other works, of The Longest August: The Unflinching Rivalry between India and Pakistan (Nation Books). His 36th and latest book is The Age of Aspiration: Money, Power, and Conflict in Globalizing India (The New Press).

Follow TomDispatch on Twitter and join us on Facebook. Check out the newest Dispatch Book, Nick Turse’s Tomorrow’s Battlefield: U.S. Proxy Wars and Secret Ops in Africa, and Tom Engelhardt’s latest book, Shadow Government: Surveillance, Secret Wars, and a Global Security State in a Single-Superpower World.

Copyright 2016 Dilip Hiro


Robert Reich: “Stop Voter Suppression”

By Robert Reich
Reprinted from Robert Reich’s blog at robertreich.org

A crowning achievement of the historic March on Washington, where Dr. Martin Luther King gave his “I have a dream” speech, was pushing through the landmark Voting Rights Act of 1965. Recognizing the history of racist attempts to prevent Black people from voting, that federal law forced a number of southern states and districts to adhere to federal guidelines allowing citizens access to the polls.

But in 2013 the Supreme Court effectively gutted many of these protections. As a result, states are finding new ways to stop more and more people—especially African-Americans and other likely Democratic voters—from reaching the polls.

Several states are requiring government-issued photo IDs — like driver’s licenses — to vote, even though there’s no evidence of the voter fraud this is supposed to prevent. But there’s plenty of evidence that these ID measures depress voting, especially among communities of color, young voters, and lower-income Americans.

Alabama, after requiring photo IDs, has practically closed driver’s license offices in counties with large percentages of black voters. Wisconsin requires a government-issued photo ID but hasn’t provided any funding to explain to prospective voters how to secure those IDs.

Other states are reducing opportunities for early voting.

And several state legislatures—not just in the South—are gerrymandering districts to reduce the political power of people of color and Democrats, and thereby guarantee Republican control in Congress.

We need to move to the next stage of voting rights—a new Voting Rights Act—that renews the law that was effectively repealed by the conservative activists on the Supreme Court.

That new Voting Rights Act should also set minimum national standards—providing automatic voter registration when people get driver’s licenses, allowing at least 2 weeks of early voting, and taking districting away from the politicians and putting it under independent commissions.

Voting isn’t a privilege. It’s a right. And that right is too important to be left to partisan politics.  We must not allow anyone’s votes to be taken away.


ROBERT B. REICH is Chancellor’s Professor of Public Policy at the University of California at Berkeley and Senior Fellow at the Blum Center for Developing Economies. He served as Secretary of Labor in the Clinton administration, for which Time Magazine named him one of the ten most effective cabinet secretaries of the twentieth century. He has written fourteen books, including the best sellers “Aftershock,” “The Work of Nations,” and “Beyond Outrage,” and, most recently, “Saving Capitalism.” He is also a founding editor of the American Prospect magazine, chairman of Common Cause, a member of the American Academy of Arts and Sciences, and co-creator of the award-winning documentary, INEQUALITY FOR ALL.


Eat at Summer Thyme’s Restaurant and Money is Donated to Wolf Creek Community Alliance

By Don Pelton (Board Member of the Wolf Creek Community Alliance)

Dear Friends of Wolf Creek:

Here’s a great way to have some fun and trigger a contribution from Summer Thyme’s Restaurant to the Wolf Creek Community Alliance (without spending an extra cent of your own)!

Drop by Summer Thyme’s Restaurant at 231 Colfax Avenue in Grass Valley and have something to eat. Ask the cashier for some wooden nickels when you order (you’ll get one “nickel” for every $5 you spend) and drop them in the donation box below the Wolf Creek Community Alliance poster (see picture below). Based on the number of “nickels” in the donation box, Summer Thyme’s will donate a percentage of their profits to WCCA.

This offer is good for every visit you make to Summer Thyme’s from now through June.

Wooden Nickels (in front of register):

Poster:

Yum!

Here’s the Complete Summer Thyme’s Menu:  http://www.summerthymes.com/menus/


A Fukushima on the Hudson? The Growing Dangers of Indian Point

By Ellen Cantarow and Alison Rose Levy
Reprinted with permission from Tomdispatch.com

It was a beautiful spring day and, in the control room of the nuclear reactor, the workers decided to deactivate the security system for a systems test. As they started to do so, however, the floor of the reactor began to tremble. Suddenly, its 1,200-ton cover blasted flames into the air. Tons of radioactive uranium and graphite shot 1,000 meters into the sky and began drifting to the ground for miles around the nuclear plant. The first firemen on the scene brought tons of water that would prove useless when it came to dousing the fires. The workers wore no protective clothing, and eight of them would die that night — dozens more in the months to follow.

It was April 26, 1986, and this was just the start of the meltdown at the Chernobyl nuclear power plant in Ukraine, the worst nuclear accident of its kind in history. Chernobyl is ranked as a “level 7 event,” the maximum danger classification on the International Nuclear and Radiological Event Scale. It would spew out more radioactivity than 100 Hiroshima bombs. Of the 350,000 workers involved in cleanup operations, according to the World Health Organization, 240,000 would be exposed to the highest levels of radiation in a 30-mile zone around the plant. It is uncertain exactly how many cancer deaths have resulted since. The International Atomic Energy Agency’s estimate of the expected death toll from Chernobyl was 4,000. A 2006 Greenpeace report challenged that figure, suggesting that 16,000 people had already died due to the accident and predicting another 140,000 deaths in Ukraine and Belarus still to come. A significant increase in thyroid cancers in children, a very rare disease for them, has been charted in the region — nearly 7,000 cases by 2005 in Belarus, Russia, and Ukraine.

In March 2011, 25 years after the Chernobyl catastrophe, damage caused by a tsunami triggered by a massive 9.0 magnitude earthquake led to the meltdown of three reactors at a nuclear plant in Fukushima, Japan. Radioactive rain from the Fukushima accident fell as far away as Ireland.

In 2008, the International Atomic Energy Agency had, in fact, warned the Japanese government that none of the country’s nuclear power plants could withstand powerful earthquakes. That included the Fukushima plant, which had been built to take only a 7.0 magnitude event. No attention was paid at the time. After the disaster, the plant’s owner, Tokyo Electric Power, rehired Shaw Construction, which had designed and built the plant in the first place, to rebuild it.

Near Misses, Radioactive Leaks, and Flooding

In both Chernobyl and Fukushima, areas around the devastated plants were made uninhabitable for the foreseeable future. In neither place, before disaster began to unfold, was anyone expecting it, and few imagined that such a catastrophe was possible. In the United States, too, despite the knowledge since 1945 that nuclear power, at war or in peacetime, holds dangers of a stunning sort, the general attitude remains: it can’t happen here — nowhere more dangerously in recent years than on the banks of New York’s Hudson River, an area that could face a nuclear peril endangering a population of nearly 20 million.

As the Fukushima tragedy struck, President Obama assured Americans that U.S. nuclear plants were closely monitored and built to withstand earthquakes. That statement covered one of the oldest plants in the country, the Indian Point Energy Center (IPEC) in Westchester, New York, first opened in 1962. One of 61 commercial nuclear plants in the country, it has two reactors that generate electricity for homes across New York City and Westchester County. It is located in the sixth most densely populated urban area in the world, the New York metropolitan region, just 30 miles north of Manhattan Island and the planet’s most economically powerful city.

The plant sits astride two seismic faults, which has prompted those opposing its continued operation to call for a detailed analysis of its capacity to resist an earthquake. In addition, a long series of accidents and ongoing hazards has only increased the potential for catastrophe. According to a report by the Natural Resources Defense Council (NRDC), if a nuclear disaster of a Fukushima magnitude were to strike Indian Point, it would necessitate the evacuation of at least 5.6 million people. In 2003, the existing evacuation plan for the area was deemed inadequate in a report by James Lee Witt, former head of the Federal Emergency Management Agency.

American officials have urged U.S. citizens to stay 50 miles away from the Fukushima plant. Such a 50-mile circle around IPEC would stretch past Kingston in Ulster County to the north, past Bayonne and Jersey City to the south, almost to New Haven, Connecticut, to the east, and into Pennsylvania to the west. It would include all of New York City except for Staten Island and all of Fairfield, Connecticut. “Many scholars have already argued that any evacuation plans shouldn’t be called plans, but rather ‘fantasy documents,’” Daniel Aldrich, a professor of political science at Purdue University, told the New York Times. 

Paul Blanch, a nuclear engineer who worked in the industry for 40 years as well as with the Nuclear Regulatory Commission (NRC), thinks a worst-case accident at Indian Point could make the region, including parts of Connecticut, uninhabitable for generations.

According to a report from the Indian Point Safe Energy Coalition, there were 23 reported problems at the plant from its inception to 2005, including steam generator tube ruptures, reactor containment flooding, transformer fires, the failure of backup power for emergency sirens, and leaks of radioactive water laced with tritium. In the latest tritium leak, reported only last month, an outflow of the radioactive isotope from the plant has infused both local groundwater and the Hudson River. (Other U.S. nuclear plants have had their share of tritium leaks as well, including Turkey Point nuclear plant in Florida where such a leak is at the moment threatening drinking water wells.)

Experts agree that although present levels of tritium in groundwater near the plant are “alarming,” the tritium in the river will not be considered harmful until it reaches a far greater concentration of 120,000 picocuries per liter of water. (A picocurie, one-trillionth of a curie, is a standard unit of measurement for radioactivity.) Tritium is the lightest radioactive substance to leak from Indian Point, but according to an assessment by the New York Department of State, other potentially more dangerous radioactive elements like strontium-90, cesium-137, cobalt-60, and nickel-63 are also escaping the plant and entering both the groundwater and the river.

Representatives of Entergy Corporation, which owns the Indian Point plant, report that they don’t know when the present leak began or what its source might be. “No one has made a statement as to when the leak started,” wrote Paul Blanch in an email to us. “It could have started two years ago.” Nor does anyone seem to know where the leak is, how much radioactive matter is leaking, or how it can be stopped. The longer the leak persists, the greater the likelihood of isotopes more potent than tritium contaminating local drinking water.

According to David Lochbaum, director of the Nuclear Safety Project for the Union of Concerned Scientists (UCS) and once a trainer for NRC inspectors, the danger of flooding at the reactor should be an even greater focus of concern than radioactive substance outflows, since it could result in a reactor core meltdown. Yet despite repeated calls for Indian Point’s shutdown from the early 1970s on, it keeps operating.

On April 2, 2000, the NRC rated one of Indian Point’s two reactors the most troubled in the country, and it has been closed for lengthy periods because of system failures of various sorts. This, it turns out, is typical of Entergy-owned reactors. There were 10 “near-miss” incidents at U.S. nuclear reactors last year, a majority of them at three Entergy plants, according to a UCS report on nuclear plant safety. A near-miss incident is an event or condition that could increase the chance of reactor core damage by a factor of 10 or more.  In response, the Nuclear Regulatory Commission must send an inspection team to investigate.

The number of such incidents has declined since UCS initiated its annual review in 2010, “overall, a positive trend,” according to report author Lochbaum. “Five years ago, there were nearly twice as many near misses. That said, the nuclear industry is only as good as its worst plant owner. The NRC needs to find out why Entergy plants are experiencing so many potentially serious problems.” Upstate New York’s Ginna plant, he adds, has been operating as long as Indian Point, but with only two “events” in its history. At Indian Point “there’s a major event every two to three years.”

What troubles Lochbaum more than anything else is Indian Point’s vulnerability to flooding. “There was a problem in May 2015 where a transformer exploded,” he told us. “There was an automatic fire sprinkler system installed to put this out. But it ended up flooding the building adjacent to where the explosion had taken place. Fortunately a worker noticed that an inch or two of water had accumulated. If the room had flooded up to five inches, all the power in the plant would have been lost. It would have plunged unit 3 into a ‘station blackout.’”

This might indeed have led to some kind of Fukushima-on-the-Hudson situation.  In Fukushima, after the earthquake wiped out the normal power supply and tsunami floodwaters took away the backup supply, workers were unable to get cooling water into the reactor cores and three of the plant’s six reactors melted down.

In 2007, when Indian Point’s plant owner applied to the NRC for a 20-year extension of the plant’s operating license, it was found that a flood alarm could be installed in the room in question for about $200,000. As Lochbaum explains, “The owner determined it was cost-beneficial, that if they installed this flood alarm… it [would reduce] the risk of core meltdown by 20%, and [reduce] the amount of radiation that people on the plant could be exposed to by about 40%, at a cost of about two cents per person for the 20 million people living within 50 miles of the plant.” But nine years later, he told us, that flood alarm has still not been installed.

Potential Pipeline Explosions

As if none of this were enough, a new set of dangers to Indian Point has arisen in recent years due to a high-pressure natural gas pipeline currently being built by Spectra Energy. Dubbed the Algonquin Incremental Market (AIM) pipeline, it is to carry fracked natural gas from the Marcellus Shale formation underlying New York and adjacent states to the Canadian border. At 42 inches in diameter, this pipeline is the biggest that can at present be built — and here’s the catch: AIM is slated to pass within 150 feet of the plant’s reactors.

A former Spectra worker hired to help oversee safety during the pipeline’s construction told a reporter that the company had taken dangerous shortcuts in its rush to begin the project. He had witnessed, he said, “at least two dozen” serious safety violations and transgressions.

Taking shortcuts in pipeline construction could, in the end, prove a risky business. Pipeline ruptures are the commonest cause of gas explosions like the one that, in March 2014 in Manhattan’s East Harlem, killed eight, injured 70, and leveled two apartment buildings. Robert Miller, chairman of the National Association of Pipeline Safety Representatives, attributed the rising rates of such incidents in newly constructed pipelines to “poor construction practices or maybe not enough quality control, quality assurance programs out there to catch these problems before those pipelines go into service.”

In January 2015, the National Transportation Safety Board published a study documenting that gas accidents in “high-consequence” areas (where there are a lot of people and buildings) have been on the rise. With the New York metropolitan area so close to Indian Point, it seems odd indeed to independent experts that the nuclear plant with the sorriest safety history in the country has been judged safe enough for a high-pressure gas pipeline to be run right by it.

A hazards assessment replete with errors was the basis for the go-ahead. Richard B. Kuprewicz, a pipeline infrastructure expert and incident investigator with more than 40 years of energy industry experience, has called that risk assessment “seriously deficient and inadequate.”

At another nuclear plant subsequently shut down, as David Lochbaum points out, a rigorous risk analysis was conducted for possible explosions based on a worst-case scenario. (“I couldn’t think of any scenario that would be worse than what they presumed.”) At Indian Point, the risk analysis was, however, done on a best-case basis. Among other things, it assumed that any pipeline leak around the plant could be stopped in less than three minutes — an unlikelihood at best. “It’s night and day. They did a very conservative analysis for [the other plant] and a very cavalier best-case scenario for Indian Point… I don’t know why they opted for [this] drive-by analysis.”

Tombstone Regulation

Of all the contaminants released in this industrial world, radioactivity may, in a sense, be the least visible and least imaginable, even if the most potentially devastating, were something to go wrong. As a result, the dangers of the “peaceful” atom have often proved hard to absorb before disaster strikes — as at the Three Mile Island reactor near Middletown, Pennsylvania, on March 28, 1979. Even when such a power plant sits near a highway or a community, it’s usually a reality to which people pay scant attention, in part because nuclear science is alien territory. This is why safety at nuclear power plants has been something citizens have relied on the government for.

The history of Indian Point, however, offers a grim reminder that the government agencies expected to protect citizens from disaster aren’t doing a particularly good job of it. Over the past several years, for instance, residents in the path of the AIM pipeline project have begun accusing the Federal Energy Regulatory Commission (FERC) of overwhelming bias in the industry’s favor. As FERC has a corner on oversight and approval of all pipeline construction, this is alarming. Its stamp of approval on a pipeline can only be contested via appeals that lead directly back to FERC itself, as the Natural Gas Act of 1938 gave the agency sole discretion over pipeline construction in the U.S.  Ever since then, its officials have approved pipelines of every sort almost without exception. Worse yet, at Indian Point, the Nuclear Regulatory Commission joined FERC in green-lighting AIM.

During the two-and-a-half-year period in which the pipeline was approved and construction began, the mainstream media virtually ignored the project and its potential dangers. Only this February, when New York Governor Andrew Cuomo, who has been opposed to the relicensing of Indian Point, first raised concerns about the dangers of the pipeline, did the New York Times, the paper of record for the New York metropolitan area, finally publish a piece on AIM.  So it fell to a grassroots movement of local activists to bring AIM’s dangers to public attention. Its growing resistance to a pipeline that could precipitate just about anything up to a Fukushima-on-the-Hudson-style event evidently led Governor Cuomo to urge FERC to postpone construction until a safety review could be completed, a request that the agency rejected. In February, alarmed by reports of tritium leaking from the plant, the governor also directed the state’s departments of environmental conservation and health to investigate the likely duration and consequences of such a leak and its potential impacts on public health.

According to Paul Blanch, the risk of a pipeline explosion in proximity to Indian Point is one in 1,000, odds he believes are too high given what’s potentially at stake. (He considers a one-in-a-million chance acceptable.) “I’ve had over 45 years of nuclear experience and [experience in] safety issues. I have never seen [a situation] that essentially puts 20 million residents at risk, plus the entire economics of the United States by making a large area surrounding Indian Point uninhabitable for generations. I’m not an alarmist and haven’t been known as an alarmist, but the possibility of a gas line interacting with a plant could easily cause a Fukushima type of release.”

According to Blanch, attempts to regulate nuclear plants after a Fukushima- or Chernobyl-type catastrophe are known in the trade as “tombstone regulation.” Nobody, of course, should ever want to experience such a situation on the Hudson, or have America’s own mini-Hiroshima seven decades late, or find literal tombstones cropping up in the New York metropolitan area due to a nuclear disaster. One hope for preventing all of this and ensuring protection for New York’s citizenry: the continuing growth of impressive citizen pressure and increasing public alarm around both the pipeline and Indian Point. It gives new meaning to the phrase “power to the people.”


 

TomDispatch regular Ellen Cantarow reported on Israel and the West Bank from 1979 to 2009 for the Village Voice, Mother Jones, Inquiry, and Grand Street, among other publications. For the past five years she has been writing about the environmental ravages of the oil and gas industries.

Alison Rose Levy is a New York-based journalist who covers the nexus of health, science, the environment, and public policy. She has reported on fracking, pipelines, the Trans-Pacific Partnership, chemical pollution, and the health impacts of industrial activity for the Huffington Post, Alternet, Truthdig, and EcoWatch.

Copyright 2016 Ellen Cantarow and Alison Rose Levy


Thomas Frank: How the Democrats Created a “Liberalism of the Rich”

By Thomas Frank, author of the just-published Listen, Liberal, or What Ever Happened to the Party of the People? (Metropolitan Books) from which this essay is adapted. He has also written Pity the Billionaire, The Wrecking Crew, and What’s the Matter With Kansas? among other works. He is the founding editor of The Baffler. Reprinted with permission from Tomdispatch.com

When you press Democrats on their uninspiring deeds — their lousy free trade deals, for example, or their flaccid response to Wall Street misbehavior — when you press them on any of these things, they automatically reply that this is the best anyone could have done. After all, they had to deal with those awful Republicans, and those awful Republicans wouldn’t let the really good stuff get through. They filibustered in the Senate. They gerrymandered the congressional districts. And besides, change takes a long time. Surely you don’t think the tepid-to-lukewarm things Bill Clinton and Barack Obama have done in Washington really represent the fiery Democratic soul.

So let’s go to a place that does. Let’s choose a locale where Democratic rule is virtually unopposed, a place where Republican obstruction and sabotage can’t taint the experiment.

Let’s go to Boston, Massachusetts, the spiritual homeland of the professional class and a place where the ideology of modern liberalism has been permitted to grow and flourish without challenge or restraint. As the seat of American higher learning, Boston unsurprisingly anchors one of the most Democratic of states, a place where elected Republicans (like the new governor) are highly unusual. This is the city that virtually invented the blue-state economic model, in which prosperity arises from higher education and the knowledge-based industries that surround it.

The coming of post-industrial society has treated this most ancient of American cities extremely well. Massachusetts routinely occupies the number one spot on the State New Economy Index, a measure of how “knowledge-based, globalized, entrepreneurial, IT-driven, and innovation-based” a place happens to be. Boston ranks high on many of Richard Florida’s statistical indices of approbation — in 2003, it was number one on the “creative class index,” number three in innovation and in high tech — and his many books marvel at the city’s concentration of venture capital, its allure to young people, or the time it enticed some firm away from some unenlightened locale in the hinterlands.

Boston’s knowledge economy is the best, and it is the oldest. Boston’s metro area encompasses some 85 private colleges and universities, the greatest concentration of higher-ed institutions in the country — probably in the world. The region has all the ancillary advantages to show for this: a highly educated population, an unusually large number of patents, and more Nobel laureates than any other city in the country.

The city’s Route 128 corridor was the original model for a suburban tech district, lined ever since it was built with defense contractors and computer manufacturers. The suburbs situated along this golden thoroughfare are among the wealthiest municipalities in the nation, populated by engineers, lawyers, and aerospace workers. Their public schools are excellent, their downtowns are cute, and back in the seventies their socially enlightened residents were the prototype for the figure of the “suburban liberal.”

Another prototype: the Massachusetts Institute of Technology, situated in Cambridge, is where our modern conception of the university as an incubator for business enterprises began. According to a report on MIT’s achievements in this category, the school’s alumni have started nearly 26,000 companies over the years, including Intel, Hewlett Packard, and Qualcomm. If you were to take those 26,000 companies as a separate nation, the report tells us, its economy would be one of the most productive in the world.

Then there are Boston’s many biotech and pharmaceutical concerns, grouped together in what is known as the “life sciences super cluster,” which, properly understood, is part of an “ecosystem” in which PhDs can “partner” with venture capitalists and in which big pharmaceutical firms can acquire small ones. While other industries shrivel, the Boston super cluster grows, with the life-sciences professionals of the world lighting out for the Athens of America and the massive new “innovation centers” shoehorning themselves one after the other into the crowded academic suburb of Cambridge.

To think about it slightly more critically, Boston is the headquarters for two industries that are steadily bankrupting middle America: big learning and big medicine, both of them imposing costs that everyone else is basically required to pay and which increase at a far more rapid pace than wages or inflation. A thousand dollars a pill, 30 grand a semester: the debts that are gradually choking the life out of people where you live are what has made this city so very rich.

Perhaps it makes sense, then, that another category in which Massachusetts ranks highly is inequality. Once the visitor leaves the brainy bustle of Boston, he discovers that this state is filled with wreckage — with former manufacturing towns in which workers watch their way of life draining away, and with cities that are little more than warehouses for people on Medicare. According to one survey, Massachusetts has the eighth-worst rate of income inequality among the states; by another metric it ranks fourth. However you choose to measure the diverging fortunes of the country’s top 10% and the rest, Massachusetts always seems to finish among the nation’s most unequal places.

Seething City on a Cliff

You can see what I mean when you visit Fall River, an old mill town 50 miles south of Boston. Median household income in that city is $33,000, among the lowest in the state; unemployment is among the highest, 15% in March 2014, nearly five years after the recession ended. Twenty-three percent of Fall River’s inhabitants live in poverty. The city lost its many fabric-making concerns decades ago and with them it lost its reason for being. People have been deserting the place for decades.

Many of the empty factories in which their ancestors worked are still standing, however. Solid nineteenth-century structures of granite or brick, these huge boxes dominate the city visually — there always seems to be one or two of them in the vista, contrasting painfully with whatever colorful plastic fast-food joint has been slapped up next door.

Most of the old factories are boarded up, unmistakable emblems of hopelessness right up to the roof. But the ones that have been successfully repurposed are in some ways even worse, filled as they often are with enterprises offering cheap suits or help with drug addiction. A clinic in the hulk of one abandoned mill has a sign on the window reading simply “Cancer & Blood.”

The effect of all this is to remind you with every prospect that this is a place and a way of life from which the politicians have withdrawn their blessing. Like so many other American scenes, this one is the product of decades of deindustrialization, engineered by Republicans and rationalized by Democrats. This is a place where affluence never returns — not because affluence for Fall River is impossible or unimaginable, but because our country’s leaders have blandly accepted a social order that constantly bids down the wages of people like these while bidding up the rewards for innovators, creatives, and professionals.

Even the city’s one real hope for new employment opportunities — an Amazon warehouse that is now in the planning stages — will serve to lock in this relationship. If all goes according to plan, and if Amazon sticks to the practices it has pioneered elsewhere, people from Fall River will one day get to do exhausting work with few benefits while being electronically monitored for efficiency, in order to save the affluent customers of nearby Boston a few pennies when they buy books or electronics.

But that is all in the future. These days, the local newspaper publishes an endless stream of stories about drug arrests, shootings, drunk-driving crashes, the stupidity of local politicians, and the lamentable surplus of “affordable housing.” The town is up to its eyeballs in wrathful bitterness against public workers. As in: Why do they deserve a decent life when the rest of us have no chance at all? It’s every man for himself here in a “competition for crumbs,” as a Fall River friend puts it.

The Great Entrepreneurial Awakening

If Fall River is pocked with empty mills, the streets of Boston are dotted with facilities intended to make innovation and entrepreneurship easy and convenient. I was surprised to discover, during the time I spent exploring the city’s political landscape, that Boston boasts a full-blown Innovation District, a disused industrial neighborhood that has actually been zoned creative — a projection of the post-industrial blue-state ideal onto the urban grid itself. The heart of the neighborhood is a building called “District Hall” — “Boston’s New Home for Innovation” — which appeared to me to be a glorified multipurpose room, enclosed in a sharply angular façade, and sharing a roof with a restaurant that offers “inventive cuisine for innovative people.” The Wi-Fi was free, the screens on the walls displayed famous quotations about creativity, and the walls themselves were covered with a high-gloss finish meant to be written on with dry-erase markers; but otherwise it was not much different from an ordinary public library. Aside from not having anything to read, that is.

This was my introduction to the innovation infrastructure of the city, much of it built up by entrepreneurs shrewdly angling to grab a piece of the entrepreneur craze. There are “co-working” spaces, shared offices for startups that can’t afford the real thing. There are startup “incubators” and startup “accelerators,” which aim to ease the innovator’s eternal struggle with an uncaring public: the Startup Institute, for example, and the famous MassChallenge, the “World’s Largest Startup Accelerator,” which runs an annual competition for new companies and hands out prizes at the end.

And then there are the innovation Democrats, led by former Governor Deval Patrick, who presided over the Massachusetts government from 2007 to 2015. He is typical of liberal-class leaders; you might even say he is their most successful exemplar. Everyone seems to like him, even his opponents. He is a witty and affable public speaker as well as a man of competence, a highly educated technocrat who is comfortable in corporate surroundings. Thanks to his upbringing in a Chicago housing project, he also understands the plight of the poor, and (perhaps best of all) he is an honest politician in a state accustomed to wide-open corruption. Patrick was also the first black governor of Massachusetts and, in some ways, an ideal Democrat for the era of Barack Obama — who, as it happens, is one of his closest political allies.

As governor, Patrick became a kind of missionary for the innovation cult. “The Massachusetts economy is an innovation economy,” he liked to declare, and he made similar comments countless times, slightly varying the order of the optimistic keywords: “Innovation is a centerpiece of the Massachusetts economy,” et cetera. The governor opened “innovation schools,” a species of ramped-up charter school. He signed the “Social Innovation Compact,” which had something to do with meeting “the private sector’s need for skilled entry-level professional talent.” In a 2009 speech called “The Innovation Economy,” Patrick elaborated the political theory of innovation in greater detail, telling an audience of corporate types in Silicon Valley about Massachusetts’s “high concentration of brainpower” and “world-class” universities, and how “we in government are actively partnering with the private sector and the universities, to strengthen our innovation industries.”

What did all of this inno-talk mean? Much of the time, it was pure applesauce — standard-issue platitudes to be rolled out every time some pharmaceutical company opened an office building somewhere in the state.

On some occasions, Patrick’s favorite buzzword came with a gigantic price tag, like the billion dollars in subsidies and tax breaks that the governor authorized in 2008 to encourage pharmaceutical and biotech companies to do business in Massachusetts. On still other occasions, favoring inno has meant bulldozing the people in its path — for instance, the taxi drivers whose livelihoods are being usurped by ridesharing apps like Uber. When these workers staged a variety of protests in the Boston area, Patrick intervened decisively on the side of the distant software company. Apparently convenience for the people who ride in taxis was more important than good pay for people who drive those taxis. It probably didn’t hurt that Uber had hired a former Patrick aide as a lobbyist, but the real point was, of course, innovation: Uber was the future, the taxi drivers were the past, and the path for Massachusetts was obvious.

A short while later, Patrick became something of an innovator himself. After his time as governor came to an end last year, he won a job as a managing director of Bain Capital, the private equity firm that was founded by his predecessor Mitt Romney — and that had been so powerfully denounced by Democrats during the 2012 election. Patrick spoke about the job like it was just another startup: “It was a happy and timely coincidence I was interested in building a business that Bain was also interested in building,” he told the Wall Street Journal. Romney reportedly phoned him with congratulations.

Entrepreneurs First

At a 2014 celebration of Governor Patrick’s innovation leadership, Google’s Eric Schmidt announced that “if you want to solve the economic problems of the U.S., create more entrepreneurs.” That sort of sums up the ideology in this corporate commonwealth: Entrepreneurs first. But how has such a doctrine become holy writ in a party dedicated to the welfare of the common man? And how has all this come to pass in the liberal state of Massachusetts?

The answer is that we’ve got the wrong liberalism. The kind of liberalism that has dominated Massachusetts for the last few decades isn’t the stuff of Franklin Roosevelt or the United Auto Workers; it’s the Route 128/suburban-professionals variety. (Senator Elizabeth Warren is the great exception to this rule.) Professional-class liberals aren’t really alarmed by oversized rewards for society’s winners. On the contrary, this seems natural to them — because they are society’s winners. The liberalism of professionals just does not extend to matters of inequality; this is the area where soft hearts abruptly turn hard.

Innovation liberalism is “a liberalism of the rich,” to use the straightforward phrase of local labor leader Harris Gruman. This doctrine has no patience with the idea that everyone should share in society’s wealth. What Massachusetts liberals pine for, by and large, is a more perfect meritocracy — a system where the essential thing is to ensure that the truly talented get into the right schools and then get to rise through the ranks of society. Unfortunately, however, as the blue-state model makes painfully clear, there is no solidarity in a meritocracy. The ideology of educational achievement conveniently negates any esteem we might feel for the poorly graduated.

This is a curious phenomenon, is it not? A blue state where the Democrats maintain transparent connections to high finance and big pharma; where they have deliberately chosen distant software barons over working-class members of their own society; and where their chief economic proposals have to do with promoting “innovation,” a grand and promising idea that remains suspiciously vague. Nor can these innovation Democrats claim that their hands were forced by Republicans. They came up with this program all on their own.

Thomas Frank is the author of the just-published Listen, Liberal, or What Ever Happened to the Party of the People? (Metropolitan Books) from which this essay is adapted. He has also written Pity the Billionaire, The Wrecking Crew, and What’s the Matter With Kansas? among other works. He is the founding editor of The Baffler.

Copyright 2016 Thomas Frank


 
