Sunday, January 29, 2012

Kindle Fire - HT @bruces

What a Drag | Jared Bernstein

Jan 27, 2012

Here’s one reason we’re stuck in slow growth mode: the budget crunch among state and local governments.

[Figure: yearly percentage-point contribution of the state and local sector to real GDP growth, from BEA data]

The figure shows the yearly percentage point contribution to or subtraction from real GDP growth from the state and local sectors since the late 1980s. The trend bounces around but the recent cliff dive is evident. It’s also why we keep losing jobs in these sectors month after month.

Source: BEA

Unlike the feds, states have to balance their budgets every year, which means they either raise taxes or cut services. They haven’t done much on the tax side, so they’ve been laying off teachers, cops, maintenance workers; practically every month over the past few years we’ve been adding private sector jobs and shedding public sector jobs.

In a very real sense, what you have here is a microcosm of austerity measures at work in cities and towns across the country. Moreover, this drag on growth is avoidable. One of the most successful parts of the Recovery Act was state fiscal relief, as those dollars went directly to preserving state and local jobs. The American Jobs Act proposed $35 billion to build on that progress, resources that would have prevented hundreds of thousands of ongoing layoffs. But it languishes in the dysfunctional Congress and we’re left with the fiscal drag you see in the figure.

Update: A commenter notes that this figure is a good argument against a balanced budget amendment. Amen. As I wrote around the time of that debate–and this bad idea hasn’t gone away–think of a recession as all the states piled in a boat together along with the federal government and the boat is taking on water. There’s really only one institution in that boat with a bilge pump and that’s the feds. A BBA takes the pump away…then the boat sinks…

Saturday, January 28, 2012

Why Manufacturing Can’t Solve The Jobs Problem | Roya Wolverson | Time Business

January 24, 2012.

Here in snowy Davos, the topic of job creation has been about as popular as the passed canapés and free champagne. Not surprisingly, President Obama’s latest jobs proposals — a combination of taxing outsourcing corporations and reviving U.S. manufacturing — haven’t been as popular. It’s not hard to see why.

Among other things, Obama’s State of the Union speech Tuesday drove home the idea that U.S. industries need more protection. “Over a thousand Americans are working today because we stopped a surge in Chinese tires,” he said in his speech. That’s all fine and good if your goal is to hold on to U.S. manufacturing jobs. But it’s not going to solve the country’s overall unemployment problem. And in the end, it may cost the American consumer more than those jobs are worth.


For one thing, raising trade barriers on imported goods like tires makes tire-buying more expensive for American consumers, which, as Matthew Yglesias points out, only undermines those consumers’ ability to spend elsewhere. It also provokes countries like China to raise trade barriers on U.S. goods, which makes the job of increasing U.S. exports and export-related jobs even harder. Even if protections did save some manufacturing jobs, they wouldn’t be enough to move the needle on unemployment. It’s worth remembering that only 11% of U.S. jobs come from manufacturing, thanks to globalization, which has taken jobs abroad to lower-wage countries, and technological advances that have increased worker productivity. And that percentage has been declining steadily for several decades.

Losing jobs to globalization isn’t just an issue for the U.S. The trend has long been remaking workforces across the world. In China, higher wage demands have led many global companies to relocate their factories to countries with even cheaper labor, such as Vietnam and Malaysia. As economist Peter Diamond told me today, we have to get used to the fact that “globalization is a reality which isn’t going to stop.” And since we can’t reverse that process, the biggest gains in the job market can’t come from greater protections, but instead from gains in technology. Standard Chartered’s Gerald Lyons made the point today that, despite the enduring public perception that technology kills jobs, for every one job technology destroys, it creates 2.1 other jobs. Thus, instead of clinging to our past by supporting unproductive industries and erecting trade barriers, the U.S. has to find “the types of jobs that are fit for this country’s future,” argues

That’s also true because Americans have come to expect much higher living standards than what low-skilled manufacturing jobs can provide. Jonas Prising of ManpowerGroup stressed to me today that in the 1970s, roughly 25% of the workforce that made it into the middle class did so without a high school diploma. Today, only 10% of Americans who haven’t finished high school can say the same.


Of course, the problem with leaning on innovation to spur job creation is that it takes time, which will leave a lot of people in a lot of pain for a long time. What’s more, not everyone who’s unemployed can become an innovator, aka a high-skilled, high-paid engineer or math whiz. A lot of blue-collar workers will still need blue-collar jobs. That’s something the CEO set at Davos is having a hard time arguing around.

The labor leaders around here (I’ve come across two so far in this sea of corporate titans) are still pushing for some kind of industrial policy that follows the German model, one in which the public and private sector work together on finding more productive jobs for the middle-income worker. But that kind of policy, which has to be carefully tailored to the individual country, can take a decade or more to ramp up. In the meantime, the only solution is for the middle segment of the workforce to develop “middle skills,” says Prising: two-year degrees and retraining programs that help the low-skilled or mis-skilled worker fill some of the more productive jobs we know we’re going to need. And what are those jobs? Healthcare and infrastructure are two examples of sectors destined to grow.

Of course, infrastructure requires public investment, and in this political environment, more public spending is a nearly impossible sell. That’s why we need a total reshuffle of national priorities driven from the ground up, says Diamond. “We have a debt problem and not a debt crisis. And yet we’re acting like we have a job problem, and not a job crisis.” Continued political wrangling over the merits of public spending may make a crisis of both.

Friday, January 27, 2012

Diets based on a grain of truth | Sydney Morning Herald - Erik Jensen

January 28, 2012 - 3:00AM

More than anyone else, Igor Cetojevic is the man credited with revolutionising the world No. 1's tennis

''He's done a great job in changing my diet after we established I am allergic to some food ingredients, like gluten,'' Djokovic said of the diagnosis that turned around his career two years ago. ''It means I can't eat stuff like pizza, pasta and bread. I have lost some weight but it's only helped me because my movement is

The improvements to Djokovic's form are not in contention. But the explanation for the Serbian's success

''There are a whole lot of people who believe they are gluten intolerant, who don't have coeliac disease,'' says Professor Peter Gibson, professor of gastroenterology at the Alfred Hospital in Melbourne. ''This is very controversial because there is a quite big percentage - even up to 10 per cent - of people who are avoiding gluten because they think gluten is their problem. Naturopaths have put them on a diet, or they have done it

As yet unpublished research from Monash University, co-written by Professor Gibson, found only 14 per cent of people on gluten-free diets were put on the regime by a doctor. Almost half had simply decided to cut wheat and grains from their diet because they assumed they were intolerant. More than 60 per cent had not been tested

''It's a very emotive area,'' Gibson said. ''Fortunately, now there is a lot of work going on around the world trying to

The issue is a question of medical distinction: coeliac disease is an immunological complaint in which gluten interferes with the body's ability to absorb nutrients, identifiable by a blood test; gluten intolerance has no diagnostic

Improvements to a person's health without gluten can be explained several ways, by placebo effect or by the fact a gluten-free diet removes other agents from the body - most importantly the poorly absorbed carbohydrates known as

An Australian study published last year in the American Journal of Gastroenterology showed for the first time that gluten could trigger symptoms of fatigue in people without coeliac disease - making the argument for what doctors

''Gluten intolerance in individuals without coeliac disease is a controversial issue and has recently been described as the 'no man's land of gluten sensitivity','' the authors wrote. ''The evidence base for such claims is unfortunately very

Finland has done more than any other nation to identify its coeliacs. It has the most reliable data on increased prevalence: a doubling, from 1 per cent to 2 per cent between 1979 and 2000. Finns have been eating gluten free

It is accepted that coeliac disease affects about one in every 100 Australians - although there is no local research to confirm the Finnish findings. Some academics argue perceived increases in coeliac disease are heightened by

The increase in people identifying with non-coeliac gluten intolerance is more conflicted. An editorial in the Medical Journal of Australia last year noted the distinction: ''The popularity of the 'fad' gluten-free diet might be peaking, but

Penny Dellsperger, a dietitian at Coeliac NSW, said there were significant medical risks to people adopting gluten free diets without first ascertaining whether they suffered coeliac disease. She said the symptoms could easily relate to

''Obviously there are a lot of people on gluten free diets who don't need to be and who haven't had the proper tests.

''I don't understand why you would [maintain a gluten free diet] if you didn't need to. It's been marketed a lot and

This story was found at: http://www.smh.com.au/lifestyle/diets-based-on-a-grain-of-truth-20120127-1qlc...

Low IQ & Conservative Beliefs Linked to Prejudice | Stephanie Pappas | LiveScience.com

Thu, Jan 26, 2012

There's no gentle way to put it: People who give in to racism and prejudice may simply be dumb, according to a new study that is bound to stir public controversy.

The research finds that children with low intelligence are more likely to hold prejudiced attitudes as adults. These findings point to a vicious cycle, according to lead researcher Gordon Hodson, a psychologist at Brock University in Ontario. Low-intelligence adults tend to gravitate toward socially conservative ideologies, the study found. Those ideologies, in turn, stress hierarchy and resistance to change, attitudes that can contribute to prejudice, Hodson wrote in an email to LiveScience.

"Prejudice is extremely complex and multifaceted, making it critical that any factors contributing to bias are uncovered and understood," he said.

Controversy ahead

The findings combine three hot-button topics.

"They've pulled off the trifecta of controversial topics," said Brian Nosek, a social and cognitive psychologist at the University of Virginia who was not involved in the study. "When one selects intelligence, political ideology and racism and looks at any of the relationships between those three variables, it's bound to upset somebody."

Polling data and social and political science research do show that prejudice is more common in those who hold right-wing ideals than in those of other political persuasions, Nosek told LiveScience.

"The unique contribution here is trying to make some progress on the most challenging aspect of this," Nosek said, referring to the new study. "It's not that a relationship like that exists, but why it exists."

Brains and bias

Earlier studies have found links between low levels of education and higher levels of prejudice, Hodson said, so studying intelligence seemed a logical next step. The researchers turned to two studies of citizens in the United Kingdom, one that has followed babies since their births in March 1958, and another that did the same for babies born in April 1970. The children in the studies had their intelligence assessed at age 10 or 11; as adults ages 30 or 33, their levels of social conservatism and racism were measured.

In the first study, verbal and nonverbal intelligence was measured using tests that asked people to find similarities and differences between words, shapes and symbols. The second study measured cognitive abilities in four ways, including number recall, shape-drawing tasks, defining words and identifying patterns and similarities among words. Average IQ is set at 100.

Social conservatives were defined as people who agreed with a laundry list of statements such as "Family life suffers if mum is working full-time" and "Schools should teach children to obey authority." Attitudes toward other races were captured by measuring agreement with statements such as "I wouldn't mind working with people from other races." (These questions measured overt prejudiced attitudes, but most people, no matter how egalitarian, do hold unconscious racial biases; Hodson's work can't speak to this underground racism.)

As suspected, low intelligence in childhood corresponded with racism in adulthood. But the factor that explained the relationship between these two variables was political: When researchers included social conservatism in the analysis, those ideologies accounted for much of the link between brains and bias.
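
To make "accounted for much of the link" concrete, here is a minimal, purely illustrative mediation check in Python on simulated data; the variable names, effect sizes, and noise levels are invented, and this is not the study's dataset or exact method. If conservatism really does sit between childhood IQ and adult prejudice, the coefficient on IQ should shrink once conservatism is added to the regression.

# Illustrative mediation check on simulated data (not the study's data).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
iq = rng.normal(100, 15, n)
conservatism = -0.03 * iq + rng.normal(0, 1, n)       # assumed: lower IQ, higher score
prejudice = 0.8 * conservatism + rng.normal(0, 1, n)  # assumed: prejudice tracks ideology

def coef_on_first(y, *predictors):
    # Ordinary least squares; returns the coefficient on the first predictor.
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("IQ -> prejudice, IQ alone:          %.4f" % coef_on_first(prejudice, iq))
print("IQ -> prejudice, with conservatism: %.4f" % coef_on_first(prejudice, iq, conservatism))
# The second coefficient collapses toward zero: the ideology variable
# "explains" the raw IQ-prejudice association, as described above.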

People with lower cognitive abilities also had less contact with people of other races.

"This finding is consistent with recent research demonstrating that intergroup contact is mentally challenging and cognitively draining, and consistent with findings that contact reduces prejudice," said Hodson, who along with his colleagues published these results online Jan. 5 in the journal Psychological Science.

A study of averages

Hodson was quick to note that despite the link found between low intelligence and social conservatism, the researchers aren't implying that all liberals are brilliant and all conservatives stupid. The research is a study of averages over large groups, he said.

"There are multiple examples of very bright conservatives and not-so-bright liberals, and many examples of very principled conservatives and very intolerant liberals," Hodson said.

Nosek gave another example to illustrate the dangers of taking the findings too literally.

"We can say definitively men are taller than women on average," he said. "But you can't say if you take a random man and you take a random woman that the man is going to be taller. There's plenty of overlap."

Nonetheless, there is reason to believe that strict right-wing ideology might appeal to those who have trouble grasping the complexity of the world.

"Socially conservative ideologies tend to offer structure and order," Hodson said, explaining why these beliefs might draw those with low intelligence. "Unfortunately, many of these features can also contribute to prejudice."

In another study, this one in the United States, Hodson and Busseri compared 254 people with the same amount of education but different levels of ability in abstract reasoning. They found that what applies to racism may also apply to homophobia. People who were poorer at abstract reasoning were more likely to exhibit prejudice against gays. As with the U.K. participants, a lack of contact with gays and more acceptance of right-wing authoritarianism explained the link.

Simple viewpoints

Hodson and Busseri's explanation of their findings is reasonable, Nosek said, but it is correlational. That means the researchers didn't conclusively prove that low intelligence caused the later prejudice. To do that, you'd have to somehow randomly assign otherwise identical people to be smart or dumb, liberal or conservative. Those sorts of studies obviously aren't possible.

The researchers controlled for factors such as education and socioeconomic status, making their case stronger, Nosek said. But there are other possible explanations that fit the data. For example, Nosek said, a study of left-wing liberals with stereotypically naïve views like "every kid is a genius in his or her own way" might find that people who hold these attitudes are also less bright. In other words, it might not be a particular ideology that is linked to stupidity, but extremist views in general.

"My speculation is that it's not as simple as their model presents it," Nosek said. "I think that lower cognitive capacity can lead to multiple simple ways to represent the world, and one of those can be embodied in a right-wing ideology where 'People I don't know are threats' and 'The world is a dangerous place.' ... Another simple way would be to just assume everybody is wonderful."

Prejudice is of particular interest because understanding the roots of racism and bias could help eliminate them, Hodson said. For example, he said, many anti-prejudice programs encourage participants to see things from another group's point of view. That mental exercise may be too taxing for people of low IQ.

"There may be cognitive limits in the ability to take the perspective of others, particularly foreigners," Hodson said. "Much of the present research literature suggests that our prejudices are primarily emotional in origin rather than cognitive. These two pieces of information suggest that it might be particularly fruitful for researchers to consider strategies to change feelings toward outgroups, rather than thoughts."


How Much Is an Astronaut’s Life Worth? | Robert Zubrin - Reason Magazine

Posted on Thursday Jan 26th at 10:30am

If we could put a man on the Moon, why can’t we put a man on the Moon? Starting with near zero space capability in 1961, the National Aeronautics and Space Administration (NASA) put men on our companion world in eight years. Yet despite vastly superior technology and hundreds of billions of dollars in subsequent spending, the agency has been unable to send anyone else farther than low Earth orbit ever since. Why? Because we insist that our astronauts be as safe as possible.

Keeping astronauts safe merits significant expenditure. But how much? There is a potentially unlimited set of testing procedures, precursor missions, technological improvements, and other protective measures that could be implemented before allowing human beings to once again try flying to other worlds. Were we to adopt all of them, we would wind up with a human spaceflight program of infinite cost and zero accomplishment. In recent years, the trend has moved in precisely that direction, with NASA’s manned spaceflight effort spending more and more to accomplish less and less. If we are to achieve anything going forward, we have to find some way to strike a balance between human life and mission accomplishment. What we need is a quantitative criterion to assess what constitutes a rational expenditure to avert astronaut risk. In plain English, we need to answer a basic question: How much is an astronaut’s life worth?

The Worth of an Astronaut

The life of an astronaut is intrinsically precious, but no more so than that of anyone else. Let’s therefore consider how much other government programs spend to save people’s lives. Based on data from hundreds of programs, policy analyst John D. Graham and his colleagues at the Harvard Center for Risk Analysis found in 1997 that the median cost for lifesaving expenditures and regulations by the U.S. government in the health care, residential, transportation, and occupational areas ranges from about $1 million to $3 million spent per life saved in today’s dollars. The only marked exception to this pattern occurs in the area of environmental health protection (such as the Superfund program), which costs about $200 million per life saved. Graham and his colleagues call the latter kind of inefficiency “statistical murder,” since thousands of additional lives could be saved each year if the money were used more cost-effectively. To avoid such deadly waste, the Department of Transportation has a policy of rejecting any proposed safety expenditure that costs more than $3 million per life saved. That ceiling therefore may be taken as a high-end estimate for the value of an American’s life as defined by the U.S. government.

But astronauts are not just anyone. They are highly trained personnel in whom the government has invested tens of millions of dollars (the exact figure varies from astronaut to astronaut). Some, such as former fighter pilots, have received much more training than others. Let us therefore err on the high side and assign a value of $50 million per astronaut, including intrinsic worth and training. Looking at the matter this way can provide some useful guidance for weighing risk against expenditure in the human spaceflight program. The issue is well illustrated by the case of the Hubble Space Telescope.

The Hubble Deserters

In January 2004, Sean O’Keefe, then NASA’s administrator, announced that he was canceling the agency’s planned space shuttle mission to save, repair, and upgrade the Hubble Space Telescope, thereby sentencing the Hubble to death by equipment failure and eventual total destruction upon re-entry into the Earth’s atmosphere due to orbital decay. According to O’Keefe, the February 2003 explosion of the space shuttle Columbia showed how risky such telescope-maintenance flights were. As a responsible government official, he said, he could not authorize such a perilous venture.

The Hubble Space Telescope is a unique astronomical observatory that has made world-historic contributions to science, discovering, among other things, that the universe’s expansion is accelerating, indicating the existence of a previously unsuspected fundamental physical force. It also represents a cash investment of about $5 billion by American taxpayers. To be conservative, let us assume that all the safety improvements undertaken after the Columbia accident accomplished absolutely nothing, so that the space shuttle’s reliability rate was still just the 98 percent demonstrated up until that time (123 successful flights out of 125). Based on the $50-million-per-astronaut value we arrived at above, the seven-person crew of the shuttle can be assigned a value of $350 million, to which we’ll add the replacement cost of the shuttle orbiter itself, around $3 billion. Proceeding with the mission—which would have extended Hubble’s life for another decade, yielding incalculable scientific knowledge—therefore would have posed a 2 percent risk of losing $3.35 billion, which implies a probabilistic loss of $67 million. Comparing that $67 million risk or insurance cost to Hubble’s $5 billion value, we can see that O’Keefe’s argument for abandoning Hubble was completely irrational.

Imagine that the captain of a $5 billion aircraft carrier let his ship sink rather than allow seven volunteers to attempt a repair, on the grounds that the odds favoring their survival were only 50 to 1. Such an officer would be court-martialed and regarded with universal contempt both by his brother officers and by society at large.

The attempted Hubble desertion demonstrates how a refusal to accept human risk has led to irresponsible conduct on the part of NASA’s leadership. The affair was such a wild dereliction of duty, in fact, that O’Keefe was eventually forced out and the shuttle mission completed by his replacement. But in its broad approach to human space exploration, NASA has been generally—if not so obviously—feckless. Put simply, when the agency takes some $4 billion in taxpayer money per year to fly humans into space, it really has to fly them there and put them to good use. That amount of money, if spent on ground-based life-saving efforts such as childhood vaccinations, swimming lessons, fire escape inspections, highway repairs, or body armor for the troops, would save (at the government average of $2 million per life) roughly 2,000 lives. This is the sacrifice that the nation makes so NASA can run a human spaceflight program. In the face of such sacrifice, real results are required.

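Zubrin's expected-loss arithmetic is easy to check. Here is a quick Python sketch using only the figures quoted above (the 2 percent loss-of-vehicle risk, $50 million per astronaut, a roughly $3 billion orbiter, the $5 billion telescope, and the roughly $4 billion yearly human-spaceflight budget):

# All figures are the ones quoted in the article above.
crew_value = 7 * 50e6                         # seven astronauts valued at $50 million each
orbiter_value = 3e9                           # replacement cost of a shuttle orbiter
loss_if_failure = crew_value + orbiter_value  # $3.35 billion at stake
p_failure = 0.02                              # shuttle loss rate assumed unchanged post-Columbia

expected_loss = p_failure * loss_if_failure   # the "insurance cost" of flying the mission
hubble_value = 5e9
print("Expected loss of the repair flight: $%.0f million" % (expected_loss / 1e6))  # ~$67 million
print("Value of the telescope saved: $%.0f billion" % (hubble_value / 1e9))

# The opportunity-cost comparison for the yearly budget:
lives_forgone = 4e9 / 2e6                     # $4 billion a year at ~$2 million per life saved elsewhere
print("Ground-based lives the same money could save each year: %.0f" % lives_forgone)
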
The Long Way to Mars

Mars is key to humanity’s future in space. It is the closest planet that has the resources needed to support life and technological civilization. Its complexity uniquely demands the skills of human explorers, who will pave the way for human settlers. It is therefore the proper destination for NASA’s human spaceflight program, and the agency has publicly embraced it as such. But according to NASA, before the agency attempts such a mission, it must minimize the risk by conducting a variety of preparatory programs, including the now-ended shuttle program, the continuing space station program, a variety of robotic probes, a set of near-Earth asteroid expeditions, the construction of a lunar base, missions to the Martian moons, and an assortment of allegedly valuable orbital infrastructure projects and advanced propulsion systems.

Discounting the probes, which don’t cost much and actually are quite useful, the rest of this agenda comes with a price tag on the order of $500 billion and a delay in mission accomplishment by half a century. NASA’s Apollo-era leadership wanted to send men to Mars by 1981. Their plan was canned in favor of the space shuttle, the space station, and an extended program of learning how to live and work in low Earth orbit before we venture further. It would have been unquestionably risky to attempt a Mars mission in the 1980s, just as it was to reach for the Moon in the 1960s. But even if we ignore the fact that the multi-decade preparatory exercise adopted as an alternative to real space exploration has already cost the lives of 14 astronauts, and will almost certainly cost more as it drags on, the question must be asked: How rational is it to spend such huge sums to marginally reduce risk to the crew of the perpetually deferred Mars I?

Let’s do the math. It’s true that nearly anything we do in space will provide experience that will reduce risk to subsequent missions, but by how much? Suppose that by doing one of the aforementioned intermediate activities—say, running the space station program for another 10 years—we can increase the probability that the first expedition to Mars will succeed from 90 percent to 95 percent. Assume that the extended space station program costs $50 billion, that we disregard its own risk, and that the crew of the first Mars mission consists of five people. Cutting the risk to five people by 5 percent each is equivalent to saving 25 percent of one human life. At a cost of $50 billion, that would work out to $200 billion per life saved, a humanitarian effort 100,000 times less efficient than the average achieved by the Department of Transportation. Meanwhile, the space station program would entail considerable risk of its own, while tacking on an additional decade of delay in achievement of the primary mission. Such an approach makes no sense.

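The same kind of check for the hypothetical $50 billion space-station decade, again using only the numbers in the paragraph above:

# Raising a five-person crew's odds of success from 90 percent to 95 percent for $50 billion.
crew = 5
risk_reduction = 0.05                              # 95% minus 90%
statistical_lives_saved = crew * risk_reduction    # 0.25 of a life, in expectation
program_cost = 50e9
cost_per_life_saved = program_cost / statistical_lives_saved
print("Cost per statistical life saved: $%.0f billion" % (cost_per_life_saved / 1e9))   # $200 billion
print("Ratio to the ~$2 million government average: %.0fx" % (cost_per_life_saved / 2e6))  # 100,000x
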
The Mission Comes First

The contrast between NASA’s current attitude toward risk and that of earlier explorers is stark. Neither Columbus nor Lewis and Clark would have imagined demanding 99.999 percent safety assurances as a precondition for their expeditions. Under such a standard, no human voyages of exploration would ever have been attempted. For those courageous souls who sought and found the paths that took our species from its ancestral home in the Kenyan Rift Valley to every continent and clime of the globe, it was enough that the game was worth the candle and that they had a fighting chance to win.

During its Apollo days, NASA had a similar attitude because Apollo was mission driven. It was called into being by John F. Kennedy, a former torpedo boat commander, and the men who flew it—the younger brothers of those who had stormed beaches and machine gun nests to liberate Europe and Asia—were quite prepared to put their necks on the line to further the cause and expand the frontiers of freedom. It’s when the space program lacks a mission that it cannot bear risk. Instead, it (and we) can only recoil in horror at the spectacle of the Columbia crew—which included Israeli Col. Ilan Ramon, the pilot who led the daring raid that destroyed Saddam Hussein’s Osirak nuclear bomb factory—dying on a flight devoted to ant farms, recycled-urine-based finger paints, and other science fair experiments.

Should a true private entrepreneurial space sector emerge, its captains may take the same heroic stance as the great explorers did during the Age of Discovery, whose bold quests for gold, glory, and God gave so much to a sometimes ungrateful posterity. But speaking realistically, while SpaceX and its competitors may substantially reduce the costs of NASA’s exploration program, they remain vendors to that program. NASA supplies the funds and therefore calls the shots. This situation makes the question of risk a matter of public policy.

So, am I saying that we should just bull ahead, regardless of the risk? No. What I am saying is that in space exploration, the top priority must not be human safety, but mission success. These sound like the same thing, but they are not. Let me explain the difference by means of an example.

Imagine you are the manager of a Mars robotic-rover program. You have a fixed budget and two options for how to spend it. The first option is to spend half the money on development and testing, the rest on manufacturing and flight operations. If you take this choice, you get two rovers, each with a 90 percent chance of success. The other option is to spend three-quarters of the budget on development and testing, leaving a quarter for the actual mission. If you do it this way, you get just one rover, but it has a success probability of 95 percent. Which option should you choose? The right answer is to go for two rovers, because if you do it that way, you will have a 99 percent probability of succeeding with at least one of the vehicles and an 81 percent probability of getting two successful rovers—an outcome that is not even possible with the other approach.

This being a robotic mission, with no lives at stake, that’s all clear enough. But if we were talking about a human mission, what would the right choice be? The correct answer would be the same, because with tens of billions of dollars that could be used instead to meet all kinds of other pressing human needs, the first obligation must be to get the job done. Of course, if the choice were between two missions that each had just a 10 percent success probability and one with a 90 percent chance, the correct answer would be different.

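The two-rover example is likewise a one-line probability calculation; the only inputs are the 90 and 95 percent success rates given above:

# Two cheaper rovers at 90 percent each versus one better rover at 95 percent.
p_cheap, p_better = 0.90, 0.95
at_least_one = 1 - (1 - p_cheap) ** 2   # 0.99: at least one of the two succeeds
both_succeed = p_cheap ** 2             # 0.81: both succeed
print("Two rovers, at least one works: %.0f%%" % (100 * at_least_one))
print("Two rovers, both work: %.0f%%" % (100 * both_succeed))
print("Single better rover works: %.0f%%" % (100 * p_better))
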
The point is that there is a methodology, well established in other fields, that can help assess the rationality of risk reduction expenditures in the human spaceflight program. If NASA disagrees with the suggested assignment of $50 million for the life of an astronaut, it should come up with its own figure, substantiate it, and then subject its proposed plan of action to a quantitative cost-benefit analysis based on that assessment. But it needs to be a finite number, for to set an infinite value on the life of an astronaut is to set both the goals of the space exploration effort and the needs of the rest of humanity at naught.

This may seem like a harsh approach. But the many billions being spent on the human spaceflight program are not being spent for the safety of the astronauts; they could stay safe if they stayed home. The money is being spent to open the space frontier. Human spaceflight vehicles are not amusement park rides. They are daring ships of exploration that need to sail in harm’s way if they are to accomplish a mission critical to the human future. That mission needs to come first.

Robert Zubrin is president of Pioneer Astronautics and of the Mars Society. An updated edition of his book The Case for Mars: The Plan to Settle the Red Planet and Why We Must has just been published by The Free Press.

Graphene: The perfect water filter | ExtremeTech

Researchers from the home of graphene, the University of Manchester in England, have discovered — seemingly by chance — one of the most important properties of graphene yet: It’s impermeable to everything but water. It is the perfect water filter.

In an experiment, the University of Manchester researchers filled a metal container with a variety of liquids and gases and then covered it with a film of graphene oxide. Their most sensitive equipment was unable to register any molecules leaving the container, except water vapor. The graphene oxide film even prevented helium, a gas that is notoriously hard to contain, from escaping.

This fantastical feature joins a huge list of properties that have led graphene to be called a “wonder material.” Graphene, which is merely a single layer of carbon atoms, is the most conductive material in the world, both electrically and thermally. It is incredibly strong, and yet the thinnest material in the known universe. Graphene enables CPUs that can operate at 300GHz or higher, batteries that last 10 times as long, and petabit and exabit network transmission speeds. It even creates electricity when struck by light!

[Image: The graphene oxide water filter]

Now, technically graphene oxide isn’t quite the same thing as graphene, but in a good way: graphene oxide is much easier to make. Basically, graphene oxide forms into single-atom-thick sheets, like graphene, but it then likes to stack up, layer after layer, to form a laminate. The University of Manchester researchers think that it is this laminate form that allows water molecules through. “Graphene oxide sheets arrange in such a way that between them there is room for exactly one layer of water molecules,” says Dr Rahul Nair, who leads the project. “If another atom or molecule tries the same trick, it finds that graphene capillaries either shrink in low humidity or get clogged with water molecules.”

In another experiment, Dr Nair & Co. sealed a bottle of vodka with the graphene filter. This allowed just the water to evaporate, effectively distilling it into super-vodka. Beyond silly experiments, though, it’s easy to see the awesome potential of this new filter. With an ever-increasing strain on the world’s water supplies, water filtration is one of the hottest (and most valuable) topics at the moment — and by the sound of it, if graphene oxide really is completely impermeable to everything except water, this new filter would make clean water out of anything. Sea water, gray water, sewage…

Read more at ScienceDaily

Wednesday, January 25, 2012

Infographic: Out Innovate

Climate Change and Farming: How Not to Go Hungry in a Warmer World | TIME.COM

Tuesday, Jan. 24, 2012

Climate change might hit us in the most vital place of all — the dinner plate.

Why do we care about climate change? Obviously we worry about what warming temperatures might do to the geography of the planet — particularly melting polar ice and raising global sea levels. We fear the impact that climate change could have on endangered species, as warming temperatures speed the already rapid pace of extinction for wildlife that have been pushed to the edge by habitat loss and hunting. We focus on the changing risk of extreme weather, of more powerful storms causing billions of dollars of damage in richer nations — and taking thousands of lives in poorer ones. Sometimes we're simply uneasy with the idea that our actions are altering the Earth, changing the rhythms of the seasons, shifting weather patterns we've been accustomed to for as long as human beings can remember.

All of that is important — but not as important as the impact that climate change might have on the most vital function of any species: feeding itself. The human population broke the 7 billion mark late last year, and the reason that happened — and the reason we can and will keep growing, barring major changes — is that we've become amazingly proficient at raising food. Our distribution is far from perfect — which is the reason the world is simultaneously home to 1 billion hungry and more than 300 million obese people — and the side effects of large-scale farming can damage the environment. But food production still remains humanity's most amazing accomplishment.

That's why the threat that climate change could mess with agriculture is so scary — and why experts are worried that we're not stepping up to the challenge. In last week's Science, an international group of leading investigators — led by John Beddington, the chief science adviser for the British government — published a call urging policymakers to ensure that agriculture becomes a more vital part of global action against climate change. "Global agriculture must produce more food to feed a growing population," they write. "Yet scientific assessments point to climate change as a growing threat to agricultural yields and food security." In other words, the potential risks to farming are one more reason we need to reduce carbon emissions soon — and the fact that the climate is already changing, and will continue to change, means that we also need to start adapting agriculture to a warmer world immediately.

How exactly could climate change diminish our ability to feed ourselves? Warming alone could do it, with already hot and dry parts of the world — like the American Southwest or the Horn of Africa — predicted to become hotter and drier still. The catastrophic droughts that gripped Texas and East Africa this past summer — leading to a devastating famine in the latter case — are likely signs of things to come. (And it's not just climate change that should cause us worry there: both regions have a history of mega-droughts in the geologic past, before they were widely settled by human beings, which means even the norm may be drier than we think.) While additional carbon in the air may help some plants, warmer temperatures can also retard growth, so extreme heat could lead to greater crop loss.

It's not just drought, though; rain at the wrong time can be disastrous for agriculture as well. That much was obvious during the relentless floods in Pakistan in 2010, which not only killed thousands of Pakistanis but also washed away crops. Those losses helped drive food prices to record highs during the past year — a level from which they're only now beginning to drop. As the atmosphere warms, it can hold more moisture, which means we can expect heavier storms when the rain does fall. Pakistan, too, could be a harbinger of a warmer future.

Warming isn't the only threat to our ability to feed ourselves — it acts in concert with rising population, the growing demand for grain and water-intensive meat and the civil dysfunction and conflict that often frustrates poor farmers in the developing world. (The ongoing famine in Somalia has as much to do with the civil war there as it does with drought.) That's why scientists are calling for more integrated research as the first step to adapting agriculture to climate change, to ensure that farmers know what's coming — and that they can prepare for it.

No one answer will fit all agricultural ecosystems. Problems and solutions will be different in rich countries and poor ones; cool, damp ones and hot, dry ones. "There are clearly major opportunities this year for scientists to provide the evidence required to rapidly generate new investments and policies that will ensure agriculture can adapt to the impact of climate change," says Bob Scholes of South Africa's Council for Scientific and Industrial Research, who was a co-author of the Science paper.

Smart climate adaptation will also cost money — money that rich nations can spare, but that poor countries, which already face the brunt of climate change, likely can't. If there's one area on which policymakers in the climate arena should focus their effort, it's ensuring that developing nations have the funds — and the expertise — needed to keep feeding themselves as the globe warms. "The window of opportunity to avert a humanitarian, environmental and climate crisis is rapidly closing, and we need better information and tools for managing the tradeoffs in how we grow our food and use our resources," says Molly Jahn, dean of the University of Wisconsin's College of Agricultural and Life Sciences and another author of the Science article. If we hope to thrive in a warmer world — one that's coming — we have no other choice.

Monday, January 23, 2012

Startup Makes 'Wireless Router for the Brain' | Technology Review

Monday, January 23, 2012

Kendall Research's devices could make optogenetics research much more practical.

By Courtney Humphries

Optogenetics has been hailed as a breakthrough in biomedical science—it promises to use light to precisely control cells in the brain to manipulate behavior, model disease processes, or even someday to deliver treatments.

But so far, optogenetic studies have been hampered by physical constraints. The technology requires expensive, bulky lasers for light sources, and a fiber-optic cable attached to an animal—an encumbrance that makes it difficult to study how manipulating cells affects an animal's normal behavior.

Now Kendall Research, a startup in Cambridge, Massachusetts, is trying to free optogenetics from these burdens. It has developed several prototype devices that are small and light and powered wirelessly. The devices would allow mice and other small animals to move freely. The company is also developing systems to control experiments automatically and remotely, making it possible to use the technique for high-throughput studies.

Christian Wentz, the company's founder, began the work while a student in Ed Boyden's lab at MIT. He was studying ways to make optogenetics more useful for research on how the brain affects behavior. Optogenetics relies on genetically altering certain cells to make them responsive to light, and then selectively stimulating them with a laser to either turn the cells on or off. Instead of a laser light source, Kendall Research uses creatively packaged LEDs and laser diodes, which are incorporated into a small head-borne device that plugs into an implant in the animal's brain.

The device, which weighs only three grams, is powered wirelessly by supercapacitors stationed below the animal's cage or testing area. Such supercapacitors are ideal for applications that need occasional bursts of power rather than a continuous source. The setup also includes a wirelessly connected controller that plugs into a computer through a USB port. "It's essentially a wireless router for the brain," says Wentz.

The wireless capabilities allow researchers to control the optogenetics equipment remotely, or even schedule experiments in advance.

Casey Halpern, a neurosurgeon at the University of Pennsylvania and one of several researchers beta-testing the device, says the physical impediments of current optogenetics techniques are tremendous. "You almost can't do any behavioral experiment in a meaningful way," he says.

Halpern, for instance, studies feeding behavior, and would like to understand how activating or inhibiting specific groups of neurons changes the way mice eat. The ability to test that question right in the animal's cage without a human in the room makes it more likely the animal will behave normally.

Wentz says that while the cost of the initial setup is comparable to a single laser system, it can be scaled up far more cheaply. This, coupled with the ability to remotely control experiments, would make it easier to conduct optogenetics experiments in a high-throughput fashion.

Kendall Research plans to make it possible to collect data from the brain through the device. The data could then be wirelessly transmitted to a computer. Sanjay Magavi, a research scientist at Vertex Pharmaceuticals, says while it isn't yet clear how this will be used in industry, there's increasing interest in using optogenetics in animals to develop more sophisticated models of disease for preclinical drug testing.

Copyright Technology Review 2012.

Sunday, January 22, 2012

Exercise and longevity: Worth all the sweat | The Economist

Just why exercise is so good for people is, at last, being understood

Jan 21st 2012 | from the print edition

ONE sure giveaway of quack medicine is the claim that a product can treat any ailment. There are, sadly, no panaceas. But some things come close, and exercise is one of them. As doctors never tire of reminding people, exercise protects against a host of illnesses, from heart attacks and dementia to diabetes and infection.

How it does so, however, remains surprisingly mysterious. But a paper just published in Nature by Beth Levine of the University of Texas Southwestern Medical Centre and her colleagues sheds some light on the matter.

Dr Levine and her team were testing a theory that exercise works its magic, at least in part, by promoting autophagy. This process, whose name is derived from the Greek for “self-eating”, is a mechanism by which surplus, worn-out or malformed proteins and other cellular components are broken up for scrap and recycled.

To carry out the test, Dr Levine turned to those stalwarts of medical research, genetically modified mice. Her first batch of rodents were tweaked so that their autophagosomes—structures that form around components which have been marked for recycling—glowed green. After these mice had spent half an hour on a treadmill, she found that the number of autophagosomes in their muscles had increased, and it went on increasing until they had been running for 80 minutes.

To find out what, if anything, this exercise-boosted autophagy was doing for mice, the team engineered a second strain that was unable to respond this way. Exercise, in other words, failed to stimulate their recycling mechanism. When this second group of modified mice were tested alongside ordinary ones, they showed less endurance and had less ability to take up sugar from their bloodstreams.

There were longer-term effects, too. In mice, as in people, regular exercise helps prevent diabetes. But when the team fed their second group of modified mice a diet designed to induce diabetes, they found that exercise gave no protection at all.

Dr Levine and her team reckon their results suggest that manipulating autophagy may offer a new approach to treating diabetes. And their research is also suggestive in other ways. Autophagy is a hot topic in medicine, as biologists have come to realise that it helps protect the body from all kinds of ailments.

The virtues of recycling

Autophagy is an ancient mechanism, shared by all eukaryotic organisms (those which, unlike bacteria, keep their DNA in a membrane-bound nucleus within their cells). It probably arose as an adaptation to scarcity of nutrients. Critters that can recycle parts of themselves for fuel are better able to cope with lean times than those that cannot. But over the past couple of decades, autophagy has also been shown to be involved in things as diverse as fighting bacterial infections and slowing the onset of neurological conditions like Alzheimer’s and Huntington’s diseases.

Most intriguingly of all, it seems that it can slow the process of ageing. Biologists have known for decades that feeding animals near-starvation diets can boost their lifespans dramatically. Dr Levine was a member of the team which showed that an increased level of autophagy, brought on by the stress of living in a constant state of near-starvation, was the mechanism responsible for this life extension.

The theory is that what are being disposed of in particular are worn-out mitochondria. These structures are a cell’s power-packs. They are where glucose and oxygen react together to release energy. Such reactions, though, often create damaging oxygen-rich molecules called free radicals, which are thought to be one of the driving forces of ageing. Getting rid of wonky mitochondria would reduce free-radical production and might thus slow down ageing.

A few anti-ageing zealots already subsist on near-starvation diets, but Dr Levine’s results suggest a similar effect might be gained in a much more agreeable way, via vigorous exercise. The team’s next step is to test whether boosted autophagy can indeed explain the life-extending effects of exercise. That will take a while. Even in animals as short-lived as mice, she points out, studying ageing is a long-winded process. But she is sufficiently confident about the outcome that she has, in the meantime, bought herself a treadmill.

from the print edition | Science and technology

Thursday, January 19, 2012

Disney Princesses vs. Hayao Miyazaki | GeekDad | Wired.com

By Erik Wecks

Perhaps it is a leftover from my ancient academic ambitions or the early development of my reading habit, but I tend to take stories more seriously than the average person. As a dad, I am also highly sensitive to the influence both stories and the broader culture have on my children. Young children are still developing their capacity to distinguish fact from fiction. It seems reasonable to assume they are more influenced by the stories we give them than an adult, who is better able to separate himself from the impact and message of a story. This is such a common-sense assumption that most of us take it for granted. Yet, it underlies so many of the cultural rules and regulations by which we organize our children’s lives, from the ratings on videogames, movies and graphic novels, to the vain attempts by legislators to regulate internet pornography and advertising during children’s programming.

If I keep a close watch on the adult content in the media my three daughters consume, I am no different than many parents. I mean most of us do try to aspire to something greater than the Chris Rock standard of parenting. (Warning: the link has adult language and content.)

However, what causes a small spike on the overactive parent detector is my refusal to accept at face value the stories our consumer-driven culture tries to sell my children. Many parents will react strongly to sexual content, foul language or violence, but as long as such taboos are not broken, they appear to be content to let their children consume just about any story sold to them by our corporate storytellers.

On the other hand, I can spontaneously launch into a whole list of diatribes on the failings of quality children’s storytelling in visual media with only the slightest provocation. Nothing brings a conversation among a group of parents to a full stop like launching into an impassioned plea for family films to present healthy male role models for my daughters. “Why is it dad is almost always the source of conflict?” I will ask. After a long uncomfortable silence, in which the other parents try to assess whether I just need therapy or if they need to avoid play dates at my home, someone will move the conversation along to a nice safe topic like last week’s swim lessons.

Wednesday, January 18, 2012

The Real SOPA Battle: Innovators vs. Goliath | James Allworth and Maxwell Wessel

10:23 AM Wednesday January 18, 2012

Looking around the web today, you're going to see a few things that are a bit different. Wikipedia is going dark. WordPress is too. Google has its logo blocked out. Twitter is absolutely abuzz. It all relates to legislation known as SOPA in front of the US House of Representatives, and PIPA in front of the US Senate. If you'd like to understand what the legislation would actually mean for the Internet, you can see HBR's earlier coverage about the bill from before it was renamed. But the purpose of this article isn't to explain what SOPA and PIPA will do. Instead, it's about explaining what's brought them about: SOPA and PIPA are prime examples of big companies trying to do everything they can to stop new competitors from innovating. They're also examples of how lobbying in the United States has become one of the most effective ways of limiting this sort of competition.

The argument over this legislation has essentially been characterized in the press as having two sides. The first side, which is generally represented by big content, is that piracy (and any new technology that facilitates it) is an existential threat to any business based on intellectual property. That's actually a line that has been used a few times before — most famously by Jack Valenti, head of the MPAA, when he testified in front of congress that the VCR was to the movie industry what the Boston Strangler was to women.

And on the other side of the argument? Well, they have been mostly characterized as the technology industries. They've been making the case that SOPA and PIPA will chill innovation and threaten free speech.

But content vs technology doesn't do justice to describing the two sides. Tim O'Reilly, the CEO of O'Reilly Media — a very well-known publishing and media company that derives a large portion of its revenue from the sale of books — has been one of the most ardent critics of SOPA and PIPA. On the other hand, GoDaddy.com, the largest of the web's domain name registrars, was very much in favor of SOPA — at least until a boycott caused them to back down. Similarly, there are plenty of other technology firms that have supported SOPA.

So if content vs technology doesn't capture what's going on in this fight, what does? Well, SOPA makes much more sense if you look at the debate as big companies unwilling to accept change versus the innovative companies and startups that embrace change. And if we accept that startups are created to find new ways to create value for consumers, the debate is actually between the financial interests of big content shareholders versus consumer interests at large.

If you take a look at many of the largest backers of SOPA or PIPA — the Business Software Alliance, Comcast, Electronic Arts, Ford, L'Oreal, Scholastic, Sony, Disney — you'll see that they represent a wide range of businesses. Some are technology companies, some are content companies, some are historic innovators, and some are not. But one characteristic is the same across all of SOPA's supporters — they all have an interest in preserving the status quo. If there is meaningful innovation by startups in content creation and delivery, the supporters of SOPA and PIPA are poised to lose.

Even for those SOPA supporters that are historic innovators, their organizations focus on improving products in the pursuit of profit. They innovate to increase prices and limit production cost. Even when new models and technologies give rise to huge businesses, these incumbent firms reject meaningful innovation.

On the other side of the debate, you'll see a few of the most successful companies in recent history. Wikipedia. Google. Twitter. Zynga. What these firms have in common is they have upended entire industries — and many are still in the process of doing so. Each of these businesses has roots in embracing new technologies and building models to deliver value to customers at the lowest cost. They're fighting this legislation because they're aware it will tip the finely tuned balance of creative destruction against startups and very much in favor of companies unwilling to embrace change. For example, Viacom has been locked in a legal fight with YouTube — so far, unsuccessfully. If SOPA were to become law, however, Viacom would be able to entirely shut down YouTube's revenue stream while the case was in court. Balance tipped.

To be fair to the big companies supporting SOPA and PIPA, they're acting rationally. From their perspective, investing in lobbying instead of business model innovation is a sensible investment. Jack Abramoff has recently detailed how a 22,000% ROI isn't unusual for firms hiring lobbyists.

But even if it makes sense for these companies to support SOPA and PIPA, do we want to censor the Internet and limit innovation? Should our legislative process be used to protect the business interests of firms unwilling to embrace change? A recent exchange on Twitter between Jack Dorsey, co-founder of Twitter, and Steve Case, co-founder of AOL, summed it up nicely:

Jack: Startups collaborate & redefine. As companies and organizations grow, they naturally tend to defend & react, both internally and externally. Steve: Agree! Think of it as attackers vs defenders. Entrepreneurs attack/disrupt to maximize upside. Corp execs defend to protect downside.

SOPA is a legislative attempt by big companies with vested interests to protect their downside. And unfortunately, these companies have conscripted Congress to help them. What's worse is that even though limiting start-up innovation might help big content in the short run, it's not going to do them any favors in the long run. Nor is it going to do America any favors. In the midst of one of the worst recessions in living memory, passage of legislation like this is just going to result in innovators moving to geographies where the regulatory environment is more favorable. Start-ups will be less competitive in the United States and we'll have effectively disabled one of the few remaining growth engines of the economy.

Julian Assange in Rolling Stone

"From the glory days of American radicalism, which was the American Revolution, I think that Madison's view on government is still unequaled, he tells me during the three days I spend with him as he settles into his new location in England. That people determined to be in a democracy, to be their own governments, must have the power that knowledge will bring - because knowledge will always rule ignorance. You can either be informed and your own rulers, or you can be ignorant and have someone else, who is not ignorant, rule over you. The question is, where has the United States betrayed Madison and Jefferson, betrayed these basic values on how you keep a democracy? I think that the U.S. military-industrial complex and the majority of politicians in Congress have betrayed those values."

http://m.rollingstone.com/entry/view/id/21307/pn/all/p/0/?KSID=aa1c202dd3b2d3...

Monday, January 16, 2012

Andrew Sullivan: How Obama's Long Game Will Outsmart His Critics | Daily Beast

"The right calls him a socialist, the left says he sucks up to Wall Street, and independents think he's a wimp. Andrew Sullivan on how the president may just end up outsmarting them all."

You hear it everywhere. Democrats are disappointed in the president. Independents have soured even more. Republicans have worked themselves up into an apocalyptic fervor. And, yes, this is not exactly unusual.

A president in the last year of his first term will always get attacked mercilessly by his partisan opponents, and also, often, by the feistier members of his base. And when unemployment is at remarkably high levels, and with the national debt setting records, the criticism will be—and should be—even fiercer. But this time, with this president, something different has happened. It’s not that I don’t understand the critiques of Barack Obama from the enraged right and the demoralized left. It’s that I don’t even recognize their description of Obama’s first term in any way. The attacks from both the right and the left on the man and his policies aren’t out of bounds. They’re simply—empirically—wrong.

A caveat: I write this as an unabashed supporter of Obama from early 2007 on. I did so not as a liberal, but as a conservative-minded independent appalled by the Bush administration’s record of war, debt, spending, and torture. I did not expect, or want, a messiah. I have one already, thank you very much. And there have been many times when I have disagreed with decisions Obama has made—to drop the Bowles-Simpson debt commission, to ignore the war crimes of the recent past, and to launch a war in Libya without Congress’s sanction, to cite three. But given the enormity of what he inherited, and given what he explicitly promised, it remains simply a fact that Obama has delivered in a way that the unhinged right and purist left have yet to understand or absorb. Their short-term outbursts have missed Obama’s long game—and why his reelection remains, in my view, as essential for this country’s future as his original election in 2008.

The right’s core case is that Obama has governed as a radical leftist attempting a “fundamental transformation” of the American way of life. Mitt Romney accuses the president of making the recession worse, of wanting to turn America into a European welfare state, of not believing in opportunity or free enterprise, of having no understanding of the real economy, and of apologizing for America and appeasing our enemies. According to Romney, Obama is a mortal threat to “the soul” of America and an empty suit who couldn’t run a business, let alone a country.

Leave aside the internal incoherence—how could such an incompetent be a threat to anyone? None of this is even faintly connected to reality—and the record proves it. On the economy, the facts are these. When Obama took office, the United States was losing around 750,000 jobs a month. The last quarter of 2008 saw an annualized drop in growth approaching 9 percent. This was the most serious downturn since the 1930s, there was a real chance of a systemic collapse of the entire global financial system, and unemployment and debt—lagging indicators—were about to soar even further. No fair person can blame Obama for the wreckage of the next 12 months, as the financial crisis cut a swath through employment. Economies take time to shift course.

But Obama did several things at once: he continued the bank bailout begun by George W. Bush, he initiated a bailout of the auto industry, and he worked to pass a huge stimulus package of $787 billion.

All these decisions deserve scrutiny. And in retrospect, they were far more successful than anyone has yet fully given Obama the credit for. The job collapse bottomed out at the beginning of 2010, as the stimulus took effect. Since then, the U.S. has added 2.4 million jobs. That’s not enough, but it’s far better than what Romney would have you believe, and more than the net jobs created under the entire Bush administration. In 2011 alone, 1.9 million private-sector jobs were created, while a net 280,000 government jobs were lost. Overall government employment has declined 2.6 percent over the past 3 years. (That compares with a drop of 2.2 percent during the early years of the Reagan administration.) To listen to current Republican rhetoric about Obama’s big-government socialist ways, you would imagine that the reverse was true. It isn’t.

The right claims the stimulus failed because it didn’t bring unemployment down to 8 percent in its first year, as predicted by Obama’s transition economic team. Instead, it peaked at 10.2 percent. But the 8 percent prediction was made before Obama took office and was wrong solely because it relied on statistics that guessed the economy was only shrinking by around 4 percent, not 9. Remove that statistical miscalculation (made by government and private-sector economists alike) and the stimulus did exactly what it was supposed to do. It put a bottom under the free fall. It is not an exaggeration to say it prevented a spiral downward that could have led to the Second Great Depression.

You’d think, listening to the Republican debates, that Obama has raised taxes. Again, this is not true. Not only did he agree not to sunset the Bush tax cuts for his entire first term, he has aggressively lowered taxes on most Americans. A third of the stimulus was tax cuts, affecting 95 percent of taxpayers; he has cut the payroll tax, and recently had to fight to keep it cut against Republican opposition. His spending record is also far better than his predecessor’s. Under Bush, new policies on taxes and spending cost the taxpayer a total of $5.07 trillion. Under Obama’s budgets both past and projected, he will have added $1.4 trillion in two terms. Under Bush and the GOP, nondefense discretionary spending grew by twice as much as under Obama. Again: imagine Bush had been a Democrat and Obama a Republican. You could easily make the case that Obama has been far more fiscally conservative than his predecessor—except, of course, that Obama has had to govern under the worst recession since the 1930s, and Bush, after the 2001 downturn, governed in a period of moderate growth. It takes work to increase the debt in times of growth, as Bush did. It takes much more work to constrain the debt in the deep recession Bush bequeathed Obama.

The great conservative bugaboo, Obamacare, is also far more moderate than its critics have claimed. The Congressional Budget Office has projected it will reduce the deficit, not increase it dramatically, as Bush’s unfunded Medicare Prescription Drug benefit did. It is based on the individual mandate, an idea pioneered by the archconservative Heritage Foundation, Newt Gingrich, and, of course, Mitt Romney, in the past. It does not have a public option; it gives a huge new client base to the drug and insurance companies; its health-insurance exchanges were also pioneered by the right. It’s to the right of the Clintons’ monstrosity in 1993, and remarkably similar to Nixon’s 1974 proposal. Its passage did not preempt recovery efforts; it followed them. It needs improvement in many ways, but the administration is open to further reform and has agreed to allow states to experiment in different ways to achieve the same result. It is not, as Romney insists, a one-model, top-down prescription. Like Obama’s Race to the Top education initiative, it sets standards, grants incentives, and then allows individual states to experiment. Embedded in it are also a slew of cost-reduction pilot schemes to slow health-care spending. Yes, it crosses the Rubicon of universal access to private health care. But since federal law mandates that hospitals accept all emergency-room cases requiring treatment anyway, we already obey that socialist principle—but in the most inefficient way possible. Making 44 million current free-riders pay into the system is not fiscally reckless; it is fiscally prudent. It is, dare I say it, conservative.

On foreign policy, the right-wing critiques have been the most unhinged. Romney accuses the president of apologizing for America, and others all but accuse him of treason and appeasement. Instead, Obama reversed Bush’s policy of ignoring Osama bin Laden, immediately setting a course that eventually led to his capture and death. And when the moment for decision came, the president overruled both his secretary of state and vice president in ordering the riskiest—but most ambitious—plan on the table. He even personally ordered the extra helicopters that saved the mission. It was a triumph, not only in killing America’s primary global enemy, but in getting a massive trove of intelligence to undermine al Qaeda even further. If George Bush had taken out bin Laden, wiped out al Qaeda’s leadership, and gathered a treasure trove of real intelligence by a daring raid, he’d be on Mount Rushmore by now. But where Bush talked tough and acted counterproductively, Obama has simply, quietly, relentlessly decimated our real enemies, while winning the broader propaganda war. Since he took office, al Qaeda’s popularity in the Muslim world has plummeted.

Obama’s foreign policy, like Dwight Eisenhower’s or George H.W. Bush’s, eschews short-term political hits for long-term strategic advantage. It is forged by someone interested in advancing American interests—not asserting an ideology and enforcing it regardless of the consequences by force of arms. By hanging back a little, by “leading from behind” in Libya and elsewhere, Obama has made other countries actively seek America’s help and reappreciate our role. As an antidote to the bad feelings of the Iraq War, it has worked close to perfectly.

But the right isn’t alone in getting Obama wrong. While the left is less unhinged in its critique, it is just as likely to miss the screen for the pixels. From the start, liberals projected onto Obama absurd notions of what a president can actually do in a polarized country, where anything requires 60 Senate votes even to stand a chance of making it into law. They have described him as a hapless tool of Wall Street, a continuation of Bush in civil liberties, a cloistered elitist unable to grasp the populist moment that is his historic opportunity. They rail against his attempts to reach a Grand Bargain on entitlement reform. They decry his too-small stimulus, his too-weak financial reform, and his too-cautious approach to gay civil rights. They despair that he reacts to rabid Republican assaults with lofty appeals to unity and compromise.

They miss, it seems to me, two vital things. The first is the simple scale of what has been accomplished on issues liberals say they care about. A depression was averted. The bail-out of the auto industry was—amazingly—successful. Even the bank bailouts have been repaid to a great extent by a recovering banking sector. The Iraq War—the issue that made Obama the nominee—has been ended on time and, vitally, with no troops left behind. Defense is being cut steadily, even as Obama has moved his own party away from a Pelosi-style reflexive defense of all federal entitlements. Under Obama, support for marriage equality and marijuana legalization has crested to record levels. Under Obama, a crucial state, New York, made marriage equality for gays an irreversible fact of American life. Gays now openly serve in the military, and the Defense of Marriage Act is dying in the courts, undefended by the Obama Justice Department. Vast government money has been poured into noncarbon energy investments, via the stimulus. Fuel-emission standards have been drastically increased. Torture was ended. Two moderately liberal women replaced men on the Supreme Court. Oh, yes, and the liberal holy grail that eluded Johnson and Carter and Clinton, nearly universal health care, has been set into law. Politifact recently noted that of 508 specific promises, a third had been fulfilled and only two have not had some action taken on them. To have done all this while simultaneously battling an economic hurricane makes Obama about as honest a follow-through artist as anyone can expect from a politician.

What liberals have never understood about Obama is that he practices a show-don’t-tell, long-game form of domestic politics. What matters to him is what he can get done, not what he can immediately take credit for. And so I railed against him for the better part of two years for dragging his feet on gay issues. But what he was doing was getting his Republican defense secretary and the chairman of the Joint Chiefs to move before he did. The man who made the case for repeal of “don’t ask, don’t tell” was, in the end, Adm. Mike Mullen. This took time—as did his painstaking change in the rule barring HIV-positive immigrants and tourists—but the slow and deliberate and unprovocative manner in which it was accomplished made the changes more durable. Not for the first time, I realized that to understand Obama, you have to take the long view. Because he does.

Or take the issue of the banks. Liberals have derided him as a captive of Wall Street, of being railroaded by Larry Summers and Tim Geithner into a too-passive response to the recklessness of the major U.S. banks. But it’s worth recalling that at the start of 2009, any responsible president’s priority would have been stabilization of the financial system, not the exacting of revenge. Obama was not elected, despite liberal fantasies, to be a left-wing crusader. He was elected as a pragmatic, unifying reformist who would be more responsible than Bush.

And what have we seen? A recurring pattern. To use the terms Obama first employed in his inaugural address: the president begins by extending a hand to his opponents; when they respond by raising a fist, he demonstrates that they are the source of the problem; then, finally, he moves to his preferred position of moderate liberalism and fights for it without being effectively tarred as an ideologue or a divider. This kind of strategy takes time. And it means there are long stretches when Obama seems incapable of defending himself, or willing to let others define him, or simply weak. I remember those stretches during the campaign against Hillary Clinton. I also remember whose strategy won out in the end.

This is where the left is truly deluded. By misunderstanding Obama’s strategy and temperament and persistence, by grandstanding on one issue after another, by projecting unrealistic fantasies onto a candidate who never pledged a liberal revolution, they have failed to notice that from the very beginning, Obama was playing a long game. He did this with his own party over health-care reform. He has done it with the Republicans over the debt. He has done it with the Israeli government over stopping the settlements on the West Bank—and with the Iranian regime, by not playing into their hands during the Green Revolution, even as they gunned innocents down in the streets. Nothing in his first term—including the complicated multiyear rollout of universal health care—can be understood if you do not realize that Obama was always planning for eight years, not four. And if he is reelected, he will have won a battle more important than 2008: for it will be a mandate for an eight-year shift away from the excesses of inequality, overreach abroad, and reckless deficit spending of the last three decades. It will recapitalize him to entrench what he has done already and make it irreversible.

Yes, Obama has waged a war based on a reading of executive power that many civil libertarians, including myself, oppose. And he has signed into law the indefinite detention of U.S. citizens without trial (even as he pledged never to invoke this tyrannical power himself). But he has done the most important thing of all: excising the cancer of torture from military detention and military justice. If he is not reelected, that cancer may well return. Indeed, many on the right appear eager for it to return.

Sure, Obama cannot regain the extraordinary promise of 2008. We’ve already elected the nation’s first black president and replaced a tongue-tied dauphin with a man of peerless eloquence. And he has certainly failed to end Washington’s brutal ideological polarization, as he pledged to do. But most Americans in polls rightly see him as less culpable for this impasse than the GOP. Obama has steadfastly refrained from waging the culture war, while the right has accused him of a “war against religion.” He has offered to cut entitlements (and has already cut Medicare), while the Republicans have refused to raise a single dollar of net revenue from anyone. Even the most austerity-driven government in Europe, the British Tories, are to the left of that. And it is this Republican intransigence—from the 2009 declaration by Rush Limbaugh that he wants Obama “to fail” to Senate Minority Leader Mitch McConnell’s admission that his primary objective is denying Obama a second term—that has been truly responsible for the deadlock. And the only way out of that deadlock is an electoral rout of the GOP, since the language of victory and defeat seems to be the only thing it understands.

If I sound biased, that’s because I am. Biased toward the actual record, not the spin; biased toward a president who has conducted himself with grace and calm under incredible pressure, who has had to manage crises not seen since the Second World War and the Depression, and who as yet has not had a single significant scandal to his name. “To see what is in front of one’s nose needs a constant struggle,” George Orwell once wrote. What I see in front of my nose is a president whose character, record, and promise remain as grotesquely underappreciated now as they were absurdly hyped in 2008. And I feel confident that sooner rather than later, the American people will come to see his first term from the same calm, sane perspective. And decide to finish what they started.


Jeremy Rifkin: Energy-sharing is the new internet (Wired UK)

The Second Industrial Revolution, powered by oil and other fossil fuels, is spiralling into a dangerous endgame: prices are climbing, unemployment remains high, debt is soaring and the recovery is slowing. Worse, climate change from fossil-fuel-based industrial activity looms. Facing a collapse of the global economy, humanity is desperate for a new vision to take us into the future.

History's great economic revolutions occur when new communication technologies converge with new energy systems. Energy revolutions make possible more expansive and integrated trade. Accompanying communication revolutions manage the new complex commercial activities. In the 18th and 19th centuries, cheap print technology and the introduction of state schools gave rise to a print-literate workforce with the skills to manage the increased commercial activity made possible by coal and steam power, ushering in the First Industrial Revolution. In the 20th century, centralised electricity communication -- the telephone, radio and television -- became the medium to manage a more complex and dispersed oil, auto and suburban era, and the mass consumer culture of the Second Industrial Revolution.

Today, internet technology and renewable energies are about to merge to create a powerful infrastructure for a Third Industrial Revolution (TIR). In the coming era, hundreds of millions of people will produce their own green energy and share it in an "energy internet", just as we now generate and share information online. The creation of a renewable energy regime, loaded by buildings, partially stored in the form of hydrogen, distributed via an energy internet and connected to plug-in zero-emission transport, establishes a five-pillar infrastructure that will spawn thousands of businesses and millions of sustainable jobs. The democratisation of energy will also bring with it a reordering of human relationships, impacting the way we conduct business, govern society, educate our children and engage in civic life.

The TIR will lay the foundations for a collaborative age. Its completion will signal the end of a 200-year commercial saga characterised by industrious thinking, entrepreneurial markets and mass workforces, and the beginning of a new era marked by collaborative behaviour, social networks and boutique professional and technical workforces. In the coming half-century, conventional, centralised business operations will be increasingly subsumed by the distributed business practices of the TIR; and the traditional, hierarchical organisation of power will give way to lateral power organised nodally across society.

At first glance, lateral power seems a contradiction. Power, after all, has traditionally been organised pyramidically. Today, however, the collaborative power unleashed by internet technology and renewable energies restructures human relationships, from top to bottom to side to side, with profound consequences. The music companies didn't understand distributed power until millions of people began sharing music online, and corporate revenues tumbled in less than a decade. Encyclopedia Britannica did not appreciate the collaborative power that made Wikipedia the leading reference source in the world. Newspapers didn't take the blogosphere seriously; now many titles are either going out of business or moving online. The implications of people sharing energy are even more far-reaching.

To appreciate how economically disruptive the TIR is, consider the changes over the past 20 years. The democratisation of information and communication has altered the nature of global commerce and social relations as significantly as the print revolution. Now, imagine the impact that the democratisation of energy across all of society is likely to have when managed by Internet technology.

Jeremy Rifkin is the author of The Third Industrial Revolution: How Lateral Power Is Transforming Energy, the Economy, and the World (Palgrave Macmillan)

Sunday, January 15, 2012

Mourning in a Digital Age | BRUCE FEILER - NYTimes

By BRUCE FEILER Published: January 13, 2012

I HAVE found myself in a season of loss. Every few weeks for the last six months, friends in the prime of life have suffered the death of a close family member. These deaths included a mother, a father, a sister, a brother, a spouse and, in one particularly painful case, a teenage child who died on Christmas morning.

The convergence of these passings brought home an awkward truth: I had little idea how to respond. Particularly when the surviving friend was young, the funeral was far away and the grieving party did not belong to a religious institution, those of us around that friend had no clear blueprint for how to handle the days following the burial.

In several of these cases, a group of us organized a small gathering. E-mails were sent around, a few pizzas and a fruit salad were rounded up, someone baked a cake. And suddenly we found ourselves in what felt like the birth pangs of a new tradition.

“It’s a secular shiva,” the hostess announced.

So what exactly were we creating? Grieving has been largely guided by religious communities, from celebratory Catholic wakes, to the 49 days of mourning for Buddhists, to the wearing of black (or white) in many Protestant traditions, to the weeklong in-house condolence gatherings that make up the Jewish tradition of shiva. Today, with religiosity in decline, families dispersed and the pace of life feeling quickened, these elaborate, carefully staged mourning rituals are less and less common. Old customs no longer apply, yet new ones have yet to materialize.

“We’re just too busy in this world to deal with losing people,” said Maggie Callanan, a hospice nurse for the last 30 years and the author of “Final Gifts,” an influential book about death and dying. “And yet we have to.”

Ms. Callanan and others in the field point to the halting emergence of guidelines to accommodate our high-speed world, in which many people are disconnected from their friends physically, yet connected to them electronically around the clock.

One puzzle I encountered is the proper way to respond to a mass e-mailing announcing a death.

“We still feel it’s nice to pick up the phone or send a card,” said Danna Black, an owner of Shiva Sisters, an event-planning company in Los Angeles that specializes in Jewish funeral receptions. “But if the griever feels comfortable sending out an e-mail, you can feel comfortable sending one back. Just don’t hit Reply All.”

Facebook presents its own challenges. The site’s public platform is an ideal way to notify a large number of people, and many grievers I know have taken comfort in supportive messages from friends. Like CaringBridge, CarePages and similar sites, social networks can become like virtual shiva locations for faraway loved ones.

But Megory Anderson, the founder of the Sacred Dying Institute in San Francisco (it seeks to bring spirituality to the act of dying), said problems arise when grievers begin encroaching on the personal space of others.

“The safest thing is to share your own story,” she said. Since everyone grieves differently, she cautions against sharing private details of other family members, loved ones or the deceased themselves. She also recommends sending a private message to grievers instead of writing on their wall.

Especially in a world in which so much communication happens online, the balming effect of a face-to-face gathering can feel even more magnified. The Jewish tradition of sitting shiva offers an appealing template. Named after the Hebrew word for “seven,” shiva is a weeklong mourning period, dating back to biblical times, in which immediate family members welcome visitors to their home to help fortify the soul of the deceased and comfort the survivors. Though many contemporary Jews shorten the prescribed length, the custom is still widely practiced.

The “secular shivas” we organized had a number of notable differences that proved crucial to their success. First, we organized them for Jews and non-Jews alike. Second, no prayers or other religious rituals were offered. Third, we held them away from the home of the griever, to reduce the burden. And finally, we offered the grieving party the option of speaking about the deceased, something not customary under Jewish tradition.

I recently reached out to the guests of honor, and, along with a few professionals, tried to identify a few useful starting points.

Don’t wait for the griever to plan. As Ms. Callanan observed: “One thing you can assume with a grieving person is that they’re overwhelmed with life. Suddenly keeping up with the bills, remembering to disconnect the hoses or shoveling the sidewalk no longer seem necessary.” With a traditional shiva, the burden falls on the family to open their home to sometimes hundreds of people. If you are considering a “secular shiva,” insist on doing the planning yourself, from finding a location, to notifying guests, to ordering food.

By invitation only. Traditional shivas are open houses; they’re communitywide events in which friends, neighbors and colleagues can stop by uninvited. Our events were more restricted, with the guest of honor suggesting fewer than a dozen invitees. “An old-fashioned shiva would have felt foreign to me,” said my friend Karen, who lost her mother last summer. “I’m more private. If it was twice the size, I wouldn’t have felt comfortable.”

“Would you like to share a few stories?” At the event we held for Karen, she opted to speak about her mom. For 45 emotional minutes, she talked about her mother’s sunny disposition, her courtship, her parenting style. It was like watching a vintage movie.

“I liked speaking about my mom,” she told me. “One, I hadn’t had time to fully grieve because I was so focused on my dad. And two, there was something each of you could come away with about who my mom was in the world.”

At a later event, a Catholic friend who had lost her brother chose not to speak about him. She felt too fragile, she later explained. Instead she handed out CDs with a photo montage of her brother’s life. “I think if I hadn’t had the pictures, I would have felt the need to talk about him.”

The comfort of crowds. While I came away from these events convinced we had hit on a new tool for our circle of friends, I was quickly warned not to assume our model was universal.

“Introverts need to grieve, too,” Ms. Anderson said. “For some, a gathering of this kind might be a particular kind of torture.”

My friend who lost her brother had that reaction initially. “On the way over, I had some misgivings,” she said. “I was still crying every time I mentioned his name.” But the event surprised her, she said. “Seeing all my friends gathered, I couldn’t help but be happy. There was a reaffirming glimmer.”

Six months after my string of losses began, it hardly feels over. What I’ve taken away from the experience is a reminder of what I’ve seen often in looking at contemporary religion. Rather than chuck aside time-tested customs in favor of whiz-bang digital solutions, a freshening of those rituals is often more effective. Our “secular shivas” took advantage of the Internet (e-mail organizing, ordering food online); coupled it with some oft-forgotten benefits of slowing down and reuniting; and created a nondenominational, one-size-doesn’t-fit-all tradition that can be tinkered with to fit countless situations.

Like all such traditions, they may not soften the blow of a loss, but they had the unmistakable boon of reaffirming the community itself.

Power, Confidence, and High Heels | Anthropology in Practice, Scientific American

Cinderella got the prince and Dorothy was envied. Why? They were well shod. What’s the deal with women’s relationship to their footwear?

Watch Me Walk Away

Click. Click. Click. Click.

With each measured step, my heels echoed with a finality that emphasized my leaving, which was important: I was angry and I wanted to be taken seriously. The sound of my three-inch heels striking the tiles spoke volumes—and did so much more eloquently than I would have been able to at the moment.

I had just had my first turn-on-your-heel-and-walk-away moment. A meeting with a senior vice president at a leading digital agency in New York City had gone horribly wrong: Her team had asked me to consult on a project they were considering, but within a few minutes it became clear that we would not be able to work together. She was rude to her staff and made two disparaging remarks about anthropologists. Annoyed, and believing that her behavior toward her staff spoke volumes about the sort of relationship we would have, I decided I had had enough. So I picked up my coat, turned on my heel, and walked out. It was empowering. It was a moment I’ll likely not forget soon. And it would not have been the same had I been wearing flats.

Many Western women make high-heels a part of their daily wardrobe. The relationship women have with their shoes often becomes the butt of jokes and a point of dismissal, usually on the following points:

Do women need to own so many shoes? Many men admit to having 3-4 pairs of shoes: boots, sneakers, and a pair or two of dress shoes in black and brown. Women, on the other hand, can easily have 3-4 times as many.

Do they need to be so high? Culturally, we’re primed to note the Buffy heel and the red sole of Louboutin, but it defies logic: High-heels can damage feet, which were not meant to be crammed into too tight quarters for eight hours a day (at least) or be balanced precariously on skinny supports.

Is it really sensible to spend so much on shoes? Forbes reports that women spent $17 billion on footwear between Oct. 2004 and Oct. 2005. More recent data seems to suggest that women aren’t spending quite so much—though popular opinion disagrees (1,2).

I’ve been thinking about this moment with the SVP and my relationship with heels recently. And so, it appears, have others around me—been thinking about my relationship with my shoes, I mean. I’ve only recently joined the ranks of the well-heeled. I was actually schooled in the “sensible shoe” philosophy, and will admit to being more at home in sneakers than in three-inch heels. But I’ve found that when you stand at 4’11” in flats, the world tends to overlook you—a point that a few friends have disagreed with, but then again, they’re all taller than 4’11”. Apparently, my rising heel has elicited some commentary among a subset of friends who are rather surprised that a smart, sensible woman such as myself would subject my feet to such a torturous experience. But I am not alone: on the subway and on the street, on their way to the office or a night out, there appear to be any number of women for whom shoes are an important aspect of dress. While it’s true that an individual woman’s presence is so much more than the footwear she has chosen for the day, shoes can influence our interactions with others: they change how we walk, how we stand, and how others perceive us.

A Short History of the High-Heel

Our early ancestors didn’t concern themselves with stilettos or the spring collection of Manolos. In all likelihood, they went barefoot. Shoes in the form of sandals emerged around 9,000 years ago as a means of protecting bare feet from the elements (specifically, frostbite) (3). The Greeks viewed shoes as an indulgence—a means of increasing status, though it was a Greek, Aeschylus, who created the first high heel, called korthonos, for theatrical purposes. His intent was to “add majesty to the heroes of his plays so that they would stand out from the lesser players and be more easily recognized” (4). Greek women adopted the trend, taking the wedge heel to new heights that the late Alexander McQueen would have likely applauded, although being unshod was the norm in Grecian culture. The adoption of shoes, and the heel, for Greeks appears to coincide with Roman influence, and ultimately Roman conquest. Roman fashion was viewed as a sign of power and status, and shoes represented a state of civilization.

In Europe, it was common for women to use a patten to help keep their skirts and soft slipper shoes clean, as the streets weren’t paved. Pattens were slightly elevated platforms that were worn over the slipper-type shoes that were common at the time. Heels served a functional purpose. However, this began to shift during the High Renaissance, when the Venetian courtesans began to wear chopines: extremely high platform shoes. Chopines could add 30(!) inches to a woman’s height, and were quickly adopted by the wealthy as a means of showing status—the higher one’s chopines, the higher one’s place in society. They were so difficult to walk in that women often needed a female servant to help keep them upright, and were ultimately banned for pregnant women as a number of women in Venice suffered miscarriages after falling (5). Chopines remained in vogue, however, because they proved effective at keeping clothes (and feet) clear of the muck that covered the streets.

The widespread popularity of the heel is credited to Catherine de Medici who wore heels to make her look taller. When she wore them to her wedding to Henry II of France, they became a status symbol for the wealthy. Commoners were banned from wearing them—though it’s doubtful that they would have been able to afford them anyway. Later, the French heel—predecessor to the narrow, tall heel of today—would be made popular by Marquise de Pompadour, mistress of Louis XV. These shoes initially required women to use walking sticks to keep their balance until the height of the heel was reduced.

In the US, the French heel was popularized in the late 19th century by a brothel, Madam Kathy’s, where the proprietor noted that business boomed after she employed a French woman who wore high-heels. So she ordered shoes for all of her girls—it seemed “the leggy look and mobile torso derived from wearing high heels was of considerable interest to patrons,” who then ordered these French-heeled shoes for their wives (6). Heel height would fall and rise again through the subsequent decades, leading ultimately to the various options available today. As we turn our attention to the next section, it should not escape the Reader’s notice that heels have been linked to “professional” women as well as the aristocracy. Hold onto this thought, Readers, as we will come back to it.

Suffering for Fashion … and Sex Appeal?

Nine out of ten women wear shoes that are too tight for them. And eight out of ten women admit to wearing shoes that hurt. According to the American Academy of Orthopaedic Surgeons, women are nine times more likely to develop a foot problem due to improperly fitting shoes when compared to men (7). These statistics are high because our feet weren’t intended to be slaves to fashion.

The human foot is one of the most intricate structures in the body: it contains one-third of the bones in the body (26), has 35 joints, and more than 100 ligaments, tendons, and muscles. Our feet absorb at least 2.5 times our body weight when we walk, were designed to help keep us upright, and bear striking differences when compared with the feet of other primates (8):

The big toe projects beyond other toes (generally, though the exception known as the Grecian toe is noted, where the second toe tends to be longer than the big toe), and is bound to the other toes (non-grasping), which has been linked to the development of the ball of the foot, and is connected to the human stride.

The arch(es) of the foot supports weight, absorbs the shock of walking, and enhances balance.

The heel of the foot is home to an enlarged muscle that helps lift the body up and forward, shifting weight to the ball of the foot, enabling us to walk and run.

High-heels place undue stress on feet, directing pressure to the toes instead of distributing it evenly between toe and heel, and the arch loses its ability to absorb the shock and help us balance. (Take some time and watch a woman walk in heels. While much attention is given to the sway of her hips, actually look at her feet—most women wobble just a little as their feet attempt to keep them stable.) Over time, these pressures can deform the foot, creating major problems for women later in life. Some of the damage resulting from high-heels includes:

fractures

bunions

lower back pain and posture change

shortened Achilles tendon

reduced mobility and heightened targeting in unsafe conditions

and increased energy demands (heart rate and oxygen consumption increase with heel height) (9).

The costs associated with high-heels have caused anthropologist E.O. Smith to further the argument that heel-height may be related to mate attraction—a case of sexual selection:

Based on comparative animal ecology and behavior one would predict that males should be advertising through the display of their assets (physical or otherwise). And while males do advertise in Western society, females also engage in equally conspicuous advertising and sexual signaling. Not only do we have male-male competition and female choice, but we also have female-female competition and male choice acting simultaneously (10).

Smith discusses the ways high-heels can alter the female silhouette into the shape touted by Western culture as sensual:

Increased heel height creates an optical illusion of ‘shortening’ the foot, slenderizes the ankle, contributes to the appearance of long legs, adds a sensuous look to the stride, and increases height to generate the sensation of power and status (11).

These ideas have been explored previously by numerous other researchers. For example, Rossi notes that high-heels alter the tilt of the pelvis, resulting in more prominence of the buttocks and displaying of the breasts, creating a “come-hither pose” also described by Rossi as the “pouter pigeon” pose, “with lots of breast and tail balanced precariously on a pair of stilts” (12). Smith concedes that we cannot definitively link the wearing of high-heels with sexually selected mating strategies in humans, but suggests that heels are a culturally derived and defined trait that helps women meet an ideal of beauty that may help them attract a mate.

Blurring the Line Between Courtesan and Lady

To some degree, popular opinion agrees with Smith. One of the comments made by a colleague about my tendency to sport heels with my wardrobe was that she was surprised by the heel height. For her it was a sign of shifting cultural norms, as heels “that high” (three inches) were typically reserved for Saturday night or going out [in her day]—in other words, they were not “work” shoes. Another—a man—noted that my heels may be an attempt to “show oats” (not sow, but show, as in “show off and attract attention”). In these comments linger traces of those who helped popularize heels: the courtesans, the prostitutes, and those women otherwise involved in selling beauty and appeal.

But we can’t overlook the role of the aristocrats either, who wore heels to reflect an elevated status, hide defects, and distinguish themselves. There is something to be said for being able to look someone (as close as possible) in the eye. Louis XIV knew this: a notoriously short man, he had cork heels added to his shoes, raising them to almost four inches in height. (When his court followed suit, he lowered his heel to about an inch.) And yet no one is implying that he was attempting to increase his sexual fitness—as a monarch, I think he had that taken care of. Perhaps courtesans wore heels to enhance their sexuality, but perhaps it also helped them transact their business in a more serious manner. Perhaps they knew what the aristocracy discovered: meeting someone’s eye changes the way they interact with you—it shifts the power dynamic, and that certainly can be appealing.

Heels have gone up and come down again, reflecting the culture, the times, and the needs of the population. Recently, author Elizabeth Semmelhack linked heel height in the US to periods of economic depression, suggesting that heels provided a sense of escapism in dire times (13). It is true that following the French Revolution, heels in France were lowered as the aristocrats sought to distance themselves from the power and status the higher heel represented.

Germaine Greer said:

Yet if a woman never lets herself go, how will she ever know how far she might have got? If she never takes off her high-heeled shoes, how will she ever know how far she could walk or how fast she could run?

I’m not denying that my heels change the way I walk or stand. But I am asserting that heels change the way others—men and women—interact with me. It may have to do with the fact that I seem to walk more authoritatively (as I attempt to keep my balance, each foot must come down surely), and my standing stance is a bit straighter (again, balance), but the added height definitely helps. And with Greer’s remarks in mind, I make sure I have a pair of flats with me for when I want and need to run.

References: Smith, E.O. (1999). High Heels and Evolution: Natural Selection, Sexual Selection, and High Heels. Psychology, Evolution, and Gender, 1(3), 245-277.

Notes:
1. Forbes. Most Expensive Women’s Shoes.
2. Fashion Bomb Daily. New Study Says Most Women Own About 17 Pairs of Shoes.
3. The earliest confirmed instance of footwear dates to approximately 9,000 years ago, and was found in Oregon. However, trace imprints of what may be sandals have been dated to 500,000 years ago.
4. Smith, E.O. (1999). High Heels and Evolution: 254.
5. History of Footwear.
6. Smith 1999: 255.
7. AAOS. Tight Shoes and Foot Problems.
8. Smith 1999: 251.
9. Smith 1999: 265.
10. Smith 1999: 268.
11. Smith 1999: 269.
12. Smith 1999: 269.
13. Shine. Dangerous High Heels.


About the Author: Krystal D'Costa is an anthropologist working in digital media in New York City. You can follow AiP on Facebook. Follow on Twitter @krystaldcosta.