Friday, June 29, 2012

Linux grabs its single biggest win | Jack Wallen TechRepublic

By Jack Wallen June 18, 2012, 7:48 AM PDT

Takeaway: The U.S. Navy and Dept. of Defense have learned valuable lessons that translate to huge contracts for the Linux OS. What does this mean for open source and the community that drives it? Jack Wallen offers his take.

Northrop Grumman Transformational Fire Scout Vertical Takeoff and Landing Tactical Unmanned Aerial Vehicle system. Ever hear of it? It’s a U.S. Navy drone, otherwise known as the MQ-8B Fire Scout. Why is it significant? Because the Navy recently decided to drop the Windows operating system it was running in favor of Linux. And just why did they drop the previous operating system?

A virus.

That’s right…a virus had previously infected the operating system on the U.S. Air Force’s drone control systems.

A virus…on the system controlling drones. Think about it. Imagine the consequences of a drone or fighter plane suffering from a computer virus — while armed! That was a significant enough “oops” to lead the U.S. Navy to migrate their drone systems from Windows to Linux.

When I read this, I was shocked. First and foremost, I couldn’t believe such planes were controlled by anything powered by any flavor of the Windows operating system — not when the U.S. Navy has the intelligence and resources to create its own OS. Once that shock flushed from my system, I had to wonder…who would be the one to run Combofix on the systems running those drones? What a horrible job that would be…having to take the fall for an infected computer system on a military aircraft.

Anyway…I digress.

The decision brings a $28 million contract to the Linux community (who, exactly, will be getting this contract is unknown), but that is not all. Based on this and other issues with non-free software, the U.S. Department of Defense is laying out guidelines on how its agencies can use open source code. And even though the DOD’s use of open source code will bend the usual GPL expectations for said code (it can’t, for obvious reasons, release code it uses and modifies back into the wild), this is a huge deal for open source everywhere.

Think about it. The DOD has decided that open source is a more secure and reliable route than proprietary systems. That trickle-down is going to have a serious, lasting effect in the world of Linux. Here’s how I see this working:

DOD begins Linux roll-out
US government begins widespread roll-out
Civilian security companies worldwide begin roll-out
Universities fall in line
Consumers begin clamoring for better security on their OS

Although this could seem like a pipe dream (that such change would rain down upon consumers), if the masses really want to get serious about their security (and they should), this is a lesson from on high that should not be taken lightly.

Windows is a good desktop operating system — but one with many serious security flaws. And although Microsoft is doing its best to tighten it all down, it’s simply and fundamentally insecure. The U.S. Navy and Department of Defense get this now. Maybe it’s time for the consumer to pick up on that thread and demand Linux on their desktops.

After all, if it’s good enough for the DOD and the Navy, isn’t it good enough for you?

I can already hear the naysayers proclaiming their usual litany of hate and doubt.

“Not enough games!” “No support!” “It doesn’t run ‘X’!”

Well, guess what: if there’s enough demand for it, eventually those complaints will fade away. Think of it like a relationship — you want to start a long-term relationship built on a foundation of stability, friendship, and trust. Why? Because eventually the bedroom antics will dissipate and what remains will have to carry you into your twilight. Wouldn’t you rather have an operating system built on that same, strong foundation? Your love of games will eventually fade away. If your platform is solid and secure, you’ll enjoy it for many years to come. And if more people begin enjoying the Linux platform, eventually the games and the support and ‘X’ will arrive as well.

The U.S. Navy saw this.

Be the Navy.

About Jack Wallen

A writer for over 12 years, Jack's primary focus is on the Linux operating system and its effects on the open source and non-open source communities.


Tuesday, June 12, 2012

Why The Economy Can’t Get Out of First Gear | ROBERT B. REICH

TUESDAY, JUNE 12, 2012

Rarely in history has the cause of a major economic problem been so clear yet have so few been willing to see it.

The major reason this recovery has been so anemic is not Europe’s debt crisis. It’s not Japan’s tsunami. It’s not Wall Street’s continuing excesses. It’s not, as some economists tell us, because taxes are too high on corporations and the rich, and safety nets are too generous to the needy. It’s not even, as some liberals contend, because the Obama administration hasn’t spent enough on a temporary Keynesian stimulus.

The answer is in front of our faces. It’s because American consumers, whose spending is 70 percent of economic activity, don’t have the dough to buy enough to boost the economy – and they can no longer borrow like they could before the crash of 2008.

If you have any doubt, just take a look at the Survey of Consumer Finances, released Monday by the Federal Reserve. Median family income was $49,600 in 2007. By 2010 it was $45,800 – a drop of 7.7%.

All of the gains from economic growth have been going to the richest 1 percent – who, because they’re so rich, spend no more than half what they take in.

Can I say this any more simply? The earnings of the great American middle class fueled expansion for three decades after World War II. Their relative lack of earnings in more recent years set us up for the great American bust.

Starting around 1980, globalization and automation began exerting downward pressure on median wages. Employers began busting unions in order to make more profits. And increasingly deregulated financial markets began taking over the real economy.

The result was slower wage growth for most households. Women surged into paid work in order to prop up family incomes – which helped for a time. But the median wage kept flattening, and then, after 2001, began to decline.

Households tried to keep up by going deeply into debt, using the rising values of their homes as collateral. This also helped – for a time. But then the housing bubble popped.

The Fed’s latest report shows how loud that pop was. Between 2007 and 2010 (the latest data available) American families’ median net worth fell almost 40 percent – down to levels last seen in 1992. The typical family’s wealth is their home, not their stock portfolio – and housing values have dropped by a third since 2006.

Families have also become less confident about how much income they can expect in the future. In 2010, over 35% of American families said they did not “have a good idea of what their income would be for the next year.” That’s up from 31.4% in 2007.

But because their incomes and their net worth have both dropped, families are saving less. The proportion of families that said they had saved in the preceding year fell from 56.4% in 2007 to 52% in 2010, the lowest level since the Fed began collecting that information in 1992.

Bottom line: The American economy is still struggling because the vast American middle class can’t spend more to get it out of first gear.

What to do? There’s no simple answer in the short term except to hope we stay in first gear and don’t slide backwards.

Over the longer term the answer is to make sure the middle class gets far more of the gains from economic growth.

How? We might learn something from history. During the 1920s, income concentrated at the top. By 1928, the top 1 percent was raking in an astounding 23.94 percent of the total (close to the 23.5 percent the top 1 percent got in 2007), according to analyses of tax records by my colleague Emmanuel Saez and Thomas Piketty. At that point the bubble popped and we fell into the Great Depression.

But then came the Wagner Act, requiring employers to bargain in good faith with organized labor. Social Security and unemployment insurance. The Works Progress Administration and the Civilian Conservation Corps. A national minimum wage. And to contain Wall Street: the Securities Act and the Glass-Steagall Act.

In 1941 America went to war – a vast mobilization that employed every able-bodied adult and put money in their pockets. And after the war came the GI Bill, sending millions of returning veterans to college; a vast expansion of public higher education; and infrastructure investments, such as the National Defense Highway Act. The top tax rate on the rich remained at least 70 percent until 1981.

The result: By 1957, the top 1 percent of Americans raked in only 10.1 percent of total income. Most of the rest went to a growing middle class – whose members fueled the greatest economic boom in the history of the world.

Get it? We won’t get out of first gear until the middle class regains the bargaining power it had in the first three decades after World War II to claim a much larger share of the gains from productivity growth.

Virginia's dying marshes and climate change denial | Daniel Nasaw BBC News Magazine

5 June 2012 Last updated at 20:12 ET
By Daniel Nasaw BBC News Magazine
York River, Virginia

Dying wetland trees along Virginia's coastline are evidence that rising sea levels threaten nature and humans, scientists say - and show the limits of political action amid climate change scepticism.

Dead trees loom over the marsh like the bones of a whale beached long ago.

In the salt marshes along the banks of the York River in the US state of Virginia, pine and cedar trees and bushes of holly and wax myrtle occupy small islands, known as hummocks.

But as the salty estuary waters have risen in recent years, they have drowned the trees on the hummocks' lower edges. If - when - the sea level rises further, it will inundate and drown the remaining trees and shrubs, and eventually sink the entire marsh.

That threatens the entire surrounding ecosystem, because fish, oysters and crabs depend on the marsh grass for food.

“These are just the early warning signs of what’s coming,” says avian ecologist Bryan Watts, stepping carefully among the fallen pines.

The sea level in the Chesapeake Bay area and in south-eastern Virginia is predicted to rise by as much as 5.2ft (1.6m) by the end of the century.

Ancient geologic forces are causing the land literally to sink, while the amount of water in the oceans is increasing because of global warming, scientists say.

As a result, the low-lying coastal areas - and the towns in them - are at tremendous risk of flooding.

To address the problem, climate scientists, environmentalists and their political supporters say the US must dramatically reduce its fossil fuel emissions, while also taking steps to lessen the impact of coastal flooding and wetland erosion.

“There is time to turn the ship around,” says Michael Mann, a former University of Virginia climate scientist, “but there is not a whole lot of time.”

But in Virginia's state capital Richmond, as in Washington, many politicians remain sceptical about the extent to which humans are responsible for global warming.

They fear measures needed to curb climate change would hurt the economy, threaten private property, and harm commercial and industrial interests.

“Here in Virginia there is very little political will to address the mitigation side of things - reducing our carbon footprint, reducing greenhouse gas emissions,” says Carl Hershner, who studies coastal resources management at the Virginia Institute of Marine Science.

“There is a high degree of scepticism in the political sphere and among the general public.”

Virginia's attorney general, Republican Ken Cuccinelli, has waged an aggressive public battle against the Obama administration's efforts to rein in greenhouse gas emissions, which he said would drive up electricity costs and kill jobs in the state's coal industry.

While politicians in Washington and in Richmond, Virginia's state capital, have done little to address the problem, authorities along Virginia's coast have watched the waters rise and have been forced to take action.

The city government of Norfolk spends about $6m (£3.8m) a year to elevate roads, improve drainage, and help homeowners literally raise their houses to keep their ground floors dry, says Assistant City Manager Ron Williams.

About 5%-10% of the city's lowest-lying neighbourhoods are subject to heavy flooding during storms. City planners do not currently recommend any areas be abandoned to the tide, but “you have to have the conversation as you look 50 years out”, Mr Williams says.

At Naval Station Norfolk, the world's largest naval base, the US Navy is spending hundreds of millions of dollars to replace aging piers with new ones better able to withstand the rising water.

“Sea level rise was having a measurable impact on the readiness of the ships,” says retired Capt Joseph Bouchard, who was commander of the base from 2000 to 2003. “And that's unacceptable.”

So the Navy decided to replace the old piers with double-decked piers - one deck for utilities, the other for ship operations - with the upper deck 21ft above the current sea level.

“Were it not for sea level rise caused by climate change, the Navy could have replaced those piers with single-deck piers at much, much less cost,” he says.

Even a measure as ostensibly mild as funding for a flooding study was fraught with climate change politics.

Senator Ralph Northam, a Democrat, and Chris Stolle, a Republican member of Virginia's lower house, the House of Delegates, this year shepherded a resolution through the legislature spending $50,000 on a comprehensive study of the economic impact of coastal flooding on Virginia and to investigate ways to adapt.

To pass the bill, at Stolle's suggestion Northam excised the words “relative sea level rise” from an initial draft of the bill, replacing them with “recurrent flooding” in the final version.

Stolle says the change was necessary to ensure the bill focused on the issues Virginia politicians can handle - flooding - and not those they cannot address - global warming. In any case, the jury's still out on mankind's contribution to global warming, he says.

“Other folks can go argue about sea-level rise and global warming,” Stolle says. “What matters is people's homes are getting destroyed, and that's what we want to focus on. To think that we are going to stop climate change is absolute hubris. The climate is going to change whether we're here or not.”

Northam describes the change in language as pragmatic politics - necessary to win support from conservatives sceptical of climate change science.

“If you mention climate change to them, it's like a big red flag,” he says. “A barrier goes up. That's the way it is here in Virginia.”


Science Denial Is a Large and Growing Problem | Ryan Cooper Washington Monthly

June 11, 2012 3:06 PM

Kevin Drum isn’t happy with the latest talk around the long-running evolution belief survey, and with people wringing their hands over the fact that nearly half of Americans espouse a recent-creationism view when it comes to humankind:

Come on. This 46% number has barely budged over the past three decades, and I’m willing to bet it was at least as high back in the 50s and early 60s, that supposed golden age of comity and bipartisanship. It simply has nothing to do with whether we can all get along and nothing to do with whether we can construct a civil discourse.

The fact is that belief in evolution has virtually no real-life impact on anything. That’s why 46% of the country can safely choose not to believe it: their lack of belief has precisely zero effect on their lives. Sure, it’s a handy way of saying that they’re God-fearing Christians — a “cultural signifier,” as Andrew puts it — but our lives are jam-packed with cultural signifiers. This is just one of thousands, one whose importance probably barely cracks America’s top 100 list.

And the reason it doesn’t is that even creationists don’t take their own views seriously. How do I know this? Well, creationists like to fight over whether we should teach evolution in high school, but they never go much beyond that. Nobody wants to remove it from university biology departments. Nobody wants to shut down actual medical research that depends on the workings of evolution. In short, almost nobody wants to fight evolution except at the purely symbolic level of high school curricula, the one place where it barely matters in the first place. The dirty truth is that a 10th grade knowledge of evolution adds only slightly to a 10th grade understanding of biology.

I think this goes too far. For starters, saying evolution adds only slightly to a 10th grade understanding of biology is to say that there is no 10th grade understanding of biology, at all. Evolution is the single most important concept in biology, the idea that changed it from a random collection of facts to a real scientific discipline. Biology without evolution is akin to physics without math, and denying it is akin to denying heliocentrism.

Furthermore, I’d say a lack of widespread understanding of evolution is hurting the country, most obviously in the form of antibiotic resistance. Industrial feedlots raise their animals stewed in powerful antibiotics to shave operating costs, which is leading to bacteria evolving past them and resistant infections cropping up in humans. It’s a classic case of concentrated benefits and dispersed costs, which are tough to overcome in any case, but an understanding of evolution makes the situation immediately and alarmingly obvious, while disbelief can cloud it. Witness hack “scientists” at Liberty University, who publish work quibbling with the details of the evidence and thereby muddy the conversation. I’m not saying that’s the only factor, but surely if 80 percent of the country had a strong understanding of evolution, it would be easier to horsewhip the FDA into outlawing antibiotic use in non-sick animals.

More fundamentally, science denial in general is growing like gangbusters on the right, most obviously with respect to climate change. All the denier techniques now in common use among people like Jim DeMint—hysterical accusations, the fog of bogus but science-y sounding data, incessant TV appearances of the few deniers with actual credentials, taking things out of context, character assassination, repetition of debunked talking points, etc.—all these were perfected in the trenches of the evolution-creationism wars. It’s no accident that global warming denial found such fertile ground on the right.

Ryan Cooper is a General Assistant at the Washington Monthly, on Twitter at twitter.com/ryanlcooper.

The Fiscal Legacy of George W. Bush | BRUCE BARTLETT The New York Times

By BRUCE BARTLETT June 12, 2012, 6:00 AM

Bruce Bartlett held senior policy roles in the Reagan and George H.W. Bush administrations and served on the staffs of Representatives Jack Kemp and Ron Paul. He is the author of “The Benefit and the Burden: Tax Reform – Why We Need It and What It Will Take.”

Republicans assert that Barack Obama assumed sole responsibility for the budget on Jan. 20, 2009. From that date, all increases in the debt or deficit are his responsibility and no one else’s, they say.

This is, of course, nonsense – and the American people know it. As I documented in a previous post, even today 43 percent of them hold George W. Bush responsible for the current budget deficit versus only 14 percent who blame Mr. Obama.

The American people are right; Mr. Bush is more responsible, as a new report from the Congressional Budget Office documents.

In January 2001, the office projected that the federal government would run a total budget surplus of $3.5 trillion through 2008 if policy was unchanged and the economy continued according to forecast. In fact, there was a deficit of $5.5 trillion.

The projected surplus was primarily the result of two factors. First was a big tax increase in 1993 that every Republican in Congress voted against, saying that it would tank the economy. This belief was wrong. The economy boomed in 1994, growing 4.1 percent that year and strongly throughout the Clinton administration.

The second major contributor to budget surpluses that emerged in 1998 was tough budget controls that were part of the 1990 and 1993 budget deals. The main one was a requirement that spending could not be increased or taxes cut unless offset by spending cuts or tax increases. This was known as Paygo, for pay as you go.

During the 2000 campaign, Mr. Bush warned that budget surpluses were dangerous because Congress might spend them, even though Paygo rules prevented this from happening. His Feb. 28, 2001, budget message reiterated this point and asserted that future surpluses were likely to be even larger than projected due principally to anticipated strong revenue growth.

This was the primary justification for a big tax cut. Subsequently, as it became clear that the economy was slowing – a recession began in March 2001 – that became a further justification.

The 2001 tax cut did nothing to stimulate the economy, yet Republicans pushed for additional tax cuts in 2002, 2003, 2004, 2006 and 2008. The economy continued to languish even as the Treasury hemorrhaged revenue, which fell to 17.5 percent of the gross domestic product in 2008 from 20.6 percent in 2000. Republicans abolished Paygo in 2002, and spending rose to 20.7 percent of G.D.P. in 2008 from 18.2 percent in 2001.

According to the C.B.O., by the end of the Bush administration, legislated tax cuts reduced revenues and increased the national debt by $1.6 trillion. Slower-than-expected growth further reduced revenues by $1.4 trillion.

However, the Bush tax cuts continued through 2010, well into the Obama administration. These reduced revenues by another $369 billion, adding that much to the debt. Legislated tax cuts enacted by President Obama and Democrats in Congress reduced revenues by an additional $407 billion in 2009 and 2010. Slower growth reduced revenues by a further $1.3 trillion. Contrary to Republican assertions, there were no additional revenues from legislated tax increases.

In late 2010, Mr. Obama agreed to extend all the Bush tax cuts for another two years. In 2011, this reduced revenues by $105 billion.

On the spending side, legislated increases during the Bush administration added $2.4 trillion to deficits and the debt through 2008. This includes $121 billion for Medicare Part D, a new entitlement program enacted by Republicans in 2003.

Economic factors added almost nothing to increased spending – just $27 billion in total. This is mainly because interest rates were much lower than C.B.O. had anticipated, leading to lower spending for interest on the debt.

After 2008, it becomes harder to separate spending that was initiated under Mr. Bush from that under Mr. Obama. We do know that spending for Part D has risen rapidly – Republicans phased in the program to disguise its budgetary cost – adding $150 billion to the debt during 2009-11.

According to a recent report from the Center for Strategic and International Studies, the unfunded wars in Iraq and Afghanistan increased the debt by $795 billion through the end of fiscal 2008. The continuation of these wars by Mr. Obama added another $488 billion through the end of 2011.

Putting all the numbers in the C.B.O. report together, we see that continuation of tax and budget policies and economic conditions in place at the end of the Clinton administration would have led to a cumulative budget surplus of $5.6 trillion through 2011 – enough to pay off the $5.6 trillion national debt at the end of 2000.

Tax cuts and slower-than-expected growth reduced revenues by $6.1 trillion and spending was $5.6 trillion higher, a turnaround of $11.7 trillion. Of this total, the C.B.O. attributes 72 percent to legislated tax cuts and spending increases, 27 percent to economic and technical factors. Of the latter, 56 percent occurred from 2009 to 2011.

Republicans would have us believe that somehow we could have avoided the recession and balanced the budget since 2009 if only they had been in charge. This would be a neat trick considering that the recession began in December 2007, according to the National Bureau of Economic Research.

They would also have us believe that all of the increase in debt resulted solely from higher spending, nothing from lower revenues caused by tax cuts. And they continually imply that one of the least popular spending increases of recent years, the Troubled Asset Relief Program, was an Obama administration program, when in fact it was a Bush administration initiative proposed by the Treasury Department that was signed into law by Mr. Bush on Oct. 3, 2008.

Lastly, Republicans continue to insist that tax cuts are highly stimulative, often saying that they add nothing to the debt, when this is obviously ridiculous.

Conversely, they are adamant that tax increases must not be part of any deficit-reduction package because they never reduce deficits and instead are spent. This is also ridiculous, as the experience of the Clinton administration clearly shows. The new C.B.O. data confirm these facts.


Saturday, June 09, 2012

Understanding Bitcoin | Nicolas Mendoza Al Jazeera

Bitcoin is at the forefront of 'hacktivism', giving its users a free alternative to contemporary financial mechanisms. Last Modified: 09 Jun 2012 16:14

Bitcoin is an online decentralised economic system bypassing traditional infrastructures of modern finance [Image: zcopley, licensed under Creative Commons]

Hong Kong - The term hacktivism has been grossly misconstrued by the media. The image of masked saboteurs attacking from the darkness has romantic appeal, but this spectacular narrative of sabotage ultimately misinforms, other-ising hackers and distorting hacking itself. Richard Stallman defines hacking as “exploring the limits of what is possible, in a spirit of playful cleverness”. Real hacktivism, then, is less about denial-of-service attacks, which are acts of digital protest, than about the clever creation of, or intervention in, software forms for social change. It is less about sabotage than about alternatives.

Hacktivism allows dissent to overcome the limitations of protest, actually implementing alternatives and making them widely available without asking for permission from the status quo. It gives wings to the possibility for gradual peaceful revolution: alternatives no longer need to remain dreams, but can become real options for real people.

Hacktivism often opens real spaces by selling the idea first to the machines, after which people realise other ways are possible and allow themselves to think in new ways. This is what the work of a programmer known as Satoshi Nakamoto did for economics. In 2008 he coded a critique of the world's monetary system into a P2P computer protocol he called Bitcoin. Bitcoin started running on January 3, 2009, and is now a working decentralised monetary system with thousands of users around the world.

The Bitcoin protocol is based on a fundamental critique of the world's monetary system: that it demands undeserved amounts of trust from us. Nakamoto thought that it would be better to place trust outside the monetary system itself and back into social life:

The root problem with conventional currency is all the trust that's required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust. Banks must be trusted to hold our money and transfer it electronically, but they lend it out in waves of credit bubbles with barely a fraction in reserve. We have to trust them with our privacy, trust them not to let identity thieves drain our accounts. Their massive overhead costs make micropayments impossible.

Through a clever use of encryption technology, the Bitcoin protocol enables this move. In networked storage systems, Nakamoto explains, strong encryption technology affords end users peace of mind because they no longer need to trust the system admin with their privacy. He argues that if money could be similarly encrypted, middlemen who provide trust (e.g. banks) could be bypassed:

It's time we had the same thing [strong encryption] for money. With e-currency based on cryptographic proof, without the need to trust a third party middleman, money can be secure and transactions effortless. (…)

[Bitcoin] takes advantage of the nature of information being easy to spread but hard to stifle. The result is a distributed system with no single point of failure. Users hold the crypto keys to their own money and transact directly with each other, with the help of the P2P network to check for double spending.

The result is Bitcoin. It is not controlled by any state nor owned by any company; neither is it a company in itself. It is merely an open source computer protocol that runs over the internet.
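As an aside for readers curious about the mechanics: the “crypto keys” Nakamoto describes are public-key signature keys. The sketch below (Python, using the third-party ecdsa package and secp256k1, the curve Bitcoin actually uses) shows the sign-and-verify primitive in isolation; the transaction string is invented for illustration and is nothing like Bitcoin’s real wire format.

    # A minimal sketch of sign-and-verify, the primitive behind users
    # "holding the crypto keys to their own money".
    # Requires: pip install ecdsa. The transaction text is a toy stand-in.
    from ecdsa import SigningKey, SECP256k1

    # Each user generates a private key; the matching public key is shared freely.
    private_key = SigningKey.generate(curve=SECP256k1)  # secp256k1 is Bitcoin's curve
    public_key = private_key.get_verifying_key()

    # Only the private-key holder can produce a valid signature over a payment...
    transaction = b"pay 1.0 BTC from alice to bob"
    signature = private_key.sign(transaction)

    # ...but any node on the network can verify it with the public key alone,
    # so no bank is needed to vouch for the sender.
    assert public_key.verify(signature, transaction)
    print("signature verified without a trusted middleman")

A signature alone cannot prevent double spending, which is why, as the quote above says, the P2P network itself checks every transaction against the shared record.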

Finite fiat, if I may

Strictly speaking, I would argue, Bitcoin is a fiat currency. The term fiat is Latin for “let it be done”: it designates systems where an entity (eg the Federal Reserve) summons new money into existence by saying, in a god-like way, “let it be done”. In the case of Bitcoin, new coins are brought into existence across the network by the algorithms in the protocol.

No entity or individual is entitled to new bitcoins on any merit other than their standing, in terms of computing power, among the sum of active nodes in the network. Of course, anyone can also earn already-existing bitcoins through work, through the exchange of goods and services, or (in the case of social organisations) through the trust the public places in their ability to do good.

In short, bitcoins are created through a transparent and distributed process determined by mathematics. Bitcoin is finite post-Westphalian fiat, a monetary system where currency is indeed created, but through an algorithm driven by the logic of the network of distributed - rather than concentrated - power.
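Concretely, the algorithm that summons new bitcoins follows a fixed, public schedule: every block pays its miner a subsidy that started at 50 BTC and halves every 210,000 blocks (roughly every four years). A few lines of Python are enough to check the well-known consequence that total issuance stays just under 21 million coins:

    # Reproduce Bitcoin's supply cap from its issuance rule: the per-block
    # subsidy starts at 50 BTC and halves every 210,000 blocks until it
    # rounds down to zero (amounts are tracked in satoshis, 10^-8 BTC).
    SATOSHIS_PER_BTC = 100_000_000
    HALVING_INTERVAL = 210_000  # blocks between halvings

    subsidy = 50 * SATOSHIS_PER_BTC  # initial block reward, in satoshis
    total = 0
    while subsidy > 0:
        total += HALVING_INTERVAL * subsidy  # every block in this era pays `subsidy`
        subsidy //= 2                        # integer halving, as the protocol does

    print(total / SATOSHIS_PER_BTC)  # ~20,999,999.98 BTC: the famous 21M cap

No committee decides when to print more; the schedule is arithmetic that every node can verify for itself.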

Bitcoin does a better job than central entities (like the Fed) at creating new money because it does so in a decentralised way and without the need to create debt; it does a better job at storage than banks because it does so for free; and it does a better job at transfer than SWIFT because it is faster, cheaper, available to anyone, and not subject to the control of Western powers: SWIFT's ability to blockade Iranian banking transactions shows the ultimately unilateral nature of global financial channels.

What we have here is radically different from the current system where money creation is based on debt, politically motivated, surrounded by secrecy, inflationary, unilateralist, colonialist, and exploitative of powerless nations, etc. The flaws in the design of modern currency are at the roots of the social and ecological disasters we face today. Alternative currencies in general hold the promise of a way out, and the emergence of a vibrant Bitcoin economy in particular is one of the most interesting developments in recent times.

Money not owed

Bitcoin, I think, is revolutionary especially because it distributes the creation of money. The current system, based on national and personal debt, insidiously concedes obscene yet hard-to-contest power to rich nations and global banks. Debt-based money not only provides exorbitant privileges to powerful nations and threatens collapse in Europe and the US under its own absurdity; it also deforms the nature of human sociality.

The disastrous social consequences of placing debt at the root of the creation of money cannot be overstated. A society whose currency is backed by debt (aka the modern world) is a society where freedom is just a word, because the reality of everyday life, even for the middle classes of so-called rich countries, tends toward sublimated forms of slavery or debt peonage. An economy built on debt-based currency can only grow, as the 2008 economic collapse showed us, by putting more people deeper into debt. Inevitably, this leads to a society where the many always owe more and more to the few, eventually making democracy a farce. Bankers, as Robert Fisk puts it, are the dictators of the West.

The whole world's formal economy is backed by debt, and debt is backed by violence. This can be verified by defaulting, and subsequently resisting eviction: state force will be used sooner rather than later. Anthropologist David Graeber, one of the most prominent scholars in the Occupy Wall Street movement, articulates in his book Debt: The First 5,000 Years how debt embeds our culture and our very selves into an inhuman, unsustainable condition of iniquity:

…by turning human sociality itself into debts, they transform the very foundations of our being (since what else are we, ultimately, except the sum of the relations we have with others?) into matters of fault, sin, and crime, and making the world into a place of iniquity that can only be overcome by completing some great cosmic transaction that will annihilate everything.

The P2P money creation system that Bitcoin proposes is truly something else as it deflates the dark power of debt-based money in society; it allows envisioning a world where the wheels of debt are no longer at the origin of economic activity. This does not mean that Bitcoin is necessarily the final and perfect answer to our needs, but it is an important step in demonstrating that it can be done.

Ten years

It has already been over three years since the Bitcoin protocol started running, and yet these are still the very early days. What Nakamoto created is really just an open and autonomous backbone for global finance. Several layers of complementary technologies and services will need to be developed around this backbone before Bitcoin can aspire to really become an operational global currency for the 21st century.

Cleverly, he devised the system in a way that planted the incentive to take on this extremely complex task in the individuals most likely to have intimate knowledge of technology and an understanding of the nature of networked sociality.

The first miners joined the network out of intellectual curiosity, when it was nothing more than an experiment posted to an obscure cryptography forum. Bitcoins were easy to mine in the beginning, and as the network grew they gained real value. Suddenly many realised they had run into small fortunes, that those could potentially become larger fortunes if Bitcoin succeeded, and that it was really up to them to make it happen. They understood that their success depends on making Bitcoin useful, safe and easy for the largest possible number of people.
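For readers wondering what “mining” actually involves: it is a brute-force search for a number (a nonce) that makes a block’s hash fall below a network-set target. The toy Python loop below mimics that search under stated assumptions; the header bytes are made up, and the difficulty is set absurdly low so it finishes in seconds, whereas the real network retunes its target every 2016 blocks so that blocks keep taking about ten minutes.

    # A toy proof-of-work search in the spirit of Bitcoin mining: try nonces
    # until the double-SHA256 of the "block header" clears the target.
    # The header bytes and the low difficulty here are illustrative only.
    import hashlib

    def mine(header: bytes, difficulty_bits: int) -> int:
        """Return the first nonce whose block hash falls below the target."""
        target = 2 ** (256 - difficulty_bits)  # smaller target = harder search
        nonce = 0
        while True:
            data = header + str(nonce).encode()
            block_hash = hashlib.sha256(hashlib.sha256(data).digest()).digest()
            if int.from_bytes(block_hash, "big") < target:
                return nonce
            nonce += 1

    # Expect on the order of 2**20 (about a million) attempts on average.
    print("found nonce:", mine(b"toy header: prev hash + transactions", 20))

This is why early mining was cheap and later mining was not: as more computing power joins, the target shrinks, and each new coin costs more work to find.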

Bitcoin entrepreneurs have already developed an impressive, if experimental and imperfect, ecology of operational support infrastructures. Available services include exchange, escrow, arbitrage, transfer, storage, consulting, investment, auction, payment, mobile support, etc. A lot of things can already be paid for using Bitcoin. These services are autonomous initiatives, driven by no authority other than that which emanates from the needs of Bitcoin users and the nature of the Bitcoin protocol.

On a larger scale, Bitcoin's neutrality also gives it the potential to be a good national reserve currency as well as a low-friction medium for international trade. Governments, especially in poor countries, could start their own Bitcoin mining operations and make Bitcoin an acceptable means of tax payment: a Bitcoin reserves strategy could shield vulnerable economies from global currency cycles and provide increased autonomy from foreign powers.

If Bitcoin is to become a widely used everyday currency, it will not happen overnight. Rick Falkvinge, founder of the Swedish Pirate Party, believes that it will take Bitcoin about eight more years to reach the level of usability required for wide adoption:

I predict that Bitcoin will reach usability sometime around 2019. I base that prediction on earlier disruption technologies, where blogging started appearing in 1994 and reached mainstream adoption in 2004; file sharing started in 1989 over the net and Napster hit in 1999. You had streaming video 1995, mainly porn sites streaming animated gifs, what was then tip of the spear technology; Youtube was founded 2005 and just swept the floor with everyone else just because they were usable. This is not something bad; it is just an observation that it takes ten years to get a disruptive technology from inception to becoming so easy to use that it reaches mainline adoption.

When it comes to money, people are understandably reluctant towards experimentation. Either it works, meaning it provides clear advantages, or it doesn't. No part of the Bitcoin economy will last unless it is objectively a better deal for the end user than the flawed-but-known ways of today. In this sense Bitcoin is perhaps one of the hacktivist revolution's greatest tests: can the network itself actually handle the globe's finance? Can it really deliver better money for this incredibly complex world? It could very well be that it actually will. It seems to be advancing in that direction, slowly, step by step.

Nicolás Mendoza is a scholar, artist and researcher in global media from The University of Melbourne and a member of the P2P Foundation. Follow him on Twitter: @nicolasmendo

Friday, June 08, 2012

Romney wrong on tax cuts | Fareed Zakaria OpEd in The Washington Post

Fareed Zakaria

Thursday, Jun 7, 2012

The Obama campaign’s attack ad about Bain Capital presented a simplistic picture of a complicated reality. Private-equity firms can play a crucial role in keeping companies competitive. And although some firms have engaged in some bad practices, on the whole the industry has grown so large because it performs a useful function. But the worst part about the ad was that it had little to do with America’s challenges or Obama’s policies.

By contrast, Mitt Romney’s first major ad is substantive — and wrong. He tells us that on his first day in office — after approving the Keystone XL pipeline — he will “introduce tax cuts . . . that reward job creators not punish them.” The one idea that is almost certain not to jump-start this economy is a tax cut.

Why can we be sure of this? Because that is what we have done for the past three years. For those who think President Obama’s policies have done little to produce growth, keep in mind that the single largest piece of his policies — in dollar terms — has been tax cuts. They actually began before Obama, with the tax cut passed under the George W. Bush administration in response to the financial crisis in 2008. Then came the stimulus bill, of which tax cuts were the largest chunk by far — one-third of the total. The Department of Transportation, by contrast, got 6 percent of the total to fix infrastructure.

That wasn’t the end of it. There was the payroll tax cut, the small business tax cut, the extension of the payroll tax cut, and so on. The president’s Twitter feed boasted: “President Obama has signed 21 tax cuts to support middle class families.” And how has that worked out?

In the wake of a financial crisis caused by excessive debt, tax cuts are highly unlikely to lead to increased economic activity. People use the money to pay down their debts rather than shop for cars, houses and appliances. As for the idea that job creators are not creating jobs because their taxes are too high, think about it: Would Mitt Romney invest more of his money in American factories if only he had paid less than the 13.9 percent rate he paid last year? Please!

The Wall Street Journal invoked Milton Friedman to say that the problem with all of these tax cuts is that they are temporary. If only we had across-the-board cuts in rates. Except that these were tried as well. The 2001 Bush tax cuts were designed precisely along those lines. They were, in dollar terms, the largest tax cuts in U.S. history.

And the nonpartisan Congressional Research Service concluded in 2010 that “by almost any economic indicator, the economy performed better in the period before the [Bush] tax cuts than after the tax cuts were enacted. . . . GDP growth, median real household income growth, weekly hours worked, the employment-population ratio, personal savings, and business investment growth were all lower in the period after the tax cuts were enacted.” The years 2000 to 2007 were the period of the weakest job growth in the United States since the Great Depression.

The one certain effect of tax cuts would be to balloon the deficit. Bruce Bartlett, a former economic official under Ronald Reagan, points out that the aggregate revenue loss of the Bush tax cuts was the largest in U.S. history. “Both Harry Truman and Ronald Reagan passed larger individual tax cuts, but both took back about half of them with subsequent tax increases.”

When pressed, Romney and his advisers sometimes say that they are just for tax reform; other times, they cite the Simpson-Bowles plan. I’ve long argued that reforming the nation’s bloated and corrupt tax code is vital and that Simpson-Bowles is a superb framework for deficit reduction. But neither will cut taxes. Simpson-Bowles raises them by more than a trillion dollars. You can use euphemisms such as “ending tax expenditures” and “closing loopholes,” but when you do that, someone’s taxes will go up. And when you close big loopholes such as the deduction of mortgage interest — which is the only way to get real revenue — the taxes of tens of millions of people will go up.

Tax cuts have been a central cause of America’s deficit problems. For four decades, Washington politicians have bought popularity by cutting taxes, always saying that spending cuts or growth will make up for lost revenue. That rarely happened, and the result is $11 trillion in federal debt held by the public. To perpetuate this pandering one more time is not just dishonest — it is dangerous.

comments@fareedzakaria.com

Monday, June 04, 2012

Salt, We Misjudged You | The New York Times

June 2, 2012
By GARY TAUBES Oakland, Calif.

THE first time I questioned the conventional wisdom on the nature of a healthy diet, I was in my salad days, almost 40 years ago, and the subject was salt. Researchers were claiming that salt supplementation was unnecessary after strenuous exercise, and this advice was being passed on by health reporters. All I knew was that I had played high school football in suburban Maryland, sweating profusely through double sessions in the swamplike 90-degree days of August. Without salt pills, I couldn’t make it through a two-hour practice; I couldn’t walk across the parking lot afterward without cramping.

While sports nutritionists have since come around to recommend that we should indeed replenish salt when we sweat it out in physical activity, the message that we should avoid salt at all other times remains strong. Salt consumption is said to raise blood pressure, cause hypertension and increase the risk of premature death. This is why the Department of Agriculture’s dietary guidelines still consider salt Public Enemy No. 1, coming before fats, sugars and alcohol. It’s why the director of the Centers for Disease Control and Prevention has suggested that reducing salt consumption is as critical to long-term health as quitting cigarettes.

And yet, this eat-less-salt argument has been surprisingly controversial — and difficult to defend. Not because the food industry opposes it, but because the actual evidence to support it has always been so weak.

When I spent the better part of a year researching the state of the salt science back in 1998 — already a quarter century into the eat-less-salt recommendations — journal editors and public health administrators were still remarkably candid in their assessment of how flimsy the evidence was implicating salt as the cause of hypertension.

“You can say without any shadow of a doubt,” as I was told then by Drummond Rennie, an editor for The Journal of the American Medical Association, that the authorities pushing the eat-less-salt message had “made a commitment to salt education that goes way beyond the scientific facts.”

While, back then, the evidence merely failed to demonstrate that salt was harmful, the evidence from studies published over the past two years actually suggests that restricting how much salt we eat can increase our likelihood of dying prematurely. Put simply, the possibility has been raised that if we were to eat as little salt as the U.S.D.A. and the C.D.C. recommend, we’d be harming rather than helping ourselves.

WHY have we been told that salt is so deadly? Well, the advice has always sounded reasonable. It has what nutritionists like to call “biological plausibility.” Eat more salt and your body retains water to maintain a stable concentration of sodium in your blood. This is why eating salty food tends to make us thirsty: we drink more; we retain water. The result can be a temporary increase in blood pressure, which will persist until our kidneys eliminate both salt and water.

The scientific question is whether this temporary phenomenon translates to chronic problems: if we eat too much salt for years, does it raise our blood pressure, cause hypertension, then strokes, and then kill us prematurely? It makes sense, but it’s only a hypothesis. The reason scientists do experiments is to find out if hypotheses are true.

In 1972, when the National Institutes of Health introduced the National High Blood Pressure Education Program to help prevent hypertension, no meaningful experiments had yet been done. The best evidence on the connection between salt and hypertension came from two pieces of research. One was the observation that populations that ate little salt had virtually no hypertension. But those populations didn’t eat a lot of things — sugar, for instance — and any one of those could have been the causal factor. The second was a strain of “salt-sensitive” rats that reliably developed hypertension on a high-salt diet. The catch was that “high salt” to these rats was 60 times more than what the average American consumes.

Still, the program was founded to help prevent hypertension, and prevention programs require preventive measures to recommend. Eating less salt seemed to be the only available option at the time, short of losing weight. Although researchers quietly acknowledged that the data were “inconclusive and contradictory” or “inconsistent and contradictory” — two quotes from the cardiologist Jeremiah Stamler, a leading proponent of the eat-less-salt campaign, in 1967 and 1981 — publicly, the link between salt and blood pressure was upgraded from hypothesis to fact.

In the years since, the N.I.H. has spent enormous sums of money on studies to test the hypothesis, and those studies have singularly failed to make the evidence any more conclusive. Instead, the organizations advocating salt restriction today — the U.S.D.A., the Institute of Medicine, the C.D.C. and the N.I.H. — all essentially rely on the results from a 30-day trial of salt, the 2001 DASH-Sodium study. It suggested that eating significantly less salt would modestly lower blood pressure; it said nothing about whether this would reduce hypertension, prevent heart disease or lengthen life.

While influential, that trial was just one of many. When researchers have looked at all the relevant trials and tried to make sense of them, they’ve continued to support Dr. Stamler’s “inconsistent and contradictory” assessment. Last year, two such “meta-analyses” were published by the Cochrane Collaboration, an international nonprofit organization founded to conduct unbiased reviews of medical evidence. The first of the two reviews concluded that cutting back “the amount of salt eaten reduces blood pressure, but there is insufficient evidence to confirm the predicted reductions in people dying prematurely or suffering cardiovascular disease.” The second concluded that “we do not know if low salt diets improve or worsen health outcomes.”

The idea that eating less salt can worsen health outcomes may sound bizarre, but it also has biological plausibility and is celebrating its 40th anniversary this year, too. A 1972 paper in The New England Journal of Medicine reported that the less salt people ate, the higher their levels of a substance secreted by the kidneys, called renin, which set off a physiological cascade of events that seemed to end with an increased risk of heart disease. In this scenario: eat less salt, secrete more renin, get heart disease, die prematurely.

With nearly everyone focused on the supposed benefits of salt restriction, little research was done to look at the potential dangers. But four years ago, Italian researchers began publishing the results from a series of clinical trials, all of which reported that, among patients with heart failure, reducing salt consumption increased the risk of death.

Those trials have been followed by a slew of studies suggesting that reducing sodium to anything like what government policy refers to as a “safe upper limit” is likely to do more harm than good. These covered some 100,000 people in more than 30 countries and showed that salt consumption is remarkably stable among populations over time. In the United States, for instance, it has remained constant for the last 50 years, despite 40 years of the eat-less-salt message. The average salt intake in these populations — what could be called the normal salt intake — was one and a half teaspoons a day, almost 50 percent above what federal agencies consider a safe upper limit for healthy Americans under 50, and more than double what the policy advises for those who aren’t so young or healthy. This consistency, between populations and over time, suggests that how much salt we eat is determined by physiological demands, not diet choices.

One could still argue that all these people should reduce their salt intake to prevent hypertension, except for the fact that four of these studies — involving Type 1 diabetics, Type 2 diabetics, healthy Europeans and patients with chronic heart failure — reported that the people eating salt at the lower limit of normal were more likely to have heart disease than those eating smack in the middle of the normal range. This is effectively what the 1972 paper would have predicted.

Proponents of the eat-less-salt campaign tend to deal with this contradictory evidence by implying that anyone raising it is a shill for the food industry and doesn’t care about saving lives. An N.I.H. administrator told me back in 1998 that to publicly question the science on salt was to play into the hands of the industry. “As long as there are things in the media that say the salt controversy continues,” he said, “they win.”

When several agencies, including the Department of Agriculture and the Food and Drug Administration, held a hearing last November to discuss how to go about getting Americans to eat less salt (as opposed to whether or not we should eat less salt), these proponents argued that the latest reports suggesting damage from lower-salt diets should simply be ignored. Lawrence Appel, an epidemiologist and a co-author of the DASH-Sodium trial, said “there is nothing really new.” According to the cardiologist Graham MacGregor, who has been promoting low-salt diets since the 1980s, the studies were no more than “a minor irritation that causes us a bit of aggravation.”

This attitude that studies that go against prevailing beliefs should be ignored on the basis that, well, they go against prevailing beliefs, has been the norm for the anti-salt campaign for decades. Maybe now the prevailing beliefs should be changed. The British scientist and educator Thomas Huxley, known as Darwin’s bulldog for his advocacy of evolution, may have put it best back in 1860. “My business,” he wrote, “is to teach my aspirations to conform themselves to fact, not to try and make facts harmonize with my aspirations.”

Gary Taubes is a Robert Wood Johnson Foundation Independent Investigator in Health Policy Research and the author of “Why We Get Fat.”