Computer science has long wrestled with the question of how anyone can know what a computer or the programs running on it are doing. The Halting Problem, described by Alan Turing in 1936, tells us that in the general case it’s impossible to predict what a computer program will do without actually running it. A related problem is knowing whether any defects or errors exist in a computer program. This is Gödel’s territory: his incompleteness theorems tell us that it’s hard-to-impossible to prove that a program is free from bugs.
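To see why the first of those claims holds, here is a minimal Python sketch of Turing’s diagonal argument. The halts() oracle is purely hypothetical, not a real library function; the whole point of the sketch is that no such oracle can be written.

    # A minimal sketch of Turing's diagonal argument. A halting oracle would be
    # a function halts(program, argument) that returns True if program(argument)
    # would eventually finish. paradox() takes any candidate oracle and defeats
    # it, which is why no such oracle can exist in general.

    def paradox(halts, program):
        if halts(program, program):   # the oracle claims program(program) halts...
            while True:               # ...so deliberately loop forever
                pass
        else:
            return                    # ...otherwise halt immediately

    # Ask the oracle about paradox applied to itself (with the same oracle baked
    # in): if it predicts halting, the program loops forever; if it predicts
    # looping, the program halts. Either way the oracle is wrong, so the only
    # general way to learn what a program does is to run it and watch.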
These sound like abstract problems, but they are vital to every computer user’s life. Without knowing what a computer might do in some circumstance, it’s hard to predict whether two programs will collide and cause unpredictable things to happen, like crashes, data loss, or data corruption. The inability to prove that programs are bug-free means that clever or lucky adversaries might find new bugs and use them to take over a computer’s functionality, which is how we get spyware and all its related infections.
After a bunch of ineffective answers (read: anti-virus programs), the solution that nearly everyone has converged on is curation. Rather than demanding that users evaluate every program to make sure that it is neither incompetent nor malicious, we delegate our trust to certifying authorities who examine software and give the competent and beneficial programs their seal of approval.
But ‘‘curation’’ covers a broad spectrum of activities and practices. At one end, you have Ubuntu, a flavor of GNU/Linux that offers ‘‘repositories’’ of programs that are understood to be well-made and trustworthy. Ubuntu updates its repositories regularly with new versions of programs that correct defects as they are discovered. Theoretically, an Ubuntu repository could remove a program that has been found to be malicious, and while this hasn’t happened to date, a recent controversy over the proposed removal of a program due to a patent claim confirmed the possibility. Ubuntu builds its repositories by drawing on still other repositories from other GNU/Linux flavors, making it a curator of other curators, and it doesn’t pretend that it’s the only game in town. A few seconds’ easy work is all it takes to enable other repositories of software maintained by other curators, or to install software directly, without a repository. When you do this, Ubuntu warns you that you’d better be sure you can trust the people whose software you’re installing, because they could assume total control over your system.
At the other end of the scale, you have devices whose manufacturers have a commercial stake in curation, such as Apple and Nintendo. Apple takes 30% of every sale in its App Store, and a further 30% of the transactions that take place within those apps. Naturally, Apple goes to great lengths to ensure that you can’t install ‘‘third party’’ apps on your device and deprive Apple of its cut. They use cryptographic signatures to endorse official apps, and design their devices to reject unsigned apps or apps with a non-matching signature. Nintendo has a similar business model: they charge money for the signature that informs a 3DS that a given program is authorized to run. Nintendo goes further than Apple with the 3DS, which automatically fetches and installs OS updates, and if it detects any tampering with the existing OS, it permanently disables the device.
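To make the signature mechanism concrete, here is a rough Python sketch of signature-gated installation. It is not Apple’s or Nintendo’s actual scheme; the Ed25519 keypair, the install() function, and the sample ‘‘app’’ are illustrative assumptions, built on the open-source cryptography library.

    # A rough sketch of signature-gated installation; not Apple's or Nintendo's
    # actual scheme. In a real device the vendor's public key is baked into the
    # firmware; here we generate a throwaway keypair so the sketch runs end to
    # end. Requires the third-party 'cryptography' package.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    vendor_key = Ed25519PrivateKey.generate()      # held by the vendor
    device_trusted_key = vendor_key.public_key()   # burned into the device

    def install(package: bytes, signature: bytes) -> str:
        """Install the package only if it carries a valid vendor signature."""
        try:
            device_trusted_key.verify(signature, package)
        except InvalidSignature:
            return "rejected: unsigned or tampered package"
        return "installed"

    app = b"some app binary"
    blessing = vendor_key.sign(app)        # the signature the vendor charges for
    print(install(app, blessing))          # installed
    print(install(app + b"x", blessing))   # rejected: unsigned or tampered package

The same verify-before-run logic, pointed at the operating system itself rather than at individual apps, is the idea behind the UEFI signature checks discussed below.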
Both Apple and Nintendo argue that their curatorial process also intercepts low-quality and malicious software, preventing it from entering the platform. I don’t doubt their sincerity. But their curatorial model treats owners as adversaries and deploys countermeasures to prevent people from knowing what the programs on their devices are doing. It has to, because a user who can fully inspect the operating system and its processes, and terminate or modify processes at will, can subvert the curation and undermine the manufacturer’s profits by choosing to buy software elsewhere.
This means that when bad things do slip through the curatorial process, the owner of the device has a harder time discovering it. A recent example of this was the scandal of CarrierIQ, a company that makes spyware for mobile phones. CarrierIQ’s software can capture information from the phone’s GPS, look at its files, inspect your text messages and e-mails, and watch you enter your passwords. Carriers had been knowingly and covertly installing this software, shipping an estimated 150,000,000 infected handsets.
The carriers are cagey about why they did this. CarrierIQ characterizes its software as a quality assurance tool that helped carriers gather aggregate statistics useful for planning and tuning their infrastructure. Some industry insiders speculate that CarrierIQ was also used to prevent ‘‘tethering’’ (using your mobile phone as a data modem for your laptop or other device). CarrierIQ ran covertly on both iPhones and Android phones, but it was discovered on Android first.
Android is closer to Ubuntu in its curatorial model than it is to Apple’s iOS. Out of the box, Android devices trust software from Google’s Marketplace, but a checkbox lets users choose to trust other marketplaces, too (Amazon operates an alternative Android marketplace). As a result, Android’s design doesn’t include the same anti-user countermeasures that are needed to defend a kind of curation that treats users adversarially. Trevor Eckhart, the researcher who discovered CarrierIQ in the wild, was able to hunt for suspicious activity in Android because Android doesn’t depend on keeping its workings a secret. A widely distributed, free, and diverse set of tools exists for inspecting and modifying Android devices. Armed with the knowledge of CarrierIQ’s existence, researchers went on to discover it on Apple’s devices, too.
It all comes down to whom users can trust, what they trust those parties to do, and how they verify their ongoing trustworthiness. You might trust Apple to ensure high quality, but do you trust them to police carriers to prevent the insertion of CarrierIQ before the phone gets to you? You might trust Nintendo to make a great 3DS, but do you trust them to ensure the integrity of their updating mechanism? After all, a system that updates itself without user permission is a particularly tasty target: if you can trick the device into thinking that it has received an official update, it will install it, and nothing the user does can prevent that.
Let’s get a sense of what trust means in these contexts. In 2010, a scandal erupted in Lower Merion, PA, an affluent Philadelphia suburb. The Lower Merion School District had instituted a mandatory laptop program that required students to use school-supplied laptops for all their schoolwork. These computers came loaded with secret software that could covertly operate the laptops’ cameras (when the software ran, the cameras’ green activity lights stayed dark), capture screengrabs, and read the files on the computers’ drives. This software was nominally in place to track down stolen laptops.
But a student discovered the software’s existence when he was called into his principal’s office and accused of taking narcotics. He denied the charge and was presented with a photo taken the night before in his bedroom, showing him taking what appeared to be pills. The student explained that these were Mike and Ike candies, and asked how the hell the principal had come to have a picture of him in his bedroom at night. It emerged that this student had been photographed thousands of times by his laptop’s camera without his knowledge: awake and asleep, clothed and undressed.
Lower Merion was just the first salvo. Today, a thriving ‘‘lawful interception’’ industry manufactures ‘‘network appliances’’ for subverting devices for the purposes of government and law-enforcement surveillance. In 2011, Wikileaks activist Jacob Appelbaum attended the ‘‘wiretapper’s ball,’’ a lawful interception tradeshow in Washington DC. He left with an armload of product literature from unwitting vendors, and a disturbing picture of an industry built around subverting users’ devices in order to spy on them. For example, one program sent fake (but authenticated) iTunes updates to target computers, then took over the PCs and gave control of their cameras, microphones, keyboards, screens, and drives to the program’s operators. Other versions disguised themselves as updates to Android or iPhone apps. The mobile versions could also read GPS data, record SMS conversations, and listen in on phone calls.
Around the same time, a scandal broke out in Germany over the ‘‘Staatstrojaner’’ (‘‘state-Trojan’’), a family of lawful interception programs discovered on citizens’ computers, placed there by German government agencies. These programs gathered enormous amounts of data from the computers they infected, and they were themselves very insecure. Technology activists in the Chaos Computer Club showed that it was trivial to assume control over Staatstrojaner-infected machines. Once the government infected you, they left you vulnerable to additional opportunistic infections from criminals and freelance snoops.
This is an important point: a facility that relies on a vulnerability in your computer is equally available to law enforcement and to criminals. A computer that is easy to wiretap is easy for everyone to wiretap, not just for cops with wiretapping warrants (assuming you live in a country that still requires warrants for wiretapping).
2011 was also the year that widespread attention came to UEFI, the Unified Extensible Firmware Interface. UEFI is a firmware layer that will be built into future PCs and that checks operating systems for valid curatorial signatures before they are allowed to run. In theory, this can be used to ensure that only uninfected operating systems run (an infected system will no longer match its signature).
UEFI has additional potential, though. Depending on whether users are allowed to override UEFI’s judgment, it could also be used to prevent users from running operating systems that they trust more than the signed ones. For example, you might live in Iran, and worry that the Iranian police will use UEFI to ensure that only versions of Windows with a built-in wiretapping facility can run on officially imported PCs. Why not? If Germany’s government can justify installing Staatstrojaners on suspects’ PCs, would Iran really hesitate to ensure that it could conduct Staatstrojaner-grade surveillance on anyone, without the inconvenience of installing the Staatstrojaner in the first place? German Staatstrojaner investigators believe the software was installed during customs inspections, while computers were out of their owners’ sight. Much easier to just ban the sale of computers that don’t allow surveillance out of the box.
At the same time, a UEFI-like facility in computers would be a tremendous boon to anyone who wants to stay safe. If you get to decide which signatures your UEFI trusts, then you can at least know that your Ubuntu computer is running software that matches the signatures from the official Ubuntu repositories, that your iTunes update was a real iTunes update, and that your Android phone installed a real security patch. Provided you trust the vendors not to give in to law-enforcement pressure (the Iran problem) or lose control of their own updating process (the CarrierIQ problem), you can be sure that you’re only getting software from curators you trust.
As an aside, other mechanisms might be used to detect subversion, loss of control, or pressure. You might use a verification process that periodically checks whether your software matches the software that your friends and random strangers also use, which raises the bar on subversion: now a government or crook has to get a majority of the people you compare notes with to install the subverted software, too.
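Here is a minimal Python sketch of that compare-notes idea. The peer reports, the majority rule, and the function names are hypothetical illustrations, not an existing protocol.

    # A minimal sketch of the compare-notes idea: hash what you are actually
    # running and check it against what a majority of peers report for the
    # same package and version. The report format is hypothetical.
    import hashlib
    from collections import Counter

    def digest(path):
        """SHA-256 of the software you are actually running."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def matches_majority(my_digest, peer_digests):
        """True if my copy agrees with what most of my peers report."""
        if not peer_digests:
            return False
        commonest, count = Counter(peer_digests).most_common(1)[0]
        return my_digest == commonest and count > len(peer_digests) // 2

    # A targeted, subverted copy stands out because it disagrees with the
    # majority; to hide it, an attacker must also subvert most of the peers
    # you compare notes with.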
All this is pretty uncontroversial in security circles. Some may quibble about whether Apple or Nintendo would ever be subverted, but no one would say that no company will ever be subverted. So if we’re to use curation as part of a security strategy, it’s important to treat a subverted vendor as a potential threat.
The answer that most of the experts I speak to come up with is this:
The owner (or user) of a device should be able to know (or control) which software is running on her devices.
This is really four answers, and I’ll go over them in turn, using three examples: a computer in an Internet cafe, a car, and a cochlear implant. That is, a computer you sit in front of, a computer you put your body into, and a computer you put in your body.
1. Users know.
The user of a device should be able to know which software is running on her devices.
You can’t choose which programs are running on your device, but every device is designed to faithfully report which programs are running on it.
If you walk into an Internet cafe and sit down at a computer, you can’t install a keylogger on it (to capture the passwords of the next customer), but you can also trust that there are no keyloggers in place on the computer.
If you get behind the wheel of a car, you can’t install a hands-free phone app, but you also know whether its GPS is sending your location to law enforcement or carjackers.
If you have a cochlear implant, you can’t buy a competitor’s superior, patented signal processing software (so you’ll have inferior hearing forever, unless you let a surgeon cut you open again and put in a different implant). But you know that griefers or spies aren’t running covert apps that can literally put voices in your head or eavesdrop on every sound that reaches your ear.
2. Owners know.
The owner of a device should be able to know which software is running on her devices.
The owner of an Internet cafe can install keyloggers on his computers, but users can’t. Law enforcement can order that surveillance software be installed on Internet cafe computers, but users can’t tell if it’s running. If you trust that the manufacturer would never authorize wiretapping software (even if tricked by criminals or strong-armed by governments), then you know that the PC is surveillance-free.
Car leasing agencies or companies that provide company cars can secretly listen in on drivers, secretly record GPS data, and remotely shut down cars. Police (or criminals who can take control of police facilities) can turn off your engine while you’re driving. Again, if you trust that your car’s manufacturer can’t be coerced or tricked into making technology that betrays drivers to the advantage of owners, you’re safe from these attacks.
If you lease your cochlear implant, or if you’re a minor or legally incompetent, your guardians or the leasing company can interfere with your hearing, making you hear things that aren’t there or switching off your hearing in some or all instances. Legislative proposals (like the ones that emanated from the Motion Picture Association of America’s Analog Reconversion Working Group) might have the side-effect of causing you to go deaf in the presence of copyrighted music or movie soundtracks.
3. Owners control.
The owner of a device should be able to control which software is running on her devices.
The owner of an Internet cafe can spy on her customers, even if the original PC vendor doesn’t make an app for this, and can be forced to do so by law enforcement. Users can’t install keyloggers.
Car leasing agencies and companies with company cars can spy on and disable cars, even if the manufacturers don’t support this. Drivers can’t know what locks, spyware, and kill-switches are in the cars they drive.
You can choose to run competitors’ software on your cochlear implant, no matter who made it, without additional surgery. If you’re a minor, your parents can still put voices in your head. If you don’t trust your cochlear implant vendor to resist law-enforcement or keep out criminals, you can choose one you do trust.
4. Users control.
The user of a device should be able to control which software is running on her devices.
An Internet cafe’s owner can’t spy on his customers, and customers can’t spy on each other.
Car leasing agencies and employers can’t spy on drivers. Drivers don’t have to trust that manufacturers will resist government or crooks. Governments can’t rely on regulating the car’s software to ensure emissions compliance, braking performance, or adherence to speed governors.
Everyone gets to choose which vendors supply software for their implants. Kids can override parents’ choices about what they can and can’t hear (and so can crazy people or people who lease their implants).
Of these four scenarios, I favor the last one: ‘‘The user of a device should be able to control which software is running on her devices.’’ It presents some regulatory challenges, but I suspect that these will be present no matter what, because crooks and tinkerers will always subvert locks on their devices. That is, if you believe that your self-driving cars are safe because no one will ever figure out how to install crashware on them, you’re going to have problems. Security that tries to control what people do with technology while it is in their possession, and that fails catastrophically if even a few people break it, is doomed to fail.
However, the majority of experts favor the third: ‘‘The owner of a device should be able to control which software is running on her devices.’’ I think the explanation for this is the totalizing nature of property rights in our society. At a fundamental level, we have absorbed William Blackstone’s notion of property as ‘‘that sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe.’’
Especially when it comes to corporate equipment. It’s a hard-to-impossible sell to convince people that employees, not employers, should be able to decide what software runs on their computers and other devices. In their hearts, most employers are Taylorists, devotees of 19th-century theories of ‘‘Scientific Management’’ which hold that a corporation’s constituent human parts should have their jobs defined by experts who determine the single, empirically best way to do something, and that employees should just do it that way.
There’s a lot of lip-service paid to the idea that businesses should ‘‘hire smart people and let them solve problems intelligently.’’ Silicon Valley offices are famed for their commitment to freewheeling, autonomy-friendly workplaces, but IT departments have always been hives of Taylorism.
The mainframe era was dominated by IT workers who got to choose exactly which screens employees were allowed to see. If your boss thought you should only be able to compare column x against column y, that was it. Employees who literally smuggled in Apple ][+s running VisiCalc to give themselves the freedom to compute in other ways were summarily fired.
Until they weren’t. Until the PC guerrillas were recognized by their employers as having personal expertise that was squandered through overtight strictures on how they did their jobs – the IT equivalent of the auto-plants that deployed Japanese-style management where teams were told to get the job done and were left alone.
But guerrillas have a way of winning and becoming the secret police. The PC-dominated workplace has a new IT priesthood that yearns to use UEFI as the ultimate weapon in its war against fellow employees who think they can do their jobs better with software of their own choosing.
But scenario four (users control) offers a different, more flexible, more nuanced approach to solving problems collaboratively with employees. A world in which employees can choose their own tools, but know absolutely what those tools are doing (within the limits of computer science), is one where users can be told what they must not do (‘‘don’t run a program that opens holes in the corporate firewall or gets access to your whole drive’’) and where they can verify that they’re doing as they’re told.
That only works if you trust your employees, which is a difficult bar for many corporate cultures to hurdle. Some dark part of every corporate id yearns to reduce every process to a pictographic McDonald’s cash register that can be operated by interchangeable commodity laborers who can be moved around, fired, and hired with impunity. The mature and humane face of management will always tell you that ‘‘human resources are our greatest asset’’ but it’s an imperfect veneer over the overwhelming urge to control.
http://www.locusmag.com/Perspectives/2012/03/cory-doctorow-whats-inside-the-box/