October 27, 2003

Remote Controls for Traffic Lights

Many cities have installed systems that let emergency vehicles control traffic lights via infrared remote controls, thereby getting to the scene of an emergency more quickly. This is good. Yesterday's Detroit News reports on the availability of remote controls that allow ordinary citizens to control the same traffic lights. Now traffic engineers worry that selfish people will use the remotes to disrupt the flow of traffic. This could have been avoided by using cryptography in the design of the original system. Instead, we're likely to see a crackdown on the distribution of the remote controls, and the predictable black market in the banned devices. This seems like a classic example of the harm caused by deploying a technology without considering how it might be abused. It would be interesting to know why this happened. Did the vendor not stop to think about the potential for abuse? Did they think that nobody would ever figure out how to abuse the system? Did they fail to realize that anti-abuse methods were available? I wish I knew. [Link credit: Eric Rescorla]
Posted by Ed Felten at 09:38 AM | Comments (98)

October 23, 2003

The sport of web votes

This article reports on the rigging of an online poll for the coolest sports uniform. These types of polls are inherently riggable. The upside is that most are innocuous, even pointless; witness the front page of CNN. I experimented with another sports vote some time ago: the MLB vote for the All-Star game. In this vote we, the people, get to pick the teams for a special game outside of normal league play, in a poll with web voting enabled. At the time MLB was debating contraction, i.e. dropping two teams, probably one from each league. I decided that rigging the vote to favour players from the contracted teams would be a worthwhile effort, and that its ease should be investigated.

A short series of POST requests was required to implement the vote, along with some local processing to defeat the human-in-the-loop test using tuned OCR software. For each place on each team I voted for the relevant contraction player with probability 0.5, and otherwise for a random other eligible player. This way the multi-position ballots I cast were not all suspiciously identical, but the net effect was a massive vote for the contraction teams.
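The ballot-randomization step can be sketched in a few lines (the data structures and names below are invented for illustration; the real script wrapped this in the POST requests and OCR handling described above):

```python
import random

def choose_ballot(positions, contraction_pick, other_eligible, p=0.5, rng=random):
    """Build one multi-position ballot: for each position, vote for the
    contraction-team player with probability p, otherwise for a random
    other eligible player, so ballots vary but skew toward one outcome."""
    ballot = {}
    for pos in positions:
        if rng.random() < p:
            ballot[pos] = contraction_pick[pos]
        else:
            ballot[pos] = rng.choice(other_eligible[pos])
    return ballot
```

Each generated ballot would then be serialized into the poll's form fields and submitted as its own POST request.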

Many tricks are available to both sides of this arms race. From the side of voters we note the effectiveness of the following in dispersing hits across IP addresses:

  • Automated harvesting, bench-testing and usage of proxies.
  • For simpler actions, a special web bug pushed onto a third-party website can be very effective; this way you get other unsuspecting surfers to vote for you with genuine-looking traffic profiles.
  • Spam tactics can also work, with the vote being embedded in a mass email.
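The first item, proxy rotation, is easy to sketch (the `send` callable below is a stand-in for whatever HTTP client actually routes a POST through a given proxy; it is not part of any real poll's API):

```python
import itertools

def disperse_votes(ballots, proxies, send):
    """Spread vote submissions across source IP addresses by rotating
    each ballot through the next proxy in a pre-tested pool."""
    pool = itertools.cycle(proxies)
    routes = []
    for ballot in ballots:
        proxy = next(pool)
        send(ballot, proxy)   # e.g. POST the ballot via this proxy
        routes.append(proxy)
    return routes
```

From the poll operator's side, per-IP rate limits are exactly what this defeats, which is why the harvested proxies need bench-testing first.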

In a year without contraction it might be more fun to vote for the players from each league with the worst records over the current season.

So, I implemented this stuff and tested it, but of course did not vote enough to influence the results, as that might have got me in the news, like those aforementioned Broncos voters. Speaking of which, if the Broncos voters actually intended to make the Broncos lose by obvious ballot-stuffing, then they succeeded, and the ESPN.com engineers played right along with it. I think this is called the Hall of Mirrors.

The key point is that the adversary in this case does not have to be very sophisticated in order to succeed, even in the face of supposedly sophisticated countermeasures. We know from other areas that adversaries can become arbitrarily sophisticated to achieve specific goals they deem worth the effort. In summary, do not allow dangerously stuffable voting mechanisms to influence anything real. Although we seem to have some problems there in the real world.

Posted by byers at 12:18 PM | Comments (126)

Rescorla on Airport ID Checks

Eric Rescorla, at Educated Guesswork, notes a flaw in the security process at U.S. airports -- the information used to verify a passenger's ID is not the same information used to look them up in a suspicious-persons database.
Let's say that you're a dangerous Canadian terrorist, bearing the clearly suspicious name "Guy Lafleur". Now, the American government is aware of your activities and puts you on the CAPPS blacklist to stop you from boarding the plane. Further, let's assume that you're too incompetent to get a fake ID.... You have someone who's not on the blacklist buy you a ticket under an innocuous assumed name, say "Babe Ruth". This is perfectly legitimate and quite easy to do.... Then, the day before the flight you go onto the web and get your boarding pass. You print out two copies, one with your real name and one with the innocuous fake name. Remember, it's just a web page, so it's easy to modify.

When you go to the airport, you show the security agent your "Guy Lafleur" boarding pass and your real ID. He verifies that they match but doesn't check the watchlist, because his only job is to verify that you have a valid-looking boarding pass and that it matches your ID. Then, when you go to board the plane, you give the gate agent your real boarding pass. Since they don't check ID, you can just walk onboard.

What's happened is that whoever designed this system violated a basic security principle that's one of the first things protocol designers learn: the information you use to make a decision has to be the information you verify. Unfortunately, that's not the case here. The identity being verified is what's written on a piece of paper, and the identity being used to check the watchlist is in some computer database that isn't tied to the paper in any way other than through your computer and printer, which are easy to subvert.
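The mismatch Rescorla describes can be condensed into a toy check (the function boundaries and names here are invented for illustration; they are not how any real airline system is structured):

```python
WATCHLIST = {"Guy Lafleur"}

def reservation_screened(ticket_name):
    """The watchlist is checked against the name in the reservation
    database -- a name the attacker chose freely."""
    return ticket_name not in WATCHLIST

def checkpoint_passes(pass_name, id_name):
    """The screener only verifies that the printed boarding pass matches
    the ID; the watchlist never sees this name."""
    return pass_name == id_name

# The attack: book as "Babe Ruth", print a doctored pass as "Guy Lafleur".
booked = reservation_screened("Babe Ruth")                  # clears the watchlist
screened = checkpoint_passes("Guy Lafleur", "Guy Lafleur")  # matches the real ID
```

No single step ever compares the watchlisted identity against the verified one, which is exactly the violated principle.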
In a later post, he discusses some ways to fix the problem.
Posted by Ed Felten at 08:23 AM | Comments (397)

October 21, 2003

Another case for disclosure

The New York Times is reporting today on Victoria's Secret, which was forced to pay $50,000 in damages after customer information leaked from its web site because of a security flaw. It's an interesting precedent to hold companies accountable for their security flaws. Eliot Spitzer, the attorney general of New York, is quoted as saying, "A business that obtains consumers' personal information has a legal duty to ensure that the use and handling of that data complies with representations made about that company's security and privacy practices." An interesting point in the article is that Jason Sudowski, the customer who discovered the flaw, contacted Victoria's Secret and was ignored. Then he contacted MSNBC, who contacted Victoria's Secret, and they fixed the problem. This is another demonstration that public disclosure is the best way to keep companies accountable for security and privacy.
Posted by Avi Rubin at 08:23 AM | Comments (118)

Insiders Fuel Internet Movie Piracy

The movie industry has long claimed that piracy on the Internet is largely the result of digital video cameras in movie theaters and copies of commercially available DVDs. As the argument goes, copies of recent movies obtained from video cameras are of poor quality and thus of little use to the discerning moviegoer. The industry further claims that lost sales due to pirated versions of commercially available DVDs are significantly impacting their profitability. The argument being circulated in legislative circles states that more must be done to protect the industry from unethical elements of the general public.

A recent study conducted at AT&T Research seems to cast movie piracy in a different light. The study found that high quality (e.g., DVD) copies of movies are showing up on file sharing networks shortly after, and in some cases prior to, theatrical release. In one of the more surprising results, 77% of these samples appear to have been leaked by industry insiders. Indeed, of the movies that had been released on DVD, only 5% first appeared after their DVD release date. This indicates that consumer DVD copying currently represents a relatively minor factor compared with insider leaks. The study concludes with a brief analysis of the movie production and distribution process and offers recommendations for reducing vulnerability to insider threats.

Posted by Patrick McDaniel at 07:31 AM | Comments (98)

October 13, 2003

SunnComm exemplifies threat to disclosers of abusable technologies

Alex Halderman, a graduate student at Princeton, recently showed that SunnComm's technology for limiting distribution of digital songs could be defeated by holding down the shift key. SunnComm claimed that Halderman did not have the right to publish a paper describing this, nor to give it to reporters. They threatened to sue Halderman but backed down after receiving thousands of email protests. This is a scary demonstration of the challenges we face in bringing abusable technologies to light. SunnComm is shortsighted in not realizing that public disclosure of such security problems is the best mechanism for ensuring that the final products are secure. After all, as a result of Halderman's discovery, people will not be using the shift key to steal music. There is great danger in the DMCA enabling companies like SunnComm to intimidate researchers.
Posted by Avi Rubin at 08:23 AM | Comments (98)

October 03, 2003

Technology is not infallible: A case study of the RIAA pirate-tracking mechanisms

To many, science and technology are black magic, capable of accomplishing great feats but not very well understood. Thus, when a large organization with a strong PR engine (such as the Recording Industry Association of America, or RIAA) announces that it can identify users guilty of illegally sharing copyrighted materials over peer-to-peer networks, it seems natural for people to believe it. Technology is not, however, infallible. Although the RIAA would certainly like us to believe that its pirate-tracking mechanisms are accurate, this article shows that such a belief is not justified. Specifically, the article shows that the RIAA's tracking mechanisms could tag innocent people as copyright offenders. Indeed, the news media has already highlighted two cases in which this appears to have happened. Even worse, the article shows that a malicious user could frame innocent users for copyright violation. Perhaps the most important lesson from all this is that one must be very skeptical about technological claims. We need to evaluate new technologies with a much more critical eye.
Posted by Tadayoshi Kohno at 01:00 AM | Comments (104)