ATAC: Abusable Technologies Awareness Center

Welcome to the Abusable Technologies Awareness Center (ATAC). Our mission is to provide current and accurate information about technology that oversteps its bounds. Whether the concerns relate to unexpected privacy violations or inappropriate security, ATAC serves as a clearinghouse for informed discussion. Our panelists, all respected computer scientists, introduce topics as new disclosures are made, and the forum is open to the public for discussion. This site is hosted at the Information Security Institute at Johns Hopkins University.


August 18, 2004

Report from Crypto 2004

Here's the summary of events from last night's work-in-progress session at the Crypto conference. (I've reordered the sequence of presentations to simplify the explanation.)

Antoine Joux re-announced the collision he had found in SHA-0.

One of the Chinese authors (Wang, Feng, Lai, and Yu) reported a family of collisions in MD5 (fixing the previous bug in their analysis), and also reported that their method can efficiently (2^40 hash steps) find a collision in SHA-0. This speaker received a standing ovation, from at least part of the audience, at the end of her talk.
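To put that 2^40 figure in perspective, here's a quick back-of-the-envelope comparison in Python. The ~2^80 birthday bound is the standard generic estimate for collisions in a 160-bit digest; the hash rate below is an assumption of mine for illustration, not a figure from the talk.

```python
# Back-of-the-envelope comparison of the reported 2^40-step SHA-0 collision
# search against the generic ~2^80 birthday bound for a 160-bit digest.
# The hash rate is an assumed 2004-era figure, used only for illustration.

reported_work = 2 ** 40      # hash steps claimed for the new SHA-0 method
birthday_bound = 2 ** 80     # generic collision search for a 160-bit hash

print(f"reported work:  {reported_work:.3e} hash computations")
print(f"birthday bound: {birthday_bound:.3e} hash computations")
print(f"speedup:        {birthday_bound // reported_work:.3e}x")

hashes_per_second = 10_000_000   # assumed rate for a single 2004-era PC
days = reported_work / hashes_per_second / 86_400
print(f"roughly {days:.1f} days of computation at that rate")
```

In other words, an attack at that cost is within reach of a single determined attacker, which is why the result drew such a strong reaction.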

Eli Biham announced new results in cryptanalyzing SHA-1, including a collision in a reduced-round version of SHA-1. The full SHA-1 algorithm does 80 rounds of scrambling. At present, Biham and Chen can break versions of SHA-1 that use up to about 40 rounds, and they seem confident that their attacks can be extended to more rounds. This is a significant advance, but it's well short of the dramatic full break that was rumored.

Where does this leave us? MD5 is fatally wounded; its use will be phased out. SHA-1 is still alive but the vultures are circling. A gradual transition away from SHA-1 will now start. The first stage will be a debate about alternatives, leading (I hope) to a consensus among practicing cryptographers about what the substitute will be.

Posted by Ed Felten at 10:45 AM

August 16, 2004

SHA-1 Break Rumored

There's a rumor circulating at the Crypto conference, which is being held this week in Santa Barbara, that somebody is about to announce a partial break of the SHA-1 cryptographic hash function. If true, this will have a big impact, as I'll describe below. And if it's not true, it will have helped me trick you into learning a little bit about cryptography. So read on....

SHA-1 is the most popular cryptographic hash function (CHF). A CHF is a mathematical operation which, roughly speaking, takes a pile of data and computes a fixed-size "digest" of that data. To be cryptographically sound, a CHF should have two main properties. (1) Given a digest, it must be essentially impossible to figure out what data generated that digest. (2) It must be essentially impossible to find a "collision", that is, to find two different data values that have the same digest.
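As a concrete illustration, here is a minimal Python sketch using the standard hashlib module; the messages are made up for the example.

```python
# Minimal illustration of a cryptographic hash function using Python's
# standard hashlib module (SHA-1 here, just as an example).
import hashlib

msg1 = b"Pay Alice $10"
msg2 = b"Pay Alice $10,000"

d1 = hashlib.sha1(msg1).hexdigest()
d2 = hashlib.sha1(msg2).hexdigest()

print(d1)  # 160-bit digest (40 hex characters), regardless of input length
print(d2)  # a different input gives a completely different digest

# Property (1): given only d1, recovering msg1 should be infeasible.
# Property (2): finding any two distinct inputs with the same digest
#               (a "collision") should also be infeasible.
```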

CHFs are used all over the place. They're used in most popular cryptographic protocols, including the ones used to secure email and secure web connections. They appear in digital signature protocols that are used in e-commerce applications. Since SHA-1 is the most popular CHF, and the other popular ones are weaker cousins of SHA-1, a break of SHA-1 would be pretty troublesome. For example, it would cast doubt on digital signatures, since it might allow an adversary to cut somebody's signature off one document and paste it (undetectably) onto another document.
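To see why, here's a small sketch of the cut-and-paste problem. The sign() function below is a toy placeholder rather than a real signature scheme; the point is only that, in practice, a signature is computed over the digest rather than over the full document.

```python
# Why a hash collision threatens digital signatures: signers sign the
# *digest* of a document, not the document itself, so two documents with
# the same digest share one valid signature. sign() is a toy stand-in,
# not a real signature scheme.
import hashlib

def sign(digest: bytes) -> bytes:
    # Toy placeholder; a real scheme would apply an RSA or DSA
    # private-key operation to the digest.
    return b"SIGNED:" + digest

contract_a = b"I agree to pay $10"
signature = sign(hashlib.sha1(contract_a).digest())

# A hypothetical second document (no real SHA-1 collision is shown here).
contract_b = b"I agree to pay $10,000"

# The signature transfers to contract_b exactly when the digests collide.
digests_collide = hashlib.sha1(contract_b).digest() == hashlib.sha1(contract_a).digest()
print("signature also valid for contract_b:", digests_collide)  # False for these inputs
```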

At the Crypto conference, Biham and Chen have a paper showing how to find near-collisions in SHA-0, a slightly less secure variant of SHA-1. On Thursday, Antoine Joux announced an actual collision for SHA-0. And now the rumor is that somebody has extended Joux's method to find a collision in SHA-1. If true, this would mean that the SHA-1 function, which is widely used, does not have the cryptographic properties that it is supposed to have.

The finding of a single collision in SHA-1 would not, by itself, cause much trouble, since one arbitrary collision won't do an attacker much good in practice. But history tells us that such discoveries are usually followed by a series of bigger discoveries that widen the breach, to the point that the broken primitive becomes unusable. A collision in SHA-1 would cast doubt over the future viability of any system that relies on SHA-1; and as I've explained, that's a lot of systems. If SHA-1 is completely broken, the result would be significant confusion, reengineering of many systems, and incompatibility between new (patched) systems and old.

We'll probably know within a few days whether the rumor of a collision in SHA-1 is correct.

Posted by Ed Felten at 01:32 PM

July 13, 2004

Security Theater

Lots of people are telling airport-security stories these days. Thus far I have refrained from doing so, even though I travel a lot, because I think the TSA security screeners generally do a good job. But last week I saw something so dumb that I just have to share it.

I'm in the security-checkpoint line at Boston's Logan airport. In front of me is an All-American family of five, Mom, Dad, and three young children, obviously headed somewhere hot and sunny. They have the usual assortment of backpacks and carry-on bags.

When they get through the metal detector, they're told that Mom and Dad have been pre-designated for the more intensive search, where they wand-scan you and go through your bags. This search is a classic example of what Bruce Schneier calls Security Theater, since it looks impressive but doesn't do much good. The reason it doesn't do much good is that it's easy to tell in advance whether you're going to be searched. At one major airport, for example, the check-in agent writes a large red "S" on your boarding pass if you're designated for this search; you don't have to be a rocket scientist to know what this means. So only clueless bad guys will be searched, and groups of bad guys will be able to transfer any contraband into the bags of group members who won't be searched, with plenty of time after the security checkpoint to redistribute it as desired.

But back to my story. Mom and Dad have been designated for search, and the kids have not. So the security screener points to the family's pile of bags and asks which of the bags belong to Mom and Dad, because those are the ones that he is going to search. That's right: he asks the suspected bad guys (and they must be suspected, otherwise why search them) which of their bags they would like to have searched. Mom is stunned, wondering if the screener can possibly be asking what she thinks he's asking. I can see her scheming, wondering whether to answer honestly and have some stranger paw through her purse, or to point instead to little Johnny's bag of toys.

Eventually she answers, probably honestly, and the screener makes a great show of diligence in his search. Security theater, indeed.

Posted by Ed Felten at 09:42 AM

June 07, 2004

The "right" kind of challenge to e-voting security

People often talk about "hacking challenges" and the like. I think that the proposals I have heard are misguided. I've written up what I think is a challenge that actually makes sense. If there were any way to make this happen (perhaps the secretaries of state who make purchasing decisions could exercise the leverage they have over the vendors), I think it would be very convincing.

Posted by Avi Rubin at 10:27 PM

June 06, 2004

Open Source won't save e-voting

In a NYT Magazine article published May 30th, Clive Thompson argues for Open Source code for electronic voting machines:
First off, the government should ditch the private-sector software makers. Then it should hire a crack team of programmers to write new code. Then -- and this is the crucial part -- it should put the source code online publicly, where anyone can critique or debug it. This honors the genius of the open-source movement. If you show something to a large enough group of critics, they'll notice (and find a way to remove) almost any possible flaw. If tens of thousands of programmers are scrutinizing the country's voting software, it's highly unlikely a serious bug will go uncaught.

It may very well be a good thing to have Open Source software for voting, but the assumption that underlies Thompson's argument--that Open Source is somehow a magic engine for producing bug-free software--is transparently false. Open Source software, like all software, is riddled with bugs. Many of these bugs have security implications. Moreover, these bugs can persist for long periods of time. For instance, the Linux mremap() problem (CAN-2004-0077, described here) has been in every Linux kernel since at least 2.2 (released in 1999) and was only discovered in March of 2004. Alternately, consider the OpenSSL buffer overflows. These had been around since at least 1998 and probably earlier but were only found in 2002. So much for tens of thousands of programmers finding any serious bug.

The truth of the matter is that--contrary to popular myth--practically nobody bothers to audit any Open Source code. Auditing code is a mind-destroyingly boring exercise, and it's not even clear what percentage of vulnerabilities a good audit actually finds (practically no research has been done on this point). I'm probably one of the 50-100 people most qualified to audit OpenSSL: I'm a security guy specializing in SSL who uses OpenSSL on a regular basis, and I've spent substantial amounts of time groveling through the source code. But the only time I look for security holes is if I happen to run into something that looks fishy. No one I know seriously believes that we've found the last security hole in OpenSSL or Linux.

This isn't to say that auditing source code isn't worthwhile. It's just that the idea that we can audit the bugs out of a piece of code is, in my view, fundamentally misguided. What audits are useful for is getting an idea of how good the overall code quality is. If you audit selected parts of a piece of software and find a bunch of serious errors, it's safe to conclude that the company needs to shoot the programmers and start over. So, for instance, when Kohno et al. found a bunch of problems in Diebold's e-voting system, the conclusion to draw wasn't that these were all the problems there were, but rather that Diebold didn't have the first clue how to write secure software. As Avi Rubin's page on e-voting points out:

To help mitigate the risks identified in the security analyses, Maryland proposed a set of technological changes to Diebold's voting machines as well as procedural changes to the election process. While this may help "raise the bar," it is impossible to know whether any security analysis identifies all the possible vulnerabilities present in an analyzed system. By only patching the known vulnerabilities, Maryland is not actually ensuring that the voting system will be secure. Rather, Maryland should follow security engineering best practices, which state that security can only be assured through a rigorous design process that considers security from a project's conception, not through a set of patches applied after the fact.

So, if we're going to have e-voting, what we really need is a procedure that allows for routinized and systematic review of voting systems. The purpose of this review is not to find vulnerabilities--though of course it would be nice to fix any that are found--but rather to assess whether the vendor is following good software engineering practices, with serious consequences if they are not. Open Source might or might not help us get that sort of review--personally, I expect that to get really thorough review you'll need to pay people to do it--but just making the code Open Source doesn't improve the situation much at all.

Posted by Eric Rescorla at 06:04 PM