December 30, 2003

Hidden data roundup

It is time for the 2003 roundup of information leakage through hidden data.

This summer I finally got around to writing up my scaled exploitation of other people's hidden data in Microsoft Word documents. The write-up has brought me many emails from users and administrators, including some from large organizations with sensitive data. It seems that many people find this cause for concern.
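The scripts mentioned above are not reproduced here, but the basic idea of surfacing text that a binary document format carries yet never displays can be sketched with a crude, stdlib-only extractor in the spirit of the Unix strings tool. The sample blob and its "leaked" strings below are invented for illustration.

```python
import re

def extract_strings(data: bytes, min_len: int = 6) -> list[str]:
    """Return runs of printable ASCII of at least min_len bytes.

    A crude way to surface text (author names, file paths, deleted
    passages) that a binary document format may store but not show.
    """
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# A fake binary blob with leaked metadata and a deleted passage inside.
blob = b"\x00\x01\xd0\xcfAuthor: J. Smith\x00\x07old draft: do not publish\xff"
print(extract_strings(blob))
```

Real Word documents store far more structure than this (revision logs, embedded OLE streams), so a proper tool parses the format; this sketch only shows why "hidden" does not mean "gone".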

There have been several other events in this domain this year, some old and some new. Some examples have affected major news stories, while others have been simple eye candy.

This year, hidden data in the image domain made a minor splash. It turns out that some image-manipulation programs cache a thumbnail inside image files, and that thumbnail may not be updated when the image is edited, so a cropped image can still carry its uncropped preview.
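A quick way to spot such cached previews: a JPEG file should normally contain one start-of-image marker, so a second one usually signals an embedded thumbnail (real EXIF thumbnails live inside the APP1 segment; this minimal sketch just scans for a second JPEG signature, and the toy byte string is invented for illustration).

```python
def find_embedded_jpegs(data: bytes) -> list[int]:
    """Return byte offsets of JPEG start-of-image markers (FF D8 FF).

    A file with more than one SOI marker likely carries an embedded
    preview, e.g. an EXIF thumbnail that may predate a crop.
    """
    sig = b"\xff\xd8\xff"
    offsets, pos = [], 0
    while (pos := data.find(sig, pos)) != -1:
        offsets.append(pos)
        pos += 1
    return offsets

# A toy "image": an outer JPEG header with a second JPEG signature inside.
fake = b"\xff\xd8\xff\xe1" + b"\x00" * 10 + b"\xff\xd8\xff\xdb" + b"\x00" * 5
hits = find_embedded_jpegs(fake)
if len(hits) > 1:
    print(f"possible embedded thumbnail at offset {hits[1]}")
```

A real checker would walk the marker segments properly, but even this naive scan is enough to flag files worth a closer look.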

Try this Google search for more information and a good example of hidden nudity as opposed to hidden text. (Beware, this might be mildly unsafe for work, depending on where you work.)

Falling back to more established media types in this domain, thememoryhole.org brings us a beautiful DOJ example. They did such a nice job that I have not bothered to run my scripts over it.

The DC sniper trial is still in the news, and of course it yielded its own hidden-text fiasco back when the case was an unresolved panic; but that was last year. See an overview, the redacted version, and the unredacted version.

Obviously these examples are related, but they differ in a key respect: the image case is clueless software messing with its users, while the PDF case is clueless users messing with software. Neither is desirable.

I think any normal person could reasonably insist on these two principles:

  • I want no unforeseen hidden-data leakage through the data formats I happen to use.
  • I want my public officials to know how to safeguard data properly; after all, I personally pay them to do exactly that in this growing climate of NatSec sensitivity.

Perhaps the larger issue is that a format used for editing and maintaining data with rich functionality will probably not be suitable for publishing, and vice versa.

Posted by byers at 11:09 AM | Comments (273)

December 27, 2003

Votehere hacked

CNN is running an AP news story that Votehere's computers were hacked, and that the intruder probably had access to their source code. Votehere makes electronic voting software that is sold to many of the vendors of voting machines being used in elections in the US.

This is yet more evidence that the argument about keeping code for voting systems proprietary does not hold water. The bad guys may get access to the code, and we won't always know it. If the code is open, then there's no fear that it will be disclosed to the wrong party. Votehere is an extremely security conscious company with very smart people working there. If this could happen to them, it could surely happen to the other vendors, most of whom are several steps behind Votehere when it comes to security.

Posted by Avi Rubin at 11:12 AM | Comments (405)

Privacy Concerns with Vehicles

The New York Times is reporting on how the combination of GPS and a car's ability to transmit data leads to serious privacy compromises. It is interesting to note that the company making the technology claims that hackers cannot intercept the information because it is protected. I'm curious whether they did a good job with that. Regardless, location data can be subpoenaed. I don't think that either our society or our legal system is prepared for the privacy implications of these technologies. One example in the article is of a man who used a "black box", such as those found in airplanes, to track his ex-girlfriend. He received 9 months in prison.
Posted by Avi Rubin at 12:37 AM | Comments (515)

December 18, 2003

The rise of instant messaging

Instant messaging has become what talkd never was: very popular and useful. It is email without the spam (mostly).

Instant messages are generally sniffable on local networks; hence, it is a bad idea to IM sensitive details to your colleagues while sitting at a negotiation table. Instant messages usually pass through a central server, which is a fine place to eavesdrop, and I suspect that cops and spies have exploited this more than once. Companies can run their own servers (e.g., jabberd) to get a better-controlled central site and interfaces to the various IM services.

The "user is away" feature can be used to determine when someone is at home or at work, or even to figure out his standard commuting schedule. I have not yet heard of burglaries based on this information. You can also tell when someone is away from his desk, perhaps in a meeting.
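How little work such an inference takes can be sketched directly. The presence log below is entirely hypothetical, the kind of (timestamp, status) pairs a modified client could record from a buddy list; averaging the daily "went away" times already yields a plausible commute estimate.

```python
from datetime import datetime

# Hypothetical presence log recorded from a buddy list.
log = [
    ("2003-12-15 08:02", "away"), ("2003-12-15 17:45", "online"),
    ("2003-12-16 07:58", "away"), ("2003-12-16 18:03", "online"),
    ("2003-12-17 08:05", "away"), ("2003-12-17 17:51", "online"),
]

# Average the daily "went away" times to estimate a commute schedule.
away_minutes = [
    dt.hour * 60 + dt.minute
    for ts, status in log
    if status == "away"
    for dt in [datetime.strptime(ts, "%Y-%m-%d %H:%M")]
]
avg = sum(away_minutes) / len(away_minutes)
print(f"typically leaves around {int(avg) // 60:02d}:{int(avg) % 60:02d}")
```

Three days of passive observation, no exploit required; that is the point about the security model being terrible.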

I decided to bring up IM here because the government is grappling with this service, even on secure networks. IM usually has a terrible security model, and its users are often unaware of this fact. But warfighters like this kind of service, too.

There is much confusion about how IM is implemented, and implementations vary by product and situation. It would be nice to set the defaults to something secure, and to protect the login sessions so that user names and passwords are not exposed.

Posted by Bill Cheswick at 10:57 AM | Comments (660)

December 12, 2003

Face Recognition and False Positives

An Arizona school will install cameras and face-recognition software, to look for "sex offenders, missing children and alleged abductors", according to an AP story by Michelle Rushlo.

Two cameras, which are expected to be operational next week, will scan faces of people who enter the office at Royal Palm Middle School. They are linked to state and national databases of sex offenders, missing children and alleged abductors.

An officer will be dispatched to the school in the event of a possible match, said Maricopa County Sheriff Joe Arpaio.

This looks like a classic security mistake: ignoring the false positive problem.

Claims vary, but it seems safe to assume that this kind of face-recognition technology, deployed in this kind of environment, will have a false positive rate of at least 1%. In other words, for every 100 completely innocent people who pass in front of the camera, on average at least one person will be misidentified as a target (i.e., a sex offender or abductor).

Now let's do the math. If 200 people enter the school office each day, that means two false positives every day -- two pointless trips to the school by police officers. And how often does a sex offender or child abductor enter the office of that particular school? Almost never, I would guess, especially if they know the cameras are there.
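The arithmetic above can be checked directly. The numbers are the assumptions stated in the text (a 1% false positive rate, 200 visitors a day), plus one further assumption of my own for the base rate: a genuine target walking into the office perhaps once every three years.

```python
false_positive_rate = 0.01   # assumed: at least 1% misidentification
visitors_per_day = 200       # assumed office traffic

false_alarms_per_day = visitors_per_day * false_positive_rate
print(f"expected false alarms per day: {false_alarms_per_day:.0f}")

# Even with a perfect hit rate, if a true target shows up only once
# every few years, nearly every alarm the officers answer is false.
true_positives_per_day = 1 / (3 * 365)   # assumed: one real match in 3 years
fraction_false = false_alarms_per_day / (false_alarms_per_day + true_positives_per_day)
print(f"fraction of alarms that are false: {fraction_false:.4f}")
```

This is the classic base-rate effect: when targets are rare, even a small false positive rate means almost every alarm is a false one.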

So the vast, overwhelming majority of people singled out by this system will be innocent. Is the school really going to call the police every time? Are the police really going to conduct an investigation each time, to tell whether the person is lying when they deny being a sex offender? I doubt they will.

Responding to these calls, and investigating them thoroughly, can't be the best way for police officers to spend their time. Sometimes false positives are the most expensive aspect of a security technology.

Posted by Ed Felten at 12:53 PM | Comments (392)

December 11, 2003

Connectivity and critical systems

Could the August 14th East Coast power blackout have been catalyzed by the Blaster computer worm? As Bruce Schneier observes in a recent article, evidence seems to suggest that it was, probably because some computers at FirstEnergy were connected to the Internet. This suggests a greater problem, and something we must be extremely cautious about in the future: as technologies become increasingly networked, the number of possible vectors for attackers or worms to access systems will radically increase. This is of particular concern when the networked systems are in some ways "critical" since their failures could have dire consequences.

As cases in point, in addition to the computers at FirstEnergy, other critical systems have failed or been attacked because various technologies were networked together. For example, Diebold ATMs were infected with the Nachi worm. The Slammer worm caused portions of a 911 emergency response system to fail. And a virus forced cell phones in Japan to call the equivalent of 911. What other critical systems are connected to a network? How will they fail or be attacked in the future? And how serious will those failures or attacks be?

Posted by Tadayoshi Kohno at 03:04 PM | Comments (363)