The Trouble with Shared Secrets in System Design

Happy Key
Photo by Robert Gaal

I’ve been working on some code recently that deals with encryption, and it always gives me the heebie-jeebies. It’s the kind of thing where you feel really secure right up to the point that you aren’t secure any more. My biggest worry is securing keys in software. For software to encrypt and decrypt things on its own (i.e. without human intervention such as entering a password), the keys need to be somewhere the software can read them. And if the software can read them, then someone else will eventually be able to read them too. Keys can be obscured and hidden, but that can never be as secure as a password known only in the mind of a user.
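To make the point concrete, here’s a toy sketch (all names and the XOR mask are made up for illustration) of why “obscured” isn’t the same as “secure”: if a program ships with an obfuscated key and the code to de-obfuscate it, anyone who can read the program can run the same steps.

```python
# Toy illustration only: a key "hidden" in shipped software via XOR
# obfuscation. The mask and the blob both live in the binary, so an
# attacker who can read the program can recover the key exactly the
# way the program itself does.

MASK = 0x5A  # visible to anyone who disassembles the software
OBFUSCATED_KEY = bytes(b ^ MASK for b in b"super-secret-key")

def recover_key(blob: bytes, mask: int) -> bytes:
    """The same de-obfuscation step the legitimate player performs."""
    return bytes(b ^ mask for b in blob)

print(recover_key(OBFUSCATED_KEY, MASK))  # the "secret" falls right out
```

The obfuscation only raises the cost of the first extraction; once one person publishes the key, the protection is gone for everyone.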

There are a couple of classic examples of this problem out there:

CSS, or Content Scramble System, is the encryption used on DVDs to prevent them from being copied or played in unapproved geographical regions. It wasn’t the strongest encryption in the world (it was 1995, after all), but it wasn’t broken until 1999, ostensibly to create a DVD player for Linux, which didn’t have an officially licensed player. Supposedly the initial project worked by disassembling a licensed software player to retrieve its keys.

AACS, or Advanced Access Content System, is the encryption used for HD-DVD and Blu-ray in the same way CSS is used for DVDs. In an attempt to improve on CSS, it adds a key revocation system, which allows content producers to press discs that won’t play with known compromised keys, thereby limiting the damage of an exposed key. It does seem to have held up somewhat better than CSS, but successful attacks against the system are out there.
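The revocation idea can be sketched in a few lines. This is a hypothetical simplification, not the real AACS scheme (which uses broadcast encryption over a tree of device keys); the key IDs and function names here are invented for illustration.

```python
# Hypothetical sketch of disc-side key revocation: newly pressed discs
# carry a list of device-key IDs known to be compromised, and a player
# whose key is on that list can't unlock the new title.

def can_play(device_key_id: str, disc_revocation_list: set) -> bool:
    """A disc refuses to work with any key revoked at pressing time."""
    return device_key_id not in disc_revocation_list

# A disc pressed after "player-key-007" leaked includes it in the list.
new_disc_revocations = {"player-key-007"}

print(can_play("player-key-001", new_disc_revocations))  # intact player: True
print(can_play("player-key-007", new_disc_revocations))  # leaked key: False
```

Note the limit of the approach: revocation only protects discs pressed after a leak is discovered; every disc already in circulation still accepts the compromised key.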

Trusted Computing is an attempt to plug the hole for good. It’s a set of hardware solutions that allow software to store and retrieve keys securely. It’s certainly better than software-only approaches, even if it isn’t immune to attacks.

For me there are three lessons from all this.

First, everything is a trade-off. You can add extra locks to your front door but that just makes it less convenient when you come and go. Finding the right balance is critical.

Second, diminishing returns set in steeply when there is a weak link in the chain. Once you know where your weakest link is, it’s much easier to tell when the diminishing-returns curve makes any further effort pointless.

And last, it pays to always be paranoid. It’s one thing to admit that you’ve realistically hit the limit of how secure you can make something; it’s another to give up entirely. There are always small improvements to be made, and they should be explored as though the NSA has focused its resources like a laser on you personally, but only implemented if they realistically help. Being paranoid doesn’t mean never admitting that the “bad guys” sometimes win.