The web-news-o-sphere (/., digg, reddit, etc.) is all abuzz today with the news that the infamous Kevin Mitnick has declared open source programs to be less secure than proprietary programs. In fact, his thesis is quite a bit more nuanced than that, but folks can hardly be blamed for wanting a sound bite...
There are certainly several potential articles here, including the state of the online-web-news-sphere thingy (OWNS?), and the obvious analogy between the legend of Kevin Mitnick and the legendary figures of the "Wild West". However, when a friend pointed out semi-publicly that Mitnick's statement seems obviously true to him, and to many others, I thought maybe a few words from "around the block" on the direct question might be in order. Besides, I'd already typed most of them into an email, and I needed a blog entry for tonight.
I think it's safe to say that there is little actual
knowledge out there about "how secure" code is, or even what
that means. The security expert's view is that talking
about the security of code in isolation from such important
things as security requirements, validation and verification
procedures for the code, and the context in which the code
operates is kind of silly. Whether my code is extensively
tested, whether it has been through formal inspection and
proofs of correctness, whether it's going to be deployed in
a video game or an ATM, whether it will run on a machine in
a locked vault or as a web service: all of these things are
probably more important than whether the source code is
freely available to attackers.
Worse yet, even taking the narrow view, as Kevin Mitnick
does, that code security is about buffer overflows and null
pointer dereferences (both of which can be largely fixed,
BTW, by switching to decent languages), there is little
known about the security of existing code, especially
proprietary code. Testing is a lousy way to discover these
problems, as Mitnick himself points out; what he fails to
point out is that the techniques we have for discovering
them in open source C/C++ code are not worlds better,
although academic programming geeks like myself are working
on this problem (keyword of the week: abstract
interpretation).
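To make that last keyword a little less mysterious, here's a toy sketch of abstract interpretation over the interval domain (my own invented example, not any real analyzer): instead of running the program on concrete values, we track a [lo, hi] range for each variable and ask whether an array index could ever escape the buffer's bounds.

```python
# Toy interval-domain abstract interpretation: every variable is
# approximated by a (lo, hi) range, and operations are evaluated on
# ranges rather than concrete values.

def interval_add(a, b):
    # Adding two ranges: the result can be as small as lo+lo
    # and as large as hi+hi.
    return (a[0] + b[0], a[1] + b[1])

BUF_LEN = 4
i = (0, 3)        # suppose the analysis knows: 0 <= i <= 3
offset = (0, 5)   # but an attacker-influenced offset can reach 5

idx = interval_add(i, offset)   # idx is somewhere in [0, 8]

# If the upper bound of idx can reach BUF_LEN, the access buf[idx]
# *might* overflow, and the analyzer reports it -- without ever
# executing the code or needing a triggering test case.
if idx[1] >= BUF_LEN or idx[0] < 0:
    print("possible out-of-bounds access: index in", idx)
```

The price of this soundness is imprecision: the analyzer may warn about overflows that can't actually happen, which is much of what makes the research problem hard.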
Having said all that, I think that an important security
idea is Kerckhoffs's Assumption from cryptography: always
assume that the attacker has complete information about the
system under attack. "Security through obscurity is no
security at all." See the Wikipedia article on this topic
for a nice summary.
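As a minimal illustration (my own sketch, not anything Mitnick discusses): the one-time pad is the textbook case of Kerckhoffs's Assumption, because the XOR algorithm below is completely public and the security lives entirely in the key.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each plaintext byte with a key byte.
    # The algorithm is fully public; only the key is secret.
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"meet at dawn"
key = secrets.token_bytes(len(msg))  # random key, used exactly once
ct = xor_cipher(msg, key)

# Knowing the algorithm gains the attacker nothing without the key:
# with a fresh random key, every plaintext of this length is equally
# consistent with the ciphertext.
assert xor_cipher(ct, key) == msg
```

A system designed this way loses nothing when its source leaks; a system whose security depends on the attacker never reading the code has already lost, for the reasons in the next paragraph.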
The point of introducing Kerckhoffs's
Assumption in addressing Mitnick's argument should be clear. Source code to most of the
software (at least on Windows) that attackers care about is
readily available on P2P warez sites. For more obscure
programs, there are a lot of potential ways to get the
source. How many companies can be confident that all of
their employees would turn down the princely sum of $500 to
smuggle out a copy? How many companies are confident that
their networks are secure against attack? How many
companies are sure that they know the provenance of all
their code (after all, insecure code coming in is just as bad
as insecure code leaking out)?
The open source community operates with Kerckhoffs's
Assumption enabled. One consequence of this is that
it is much easier to find and verify an exploit when it is
used. A major reason why we know little about the security
of proprietary code is that the maleficent know more about
that code than we do.
Our ignorance about software security should, in my opinion and that of others, be a cause for alarm. I believe that our skill in securing software-intensive systems will continue to increase. I believe that open source is a key tool in achieving this goal.