Introduction to network trust establishment [interlude]
Part 1 of this series sketches out some basic structure underlying network trust establishment. I wanted to take a moment to comment on the motivating examples I've seen in the last couple of days…
As you know if you've been following this blog, we got spammed by an anonymous "commenter" this morning. Tonight, I hit the website for one of my favorite webcomics, Pinch of the Glass, only to find that the cool chatback box there had been hit for the second time by some spamming creep. Why can this happen? Why aren't identity and "login" permissions enough to prevent this kind of thing?
There are several problems with identity and login as a trust mechanism. Here are four:
To take care of our example problem, we really need to start by having some way that sites can individually gather evidence sufficient to decide whether to allow a given "anonymous" user to leave a blog comment / chatback message. Further, since the universe of potential commenters is large, it needs to be a mechanism that doesn't involve account creation or establishing and maintaining identity. The good news is that a security failure here is typically low-cost: garbage that has to be cleaned up, plus minor loss of reputation (see upcoming Part 2) by the trusting site.
For my case in particular, who do I want to be able to leave blog comments? Certainly anyone who has "an account" (identity information) on FOB. Note that a decent trust policy and mechanism would mean that these folks can leave a comment without having to log in. Certainly anyone who has an account on a site affiliated with me personally in any way. Broadening "has an account on a site": anyone who is trusted by a community affiliated with me personally is also welcome. There are others, perhaps; I'd have to think a bit.
In short, what is needed is a way for the Drupal code that runs this site to be able to acquire evidence of trust to execute the particular action of leaving a comment, and to be able to compute a trust decision for the action that is in accordance with a trust policy.
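To make this concrete, here is a minimal sketch of what "compute a trust decision for an action in accordance with a trust policy" might look like. This is purely illustrative, not SisterCred or Drupal code: the evidence kinds, the affiliation lists, and every name below are assumptions invented for the example.

```python
# Hypothetical sketch of policy-based trust evaluation.
# None of these names or structures come from SisterCred or Drupal.
from dataclasses import dataclass

# Assumed affiliation data; a real site would gather this from its own records.
AFFILIATED_SITES = {"fob.example", "friend.example"}
AFFILIATED_COMMUNITIES = {"recentchangescamp"}

@dataclass(frozen=True)
class Evidence:
    kind: str    # e.g. "account" or "community-trust"
    source: str  # the site or community vouching for the visitor

def may_comment(evidence: list[Evidence]) -> bool:
    """Evaluate the comment-posting policy against gathered evidence.

    The policy mirrors the prose above: an account on FOB or on any
    affiliated site suffices, as does being trusted by an affiliated
    community. Any single satisfied condition grants permission.
    """
    for item in evidence:
        if item.kind == "account" and item.source in AFFILIATED_SITES:
            return True
        if item.kind == "community-trust" and item.source in AFFILIATED_COMMUNITIES:
            return True
    return False
```

The key design point is that the decision function consumes *evidence* rather than a login session, so a visitor with an account elsewhere could comment without ever creating an account here; the open problem, of course, is acquiring that evidence trustworthily over the network.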
The obvious ways to achieve this goal involve some sort of cryptography. However, at Recent Changes Camp 2006, some folks convinced me that the right thing might be to start with a non-crypto mechanism and helped me to devise one. The result is a system we were calling SisterCred. Although at the moment I fear it is stillborn, I will try to revive it: it represents an important step in the war on bad security.
When the rest of this series eventually appears, it will contain a detailed discussion of reputation, roles, other trust evidence, policy description, policy evaluation and subsumption, and steps toward producing it all as open source. I'm looking forward to finding time to write it.