Using an Inappropriate Classifier as a Security Mechanism

Zed Shaw has a story to tell about ACLs (Access Control Lists), a common security mechanism, and how they are incapable of modeling actual regulatory requirements:

(Vimeo: The ACL is Dead)

It’s not really a talk about ACLs; it’s really about how companies work and how to survive and stay sane inside an enterprise. I’ll focus here, however, on the technical issue that he uses as a hook.

He poses the technical problem as »ACLs not being Turing-complete«. According to my favorite abstraction of security mechanisms, the classifier model, ACL-based access control is a type of classifier that does not fit the problem. All security mechanisms distinguish deny from allow; they differ only in the set of entities they classify and in the boundary they draw between the two subsets. A low-complexity classifier can handle only subsets with a simple boundary between them: most entities have neighbors of their own class only, and entities near the boundary have other-class neighbors in one direction only. A complex classifier, by contrast, can model more intricate class distinctions. The most complex classification would be a random assignment of classes to entities.
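
To make the classifier view concrete, here is a minimal sketch in Python; the names and entries are mine, not from the talk. It treats a security mechanism as a function that assigns each request to the allow or deny class, with an ACL as the simplest such classifier: an enumerated allow set, deny being the default class.

```python
# A minimal sketch of the classifier view (illustrative names): a security
# mechanism is a function that assigns every request to one of two classes,
# allow or deny. An ACL is a particularly simple such classifier: its whole
# "model" is an enumerated set of allowed triples, and everything outside
# that set falls into the deny class by default.

from typing import NamedTuple

class Request(NamedTuple):
    subject: str    # who asks
    resource: str   # what they want to touch
    action: str     # what they want to do with it

acl = {
    Request("alice", "report.pdf", "read"),
    Request("alice", "report.pdf", "write"),
    Request("bob",   "report.pdf", "read"),
}

def classify(request: Request) -> bool:
    """The ACL as classifier: allow iff an entry matches, deny otherwise."""
    return request in acl

assert classify(Request("bob", "report.pdf", "read"))        # allow
assert not classify(Request("bob", "report.pdf", "write"))   # deny
```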

Two things (at least) affect the complexity that a classifier can handle: classifier design and feature extraction. Classifier design defines the boundaries that a classifier can model. Feature extraction defines the parameters or dimensions available to the classifier, and thus the degree of abstraction with which the classifier sees the world. Authentication, for instance, has a high degree of abstraction: it can distinguish entities but nothing else. Access control is richer in the parameters it uses; besides the identity of entities, it also considers properties of objects and actions. Yet, as the talk illustrates, these dimensions are not sufficient to properly express regulatory requirements. Whether this is a problem of the mechanism or a problem of the requirements I leave for the reader to ponder.
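
The sketch below illustrates the feature-extraction point; the names are hypothetical, and the made-up separation-of-duties rule merely stands in for a regulatory requirement. Each mechanism sees the world only through the tuple of features it extracts: authentication extracts identity alone, access control extracts subject, object, and action, while the rule depends on a history dimension that neither extractor provides.

```python
# A sketch of feature extraction (hypothetical names and rule): each
# mechanism perceives a request only through the features it extracts.

from dataclasses import dataclass, field

@dataclass
class Context:
    subject: str
    resource: str
    action: str
    # Prior (subject, resource, action) events; a dimension that plain
    # ACL entries do not capture.
    history: list = field(default_factory=list)

def authn_features(ctx: Context):
    # Authentication abstracts everything away except identity.
    return (ctx.subject,)

def authz_features(ctx: Context):
    # Access control sees more dimensions: subject, object, action.
    return (ctx.subject, ctx.resource, ctx.action)

def separation_of_duties(ctx: Context) -> bool:
    # A made-up regulatory-style rule: nobody may approve a request they
    # initiated themselves. It depends on history, so no classifier fed
    # only authz_features() can draw this boundary.
    initiated_by_self = any(
        s == ctx.subject and r == ctx.resource and a == "initiate"
        for (s, r, a) in ctx.history
    )
    return not (ctx.action == "approve" and initiated_by_self)

ctx = Context("alice", "payment-42", "approve",
              history=[("alice", "payment-42", "initiate")])
assert not separation_of_duties(ctx)  # denied: alice approves her own request
```

The point of the sketch is not the rule itself but the shape of the problem: to enforce such a rule, the mechanism would need a richer feature extractor, or the requirement would have to be restated in the dimensions the mechanism already has.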