Category archive: Classifier Security Model
Classifying Vehicles
Security is a classification problem: Security mechanisms, or combinations of mechanisms, need to distinguish that which they should allow to happen from that which they should deny. Two aspects complicate this task. First, security mechanisms often only solve a proxy problem. Authentication mechanisms, for example, usually distinguish some form of token – passwords, keys, sensor input, etc. – rather than the actual actors. Second, adversaries attempt to shape their appearance to pass security mechanisms. To be effective, a security mechanism needs to cover these adaptations, at least the feasible ones.
An everyday problem illustrates this: closing roads for some vehicles but not for others. As a universal but costly solution one might install retractable bollards, give the drivers of permitted vehicles the means to operate them, and prosecute abuse. This approach is very precise, because classification rests on an artificial feature designed solely for security purposes.
Simpler mechanisms can work sufficiently well if (a) intrinsic features of vehicles are correlated with the desired classification well enough, and (b) modification of these features is subject to constraints so that evading the classifier is infeasible within the adversary model.
Bus traps and sump busters classify vehicles by size, letting lorries and buses pass while stopping common passenger cars. The real intention is to classify vehicles by purpose and operator, but physical dimensions happen to constitute a sufficiently good approximation. Vehicle size correlates with purpose. The distribution of sizes is skewed; there are many more passenger cars than buses, so keeping even just most of them out does a lot. Vehicle dimensions do not change on the fly, and are interdependent with other features and requirements. Although a straightforward way exists to defeat a bus trap – get a car that can pass – this is too expensive for most potential adversaries relative to their possible gain from the attack.
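To make the proxy idea a bit more tangible, here is a minimal sketch of a bus trap as a threshold classifier over a single intrinsic feature; the Vehicle class, the track-width feature, and the 200 cm threshold are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    purpose: str           # what the policy actually cares about
    track_width_cm: float  # the physical feature the bus trap can "see"

def policy_allows(vehicle: Vehicle) -> bool:
    """The intended policy: only buses and lorries may pass."""
    return vehicle.purpose in {"bus", "lorry"}

def bus_trap_allows(vehicle: Vehicle, min_track_width_cm: float = 200.0) -> bool:
    """The mechanism: a threshold classifier on a proxy feature.
    Wide-tracked vehicles straddle the pit; narrow ones cannot pass."""
    return vehicle.track_width_cm >= min_track_width_cm

# Purpose and size correlate, and track width is hard to change on the fly,
# so the proxy classifier approximates the policy -- but not perfectly.
for v in [Vehicle("bus", 215.0), Vehicle("passenger car", 150.0), Vehicle("wide off-roader", 205.0)]:
    print(v.purpose, "policy:", policy_allows(v), "trap:", bus_trap_allows(v))
```

The last sample shows the residual error: a vehicle that satisfies the proxy without satisfying the policy, which is exactly the kind of evasion the adversary model has to price in.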
The Key-Under-the-Doormat Analogy Has a Flaw
The crypto wars are back, and with them the analogy of putting keys under the doormat:
… you can’t build a backdoor into our digital devices that only good guys can use. Just like you can’t put a key under a doormat that only the FBI will ever find.
(Rainey Reitman: An Open Letter to President Obama: This is About Math, Not Politics)
This is only truthy. The problem of distinguishing desirable from undesirable interactions to permit the former and deny the latter lies indeed at the heart of any security problem. I have been arguing for years that security is a classification problem; any key management challenge reminds us of it. I have no doubt that designing a crypto backdoor that only law enforcement can use, and only for legitimate purposes, or any sufficiently close approximation, is a problem we will remain far from solving for the foreseeable future.
However, the key-under-the-doormat analogy misrepresents the consequences of not putting keys under the doormat, or at least does not properly explain them. Unlike (idealized) crypto, our houses and apartments are not particularly secure to begin with. Even without finding a key under the doormat, SWAT teams and burglars alike can enter with moderate effort. This allows legitimate law enforcement to take place, at the cost of a burglary (and similar) risk.
Cryptography can be different. Although real-world implementations often have just as many weaknesses as the physical security of our homes, cryptography can create situations where only a backdoor would allow access to plaintext. If all we have is a properly encrypted blob, there is little hope of finding out anything about its plaintext. This does not imply we must have provisions to avoid that situation no matter what the downsides are, but it does contain a valid problem statement: How should we regulate technology that has the potential to reliably deny law enforcement access to certain data?
The answer will probably remain the same, but acknowledging the problem makes it more powerful. The idea that crypto is beyond negotiation is fundamentalist and therefore wrong. Crypto must be negotiated about, and all objective evidence speaks in favor of strong crypto.
OMG, public information found world-readable on mobile phones
In their Black Hat stage performance, employees of a security company showed how apps on certain mobile phones can access fingerprint data if the phone has a fingerprint sensor. The usual discussions ensued about rotating your fingerprints, biometrics being a bad idea, and biometric features being usernames rather than passwords. But was there a problem in the first place? Let’s start from scratch, slightly simplified:
- Authentication is about claims and the conditions under which one would believe certain claims.
- We need authentication when an adversary might profit from lying to us.
- Example: We’d need to authenticate banknotes (= pieces of printed paper issued by or on behalf of a particular entity, usually a national or central bank) because adversaries might profit from making us believe a printed piece of paper is a banknote when it really isn’t.
- Authentication per se has nothing to do with confidentiality and secrets, as the banknotes example demonstrates. All features that we might use to authenticate a banknote are public.
- What really matters is effort to counterfeit. The harder a feature or set of features is to reproduce for an adversary, the stronger it authenticates whatever it belongs to.
- Secrets, such as passwords, are only surrogates for genuine authenticating features. They remain bound to an entity only for as long as any adversary remains uncertain about their choice from a vast space of possible values.
- Fingerprints are neither usernames nor passwords. They are (sets of) biometric features. Your fingerprints are as public as the features of a banknote.
- We authenticate others by sets of biometric features every day, recognizing colleagues, friends, neighbours, and spouses by their voices, faces, ways of moving, and so on.
- We use even weaker (= easier to counterfeit) features to authenticate, for example, police officers. If someone is wearing a police uniform and driving a car with blinkenlights on its roof, we’ll treat this person as a police officer.
- As a side condition for a successful attack, the adversary must not only be able to counterfeit authenticating features but must also go through an actual authentication process.
- Stolen (or guessed) passwords are so easy to exploit on the Internet because the Internet does little to constrain their abuse.
- Attacks against geographically dispersed fingerprint sensors do not scale in the same way as Internet attacks.
Conclusion: Not every combination of patterns-we-saw-in-security-problems makes a security problem. We leave fingerprints on everything we touch; they never were and never will be confidential.
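To make the effort argument above explicit, a toy model might look like the sketch below; the feature names and effort numbers are invented, and the only point is that authentication strength derives from counterfeiting effort and from the constraints on actually exercising an attack.

```python
# Toy model: authentication strength as counterfeiting effort.
# All names and numbers are made up for illustration.

COUNTERFEIT_EFFORT = {
    "online_password_guess": 1,             # cheap, and the Internet lets it scale
    "police_uniform_and_car": 50,
    "fingerprint_replica_at_sensor": 500,   # includes being physically at the device
    "banknote_security_features": 5000,
}

def attack_succeeds(required_features, adversary_budget):
    """All listed features must be counterfeited, so the efforts add up."""
    total_effort = sum(COUNTERFEIT_EFFORT[f] for f in required_features)
    return adversary_budget >= total_effort

print(attack_succeeds(["online_password_guess"], adversary_budget=10))           # True
print(attack_succeeds(["fingerprint_replica_at_sensor"], adversary_budget=10))   # False
```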
Learning from History
Everyone knows the story of Clifford Stoll and the West German KGB hackers (see the video below) in the late 80s. Does this history teach us something today? What strikes me as I watch this documentary again is the effort ratio between attackers and defenders. To fight a small adversary group, Stoll invested considerable effort, and from some point on involved further people and organizations in the hunt. In effect, once they had been detected, the attackers were on their way to being overpowered and apprehended.
Today, we take more organized approaches to security management and incident response. However, at the same time we try to become more efficient: we want to believe in automated mechanisms like data leakage prevention and policy enforcement. But these mechanisms work on abstractions – they are less complicated than actual attacks. We also want to believe in preventive security design, but soon find ourselves engaged in an eternal arms race as our designs never fully anticipate how attackers adapt. Can procedures and programs be smart enough to fend off intelligent attackers, or does it still take simply more brains on the defender’s than on the attacker’s part to win?
(YouTube)
Redefining a Security Problem to Fit Mechanism Capabilities
Bruce Schneier dug out this little gem: Forbidden Spheres. A nuclear weapons lab, so the (apparently unconfirmed) story goes, classified all spherical objects as confidential, rather than just those related to nuclear weapons design. In the security-as-classification paradigm this makes a lot of sense. The desired security policy is to keep everything related to weapons design confidential. This includes objects that might give away information, which one will naturally find in an R&D lab. Enforcing this policy, however, requires either comprehensive tracking of all objects, or complicated considerations to decide whether an object is classified as confidential under the policy or not. But a subset of the objects under consideration has a common, easy-to-detect property: they are spherical. The classification required as a prerequisite for policy enforcement becomes much simpler if one considers only this feature. The classification also becomes less perfect: there are classification errors. However, if it’s all about nuclear spheres, then the simplified classifier errs systematically towards the safe side, erroneously classifying innocuous spherical objects as confidential. As long as this doesn’t disturb the intended operations of the lab, it may well be acceptable. This approach would break down if an adversary, or an accident, could easily change relevant shapes without defeating their purpose.
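A minimal sketch with an invented object model shows how the cheap proxy errs only toward the safe side, as long as the relevant objects are indeed spherical:

```python
# "Forbidden spheres" as a simplified classifier. The object model is invented;
# the point is only the direction of the classification error.

from dataclasses import dataclass

@dataclass
class LabObject:
    shape: str             # easy to observe
    weapons_related: bool  # expensive to determine -- what the policy really asks

def policy_classifies_confidential(obj: LabObject) -> bool:
    """The intended policy: everything related to weapons design is confidential."""
    return obj.weapons_related

def simplified_classifier(obj: LabObject) -> bool:
    """The cheap proxy: treat every sphere as confidential."""
    return obj.shape == "sphere"

# As long as all weapons-related objects of interest are spheres, the proxy only
# over-classifies innocuous spheres; it never releases a weapons-related one.
objects = [
    LabObject("sphere", weapons_related=True),    # correctly confidential
    LabObject("sphere", weapons_related=False),   # false positive -- safe-side error
    LabObject("cube",   weapons_related=False),   # correctly unclassified
]
for obj in objects:
    print(obj.shape, policy_classifies_confidential(obj), simplified_classifier(obj))
```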
Causal Insulation
I just came across an essay by Wolter Pieters that complements my 2009 NSPW paper (mentioned here and here in this blog before) in style and content. In The (social) construction of information security (author’s version as PDF), Pieters discusses security in terms of causal insulation. This notion has its roots in Niklas Luhmann’s sociological theory of risk. Causal insulation means that to make something secure, one needs to isolate it from undesired causes, in the case of security from those that attackers would intentionally produce. On the other hand, some causes need to be allowed, as they are necessary for the desired functioning of a system.
I used a similar idea as the basis of my classifier model. A system in an environment creates a range of causalities—cause-effect relationships—to be considered. A security policy defines which of the causes are allowed and which ones are not, splitting the overall space into two classes. This is the security problem. Enforcing this policy is the objective of the security design of a system, its security mechanisms and other security design properties.
A security mechanism, modeled as a classifier, enforces some private policy in a mechanism-dependent space, and maps the security problem to this private space through some kind of feature extraction. In real-world scenarios, any mechanism is typically less complex than the actual security problem. The mapping implies loss of information and may be inaccurate and partial; as a result, the solution of the security problem by a mechanism or a suite of mechanisms becomes inaccurate even if the mechanism works perfectly well within its own reference model. My hope is that the theory of classifiers lends us some conceptual tools to analyze the degree and the causes of such inaccuracies.
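Roughly, this structure could be sketched as follows; the events, features, and policies below are invented placeholders, meant only to show how a lossy feature extraction produces classification errors even when the mechanism is flawless within its own reference model:

```python
# Sketch of the classifier model: a policy over world events, a lossy feature
# extraction into a mechanism's private space, and a private policy over that space.

from typing import Iterable

Event = dict      # a cause/interaction in the world model
Features = tuple  # the mechanism's private, abstracted view of an event

def policy(event: Event) -> bool:
    """Ground truth: which causes the system should allow."""
    return event["actor"] == "owner" and event["action"] in {"read", "write"}

def extract_features(event: Event) -> Features:
    """Lossy mapping into the mechanism's space -- here, the actor is dropped."""
    return (event["action"],)

def mechanism(features: Features) -> bool:
    """The private policy the mechanism actually enforces."""
    return features[0] in {"read", "write"}

def error_rate(events: Iterable[Event]) -> float:
    events = list(events)
    wrong = sum(policy(e) != mechanism(extract_features(e)) for e in events)
    return wrong / len(events)

world = [
    {"actor": "owner",    "action": "read"},
    {"actor": "attacker", "action": "read"},   # misclassified: actor information was lost
    {"actor": "owner",    "action": "format"},
]
print(error_rate(world))  # > 0 although the mechanism is "correct" in its own space
```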
What my model does not capture very well is the fact that any part of a system not only classifies causalities but also defines new ones; I’m still struggling with this. I also struggle with practical applicability, as the causality model for any serious example quickly explodes in size.
Using an Inappropriate Classifier As a Security Mechanism
Zed Shaw has a story to tell about ACLs (Access Control Lists) as a common security mechanism and how they are incapable of modeling actual regulatory requirements:
(Vimeo: The ACL is Dead)
It’s not really a talk about ACLs; it’s about how companies work and how to survive and stay sane inside enterprises. I’ll focus here, however, on the technical issue that he uses as a hook.
He poses the technical problem as »ACLs not being Turing-complete«. According to my favorite abstraction of security mechanisms, the classifier model, ACL access control schemes are a type of classifier that does not fit the problem. All security mechanisms distinguish deny from allow, just in different sets of entities and with different boundaries between the two subsets. A low complexity classifier can handle only subsets with a simple boundary between them—most entities have only neighbors of the same class, and those near the boundary have other-class neighbors only in one direction—whereas a complex classifier can model more complex class distinctions. The most complex classification would be a random assignment of classes to entities.
Two things (at least) affect the complexity that a classifier can handle: classifier design and feature extraction. Classifier design defines the boundaries that a classifier can model. Feature extraction defines the parameters or dimensions available to the classifier, the degree of abstraction with which the classifier sees the world. Authentication, for instance, has a high degree of abstraction: it can distinguish entities but nothing else. Access control is richer in the parameters it uses, including, besides the identity of entities, also properties of objects and actions. Yet, as the talk illustrates, these dimensions are not sufficient to properly express regulatory requirements. Whether this is a problem of the mechanism or a problem of the requirements I leave for the reader to ponder.
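A minimal sketch of this expressiveness gap, using an invented separation-of-duties rule as a stand-in for the kind of regulatory requirement the talk is concerned with:

```python
# The ACL's feature space is (role, object type, action); the regulatory-style
# rule also needs a relationship between subject and object -- a dimension the
# ACL's feature extraction simply does not provide. Names are invented.

ACL = {
    ("accountant", "payment", "create"),
    ("manager",    "payment", "create"),
    ("manager",    "payment", "approve"),
}

def acl_allows(role, obj, action):
    return (role, obj["type"], action) in ACL

def requirement_allows(subject, obj, action):
    """Separation of duties: nobody may approve a payment they created themselves."""
    if action == "approve" and obj["created_by"] == subject["name"]:
        return False
    return acl_allows(subject["role"], obj, action)

alice = {"name": "alice", "role": "manager"}
payment = {"type": "payment", "created_by": "alice"}

print(acl_allows(alice["role"], payment, "approve"))  # True: within the ACL's feature space
print(requirement_allows(alice, payment, "approve"))  # False: needs a feature the ACL never sees
```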
Security as a classification problem
Security requires that one can tell the bad guys and the good guys¹ apart. Security is thus, at least in part, a classification problem. Different approaches to security use different types of classifiers. The Israeli profiling described in the video above essentially implements one particular decision tree. There is nothing particularly good or bad about this particular tree compared to others, or to entirely different ways of doing the job. What matters in the first place is that the classifier is either correct—it never confuses good and bad—or that it is at least biased in the right direction—it may misclassify good guys as bad guys², but not bad guys as good guys. A secondary consideration is efficiency. The Israeli approach to airport security optimizes efficiency for a particular threat model. A toy sketch of such a direction-biased decision tree follows below the footnotes.
¹ Or other entities. Security classification may work on objects, actions, situations, or really any combination of features that might matter.
² Assuming the enforcement stage of the mechanism does not cause permanent damage to entities classified as bad.
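The following toy decision tree is not a description of any real screening procedure; it merely illustrates what being biased in the right direction means: when in doubt or short of information, the tree sends passengers to further screening rather than past it.

```python
# Invented decision tree with a one-sided error: misclassifications only ever
# lead to additional screening, never to waving someone through.

def screen(passenger: dict) -> str:
    if passenger.get("documents_consistent") is not True:
        return "secondary screening"
    if passenger.get("answers_consistent") is not True:
        return "secondary screening"
    if passenger.get("luggage_anomaly") is not False:
        return "secondary screening"
    return "proceed"

print(screen({"documents_consistent": True, "answers_consistent": True, "luggage_anomaly": False}))  # proceed
print(screen({"documents_consistent": True, "luggage_anomaly": False}))  # secondary screening (missing information)
```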
Swiss Cheese Security
I’m off for the New Security Paradigms Workshop in Oxford, where I will present what I currently call the Swiss Cheese security policy model. My idea is to model security mechanisms as classifiers, and security problems in a separate world model as classification problems. In such a model we can (hopefully) analyze how well a mechanism or a combination of mechanisms solves the actual problem. NSPW is my first test-driving of the general idea. If it survives the workshop I’m going to work out the details. My paper isn’t available yet; final versions of NSPW papers are to be submitted a few weeks after the workshop.