Tag Archives: Security Engineering

An Exercise in Lateral Thinking

A year ago, in a slightly heated debate on secure software engineering, I used a photography analogy to make my point. The precise background of this debate does not matter; it should suffice to say that one party – “us” – opined that security engineering is difficult and complicated, while the other party – “them” – held the view that average software developers need just a couple of tools and examples to improve the results of their work, security-wise. Both sides had a point, considering their respective backgrounds, but they spoke of requirements while we spoke of the difficulty of fulfilling these requirements. To explain my position on the issue, I transferred the problem from security engineering into a totally unrelated field, photography. They seemed to expect they could turn average people into reasonably good photographers by handing them a highly automated point-and-shoot camera and a few examples of great photos. We ended the quarrel agreeing to disagree.

The train of thought thus started led to my latest paper Point-and-Shoot Security Design: Can We Build Better Tools for Developers? which I finished a few weeks ago, after having presented and discussed an earlier version at this year’s New Security Paradigms Workshop. In this paper I explore the photography analogy in general, interpreting (some aspects of) photography as visual engineering, and the point-and-shoot analogy of tool support in particular. The final version of the paper does not fully reflect its genesis as I moved the photography part, from which everything originates, into a separate section towards the end.

Describing in abstract terms different classes of properties that we can analyze and discuss in a photo, I develop the notion of property degrees, which I then transfer into security. Properties characterize objects, but they do so in different manners:

  • Microscopic properties characterize an object by its parts, and in terms that we can express and evaluate for each part in isolation. Taking a microscopic point of view, we describe a photo by its pixels and the security of a system by its security mechanisms and its defects.
  • Macroscopic properties characterize an object by the way it interacts with its surroundings. Macroscopic properties of a photo represent the reactions the photo evokes in the people viewing it, and the macroscopic security properties of a system characterize the reaction of a threat environment to the presence of this system.
  • In between, mesoscopic properties characterize the object in its entirety (as opposed to the microscopic view) but not its interaction with an environment (as opposed to macroscopic properties). We speak of mesoscopic properties if we discuss, for instance, the composition of a photo or the security of a system against a certain class of adversaries, considering their motivations and capabilities.
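For readers who prefer something concrete, the toy sketch below restates the three degrees in Python for a grayscale photo; all names and metrics are mine, not the paper's. A per-pixel predicate is microscopic, a whole-image statistic is mesoscopic, and anything involving viewers is macroscopic.

    from dataclasses import dataclass
    from typing import Callable, List

    Pixel = int  # grayscale value, 0..255

    def microscopic_brightness(pixels: List[Pixel]) -> List[bool]:
        # Microscopic: expressed and evaluated per pixel, in isolation.
        return [p > 128 for p in pixels]

    def mesoscopic_contrast(pixels: List[Pixel]) -> int:
        # Mesoscopic: a property of the photo in its entirety,
        # but still computed without reference to any environment.
        return max(pixels) - min(pixels)

    @dataclass
    class Viewer:
        # A stand-in for the environment: someone who reacts to the photo.
        react: Callable[[List[Pixel]], str]

    def macroscopic_reactions(photo: List[Pixel], audience: List[Viewer]) -> List[str]:
        # Macroscopic: defined only through interaction with an environment;
        # without an audience there is nothing to evaluate.
        return [v.react(photo) for v in audience]

    photo = [12, 200, 130, 90, 255, 3]
    critic = Viewer(react=lambda px: "dramatic" if max(px) - min(px) > 200 else "flat")
    print(microscopic_brightness(photo))           # [False, True, True, False, True, False]
    print(mesoscopic_contrast(photo))              # 252
    print(macroscopic_reactions(photo, [critic]))  # ['dramatic']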

Speaking of property degrees as three distinct classes is a simplification; one should really think of the property degree as a continuum and of the three classes as tendencies. In a rigorous definition, which my paper doesn’t attempt, we would likely end up calling all properties mesoscopic except for those at the ends of the interval.

The ultimate objective of photography and security engineering alike, I argue, is to shape the macroscopic properties of that which one creates. Any object has properties at all three degrees; to design something means to control these properties consciously and deliberately. To do that, one needs to control lower-degree properties in support of what one is trying to achieve. However, there are no simple and universal rules for how macroscopic properties depend on mesoscopic and microscopic properties. Figuring out these dependencies is a challenge that we leave to the artist. That’s necessary in art, but less desirable in security engineering.

Looking at some of the security engineering tools and techniques that we use today, I argue that security engineers enjoy just as much artistic freedom as photographers, although they shouldn’t. Most of our approaches to security design have a microscopic focus. The few mesoscopic and macroscopic tools we know, such as attack trees and misuse cases, are mere notations and provide little guidance to the security engineer using them. To do better, we have to develop ways of supporting macroscopic analysis and mesoscopic design decisions. Right now we are stuck in the microscopic world of security features and security bugs, unable to predict how well a security mechanism will protect us or how likely it is that a bug will be exploited in the wild.
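To illustrate what "mere notation" means, here is a minimal attack tree sketch in Python; the structure follows the common AND/OR convention, and the example and all names are invented. The notation lets us write the tree down and enumerate its leaves, but nothing in it tells the engineer which branch an adversary will actually take:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        # A goal and its decomposition. In an OR node any child achieves
        # the goal; in an AND node all children are required.
        goal: str
        kind: str = "OR"
        children: List["Node"] = field(default_factory=list)

    def leaves(node: Node) -> List[str]:
        # All the notation yields by itself: a list of elementary attack
        # steps. Which step is likely, cheap, or attractive it cannot say.
        if not node.children:
            return [node.goal]
        return [g for child in node.children for g in leaves(child)]

    steal = Node("steal the bicycle", "OR", [
        Node("defeat the lock"),
        Node("take what the lock does not hold", "AND", [
            Node("unscrew the wheels"),
            Node("strip saddle, lights, and gears"),
        ]),
    ])
    print(leaves(steal))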

Using photography as a model for security engineering is an intermediate impossible, a term coined by Edward de Bono for one aspect of lateral thinking. An intermediate impossible does not make much sense by itself, but serves as a stepping stone to something that might. In the case of point-and-shoot security design, it’s a double impossible, a) ignoring the boundary between art and engineering and, b) ignoring for a moment the adversarial relationships that we are so focused on and, simultaneously, so ignorant of in security. Out of it we get the universal notion of property degrees, and an application of this notion to the specific problems of security.

In a way, this work is a follow-up on my 2009 NSPW paper What Is the Shape of Your Security Policy? Security as a Classification Problem (mentioned here, here, and here). I based my considerations there on the notion of security policies, and later found it difficult to apply my ideas to examples without something bothering me. Security policies tend to become arbitrary when we understand neither what we are trying to achieve nor what it takes to achieve it. If you meticulously enforce a security policy, you still don’t have the slightest idea how secure you are in practice, facing an adversary that cares about your assumptions only to violate them. Property degrees don’t solve this problem, but maybe they make it a bit more explicit.

Safe and sorry

A common delusion in security engineering is the idea that one could secure a system by identifying items that need protection (assets), describing the ways in which they might be damaged (threats or attacks, which are not synonymous but often confused), and then implementing countermeasures or mitigations such that all, or the most common, or the most damaging threats are covered. The system thus becomes secure with respect to the threat model, or so the reasoning goes. This is the model underlying the Common Criteria, and it works fine as a descriptive model.

To give an example from everyday life, consider a bicycle as an asset. If your bicycle gets stolen (the threat), your damage is the value of the bicycle plus any collateral damage that the loss may cause you, such as coming late to an appointment, having to pay for a taxi or public transport instead of riding your bicycle, and having to go to the gym for a workout instead of getting one for free on your way to work. The typical countermeasure against this threat is locking the bicycle to a fence, pole, or other appropriate object. Locking your bicycle reduces the risk of it being stolen. What could possibly go wrong? Besides the obvious residual risk of your countermeasures not being strong enough, this could go wrong:

[Image: a bicycle frame locked to a fence, wheels and other parts stolen. “Safe and sorry” © 2012 Sven Türpe, CC-BY 3.0]

This (ex-)bicycle was and remains properly locked, and no vulnerability in the lock or in anything the lock depends on has been exploited. Yet somebody made a fortune stealing bicycle parts, and somebody else lost a bicycle to an attack. What’s the problem? The problem is the gross simplification in the asset-threat-countermeasure model, which neglects three important factors:

  1. Adaptive adversaries. A countermeasure does not oblige the adversary to stick to the original attack plan that the countermeasure targets. Security measures change the threat model: they don’t force the adversary to give up, they force the adversary to change strategy and tactics.
  2. The victim’s loss and the adversary’s gain are not necessarily the same. In the case of the bicycle above, the lock may reduce the attacker’s gain to the black market value of the removed parts. The victim’s loss is still one bicycle.
  3. Asset dependencies. Thinking of a bicycle as one asset is an abstraction. A bicycle is really a collection of assets—its parts—and an asset by itself. Such dependencies, nested assets in this case, are common. The sketch below puts all three factors together.
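Here is a toy model of the bicycle case in Python; the class, the numbers, and the black-market values are all invented for illustration, not taken from any threat modeling method:

    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class Asset:
        name: str
        value: int          # the victim's replacement cost
        black_market: int   # the adversary's gain if stolen
        parts: List["Asset"] = field(default_factory=list)  # nested assets

    def loot(asset: Asset, secured: Set[str]) -> List[Asset]:
        # An adaptive adversary leaves secured parts alone and removes
        # every reachable sub-asset the countermeasure does not cover.
        if asset.name in secured:
            return []
        if not asset.parts:
            return [asset]
        return [a for part in asset.parts for a in loot(part, secured)]

    bicycle = Asset("bicycle", value=600, black_market=200, parts=[
        Asset("frame", 150, 30),
        Asset("wheels", 200, 80),
        Asset("saddle, lights, and gears", 250, 60),
    ])

    # The "bicycle lock" secures exactly one sub-asset: the frame.
    taken = loot(bicycle, secured={"frame"})
    print([a.name for a in taken])             # ['wheels', 'saddle, lights, and gears']
    print(sum(a.black_market for a in taken))  # 140 -- the adversary's gain
    print(bicycle.value)                       # 600 -- the victim's loss: still one bicycle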

The bicycle lock, it turns out, is not really a bicycle lock, it’s a bicycle frame lock. It protects only one sub-asset of the bicycle, and an economically motivated adversary can make a gain that seems worth the risk without breaking the lock.

Prescriptive threat modeling—threat modeling done with the aim of finding a proper set of security features for a system—needs to take these issues into account. A good threat model anticipates changes in attacker behavior due to security measures. A good threat model considers not only the damage to the victim but also the gain of the adversary, as the latter is what motivates the adversary. And good security engineering is biased towards security, systematically overestimating adversary capabilities and underestimating the effect of security measures.
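As a minimal sketch of that last point, with every number invented: the adversary's calculus depends on their own gain and effort, not on the victim's loss, and it has to be re-run after every countermeasure.

    def worthwhile(adversary_gain: float, attack_cost: float) -> bool:
        # What motivates the adversary is their gain against their own
        # effort and risk; the victim's loss never enters this calculation.
        return adversary_gain > attack_cost

    # Re-evaluate the threat model after adding the frame lock:
    before = worthwhile(adversary_gain=200, attack_cost=20)  # steal the whole bicycle
    after  = worthwhile(adversary_gain=140, attack_cost=30)  # strip the unprotected parts
    print(before, after)  # True True -- the lock changed the attack, not the decision to attack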

Security Engineering vs. Targeted Attacks

In a follow-up blog post on Zalewski’s security engineering rant, Charles Smutz argues that security engineering cannot solve the problem of targeted attacks:

»Lastly, while it technically would be possible to engineer defenses that would be effective, very few people really want to live the resulting vault in fort knox, let alone pay for the construction.«

(SmuSec: Security Engineering Is Not The Solution to Targeted Attacks)

So what can security engineering do for us—and what can we do if we want to take reasonable precautions against targeted attacks?

P.S.: This new paper by Cormac Herley might be loosely related: The Plight of the Targeted Attacker in a World of Scale. I haven’t read it yet.

No Clue About Security

Michal Zalewski rants about security engineering, taking swipes at formal methods, risk management, and defect taxonomies. Not everyone likes that, but on all three points I consider the doubts and criticism well justified:

  • Real applications are too large and too short-lived for formal methods. Researchers enthusiastically report the successful formal verification of the seL4 microkernel (PDF). It comprises 8,700 lines of C and 600 lines of assembler. The JavaScript code that builds the editor into this blog probably has more lines than that. Incidentally, verifying the kernel took more effort than developing it.
  • Risk management sounds good and sensible at first. It stays that way exactly as long as one is content with the stock phrase of probability of occurrence times extent of damage. But ask for actual numbers, and risk management quickly falls apart. How likely is an attack? By how much does a security measure reduce the risk? We don’t know (see the sketch after this list).
  • Taxonomies, much like metrics, suffer from a lack of orientation towards a purpose. The finely differentiated defect categories of the CWE, for example, probably serve neither developers nor testers: the defects are classified neither by test strategies nor, except indirectly, by ways to avoid them.
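A minimal sketch of the risk management point in Python; every number below is pure invention, which is precisely the problem:

    def expected_loss(probability: float, damage: float) -> float:
        # The textbook formula: risk = probability of occurrence x extent of damage.
        return probability * damage

    # The formula is trivial; the inputs are not. What is the probability
    # of an attack? By how much does a given measure reduce it? Any number
    # we plug in is a guess:
    risk_without_measure = expected_loss(probability=0.30, damage=100_000)  # 30000.0
    risk_with_measure    = expected_loss(probability=0.10, damage=100_000)  # 10000.0
    print(risk_without_measure - risk_with_measure)  # the "risk reduction" -- of our guesses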

That we don’t really have much of a clue how this security business works was already my thesis at the Smartcard Workshop in February. So I wave a friendly hello across the Atlantic: the man is right (that is, I share his belief. :-))