Category Archives: Property Degrees

Application Layer Snake Oil

TL;DR: The author thinks Snowden’s home security app, Haven, is snake oil regardless of the algorithms it uses. Operational security is at least as hard as cryptography and no app is going to provide it for you.

Bogus cryptography is often referred to as snake oil—a remedy designed by charlatans for the sole purpose of selling it to the gullible. Discussions of snake oil traditionally focused on cryptography as such and on technical aspects like the choice of algorithms, the competence of their designers and implementers, or the degree of scrutiny a design and its implementation received. As a rule of thumb, there is a set of algorithms and protocols widely accepted as probably secure according to current public knowledge, and any poorly motivated deviation from this mainstream raises eyebrows.

However, a reasonable choice of encryption algorithms and crypto protocols alone does not guarantee security. The overall application in which they serve as building blocks needs to make sense as well in light of the threat models this application purports to address. Snake oil is easy to mask at this level. While most low-level snake oil can be spotted by a few simple patterns, the application layer calls for a discussion of security requirements.

Enter Haven, the personal security app released by Freedom of the Press Foundation and Guardian Project and associated in public relations with Edward Snowden. Haven turns a smartphone into a remote sensor that alerts its user over confidential channels about activity in its surroundings. The intended use case is apparently to put the app on a cheap phone and leave this phone wherever one feels surveillance is needed; the user’s primary phone will then receive alerts and recordings of sensed activity.

Haven is being touted as “a way to protect their [its users] personal spaces and possessions without compromising their own privacy.” The app allegedly protects its users against “the secret police making people disappear” and against evil maid attacks targeting their devices in their absence. To this end, Haven surveils its surroundings through the smartphone’s sensors for noise, movement, etc. When it detects any activity, the app records information such as photos through the built-in camera and transmits this information confidentially over channels like the Signal messenger and Tor.

Alas, these functions together create a mere securitoy that remains rather ineffective in real applications. The threat model is about the most challenging one can think of short of an alien invasion. A secret police that can make people disappear and get away with it is close to almighty. They will not go through court proceedings to decide who to attack and they will surely not be afraid of journalists reporting on them. Where a secret police makes people disappear there will be no public forum for anyone to report on their atrocities. Just imagine using Haven in North Korea—what would you hope to do, inside the country, after obtaining photos of their secret police?

Besides strongly discouraging your dissemination of any recordings, a secret police can also evade detection through Haven. They might, for example, jam wireless signals before entering your home or hotel room so that your phone has no chance of transmitting messages to you until they have dealt with it. Or they might simply construct a plausible pretense, such as a fire alarm going off and agents-dressed-as-firefighters checking the place. Even if they fail to convince you, you will not be able to react in any meaningful way to the alerts you receive. Even if you were close enough to do anything at all, you would not physically attack agents of a secret police that makes people disappear, would you?

What Haven is trying to sell is the illusion of control where the power differential is clearly in favor of the opponent. Haven sells this illusion to well-pampered Westerners and exploits their lack of experience with repression. To fall for Haven you have to believe the premise that repression means a secret police in an otherwise unchanged setting. This premise is false: a secret police making people disappear inevitably exists in a context that limits your access to institutions like courts or media and the amount of support you can expect from them. Secret communication as supported by Haven does not even try to address this problem.

While almost everyone understands the problems with low-level snake oil and how to detect and avoid it, securitoys and application layer snake oil continue to fool (some) journalists and activists. Here are a few warning signs:

  1. Security is the only or primary function of a new product or service. Nothing interesting remains if you remove it.
  2. The product or service is being advertised as a tool to evade repression by states.
  3. The threat model and the security goals are not clearly defined and there is no sound argument relating the threat model, security goals, and security design.
  4. Confidentiality or privacy are being over-emphasized and encryption is the core security function. Advertising includes references to “secure” services like Tor or Signal.
  5. The product or service purports to solve problems of operational security with technology.

When somebody shows you a security tool or approach, take the time to ponder how contact with the enemy would end.

Confidentiality is overrated

Is security about keeping secrets? Not really, although it seems so at first glance. Perhaps this mismatch between perception and reality explains why threats are mounting in the news without much impact on our actual lives.

Confidentiality comes first in infosec’s C/I/A (confidentiality, integrity, availability) trinity. Secrets leaking in a data breach are the prototype of a severe security problem. Laypeople even use encryption and security synonymously. Now that the half-life of secrets is declining, are we becoming less and less secure?

Most real security problems are not about keeping secrets; they are about integrity of control. Think, for example, of the money in your wallet. What matters to you is control over this money, which should abide by certain rules. It’s your money, so you should remain in control of it until you voluntarily give up your control in a transaction. The possibility of someone else taking control of your money without your consent, through force or trickery, is something to worry about and, if such others exist, a real security problem. Keeping the contents of your wallet out of sight is, in contrast, only a minor concern. Someone peeking into your wallet without taking anything is not much of a threat. Your primary security objective is to remain in control of what is yours most of the time and to limit your losses across the exceptional cases when you are not.

This security objective remains just the same as you move on from a wallet to online banking. What matters most is who controls the balance in which way. In a nutshell, only you (or others with your consent), knowingly and voluntarily, should be able to withdraw money or transfer it from your account; you should not be able to increase your balance arbitrarily without handing in actual money; others should be able to transfer any amount to your account; exceptions apply if you don’t pay your debts.
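The control rules above can be sketched as invariants in code. This is a hypothetical toy model for illustration only; the class, its methods, and the numbers are made up, and real banking systems involve authentication, atomicity, and auditing far beyond this:

```python
class Account:
    """Toy model of integrity of control over an account balance."""

    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def withdraw(self, actor, amount):
        # Only the owner (or someone acting with the owner's consent)
        # may reduce the balance.
        if actor != self.owner:
            raise PermissionError("only the owner may withdraw")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def deposit(self, actor, amount):
        # Anyone may transfer money *to* the account, but nobody, not
        # even the owner, may increase the balance without handing in
        # actual money.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount


acct = Account("alice", balance=100)
acct.deposit("bob", 50)       # others may pay in: allowed
acct.withdraw("alice", 30)    # owner withdraws: allowed
try:
    acct.withdraw("mallory", 20)  # third-party withdrawal: rejected
except PermissionError:
    pass
print(acct.balance)  # → 120
```

Note that confidentiality appears nowhere in these rules; everything that matters is expressed as a constraint on who may change the state in which way.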

Confidentiality is only an auxiliary objective. We need confidentiality due to vulnerabilities. Many security mechanisms rely on secrets, such as passwords or keys, to maintain integrity. This is one source of confidentiality requirements. Another is economics: Attackers will spend higher amounts on valuable targets, provided they can identify them. If there is a large number of possible targets but only a few are really valuable, one might try to make the valuable target look like all the others so that attackers have to spread at least part of their effort across many candidate targets. However, strong defenses are still needed in case attackers identify the valuable target in whichever way, random or systematic.
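The economic argument above can be made concrete with a back-of-the-envelope calculation. The numbers and the function are illustrative assumptions, not from the text; they merely show how indistinguishability multiplies the attacker's expected effort:

```python
def expected_attack_cost(cost_per_target, n_candidates):
    """Expected cost to find the one valuable target when it is
    indistinguishable from n_candidates - 1 decoys and the attacker
    probes candidates in uniformly random order.

    With random probing, the attacker expects to try
    (n + 1) / 2 candidates before hitting the valuable one.
    """
    return cost_per_target * (n_candidates + 1) / 2

# If the valuable target is identifiable, the attacker pays once:
print(expected_attack_cost(1000, 1))    # → 1000.0
# Hidden among 99 look-alikes, the expected cost grows fiftyfold:
print(expected_attack_cost(1000, 100))  # → 50500.0
```

This is exactly why confidentiality here is auxiliary: it raises the attacker's cost, but it does not replace the defenses needed once the valuable target has been identified.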

The better we maintain integrity of control, the more secure we are. Systems remain predictable and do what we want despite the presence of adversaries. Confidentiality is only a surrogate where we do not trust our defenses.

Something Might Happen Tomorrow

»In any case, I want to draw attention to this problem: needs for security are structurally insatiable. There is no remedy against the argument ‘something might happen tomorrow.’«

— Winfried Hassemer in a debate with Wolfgang Schäuble (via Telepolis)

It would be short-sighted, however, to apply this insight, and the conclusion that limits must be set, only to state security laws, agencies, and projects. The sentence holds in all directions and for all security needs, including, for example, the call for more data protection, more encryption, less NSA, and so on.

Something might happen tomorrow. That is not sufficient reason to forgo the blessings of the Internet age, even if they are called Google, Facebook, or cloud computing. It is not even sufficient reason to behave differently, say, to avoid American service providers, to encrypt more often, or to route data packets differently.

Something might happen tomorrow. Doing something about it pays off only if it noticeably reduces one’s individual risk and the effort is proportionate to the risk reduction. That is why I permit myself to take note of the Snowden revelations with interest but not to react to them any further in my everyday behavior. I have no indication whatsoever that the NSA affects my life, so it is not worthwhile to take individual countermeasures either.

Half the Truth

Word is slowly getting around that microscopic security considerations, while not useless, say little on their own. Whether data is encrypted or not, whether a system uses standard mechanisms or relies on other constraints, or how a single specimen from an ecosystem reacts to a specific attack technique: from this information alone we cannot derive how secure or insecure we are in the end. The effect of security measures is relative to the application and threat context; anyone who wants to discuss security seriously must address this context.

Two blog posts worth reading on current news illustrate this:

  • In Wahl, Ausweispflicht und Wahlfälschung, Hanno Zulla sketches why it is entirely normal and no major problem to be allowed to vote without an ID check. Anyone who looks at the overall system will be able to follow his argument.
  • Volker Weber pokes fun at the reports about successfully attacked fingerprint sensors in the iPhone and finds a neat analogy: Sensation: Glas ist nicht sicher.

Both cases rest on a checklist mentality: only that which contains mechanisms X, Y, and Z with properties A, B, and C counts as secure. We really should know better, at the latest since records on plastic cards prevailed over all sorts of “secure” schemes for paying on the Internet.

P.S.: A more nuanced take on attackable fingerprint sensors.

P.P.S.: For most pairings of target instance and attacker, the most valuable thing about an iPhone is likely the iPhone itself, not the data on it.

Sketchifying and the Instagram of Security Design

About a year ago, when I received the review comments on Point-and-Shoot Security Design (discussed in this blog before), I was confronted with a question I could not answer at that time. One reviewer took my photography analogy seriously and asked:

What is the Instagram of information security?

Tough one, the more so as I never used Instagram. But the question starts making sense in the light of an explanation that I came across recently:

»Ask any teenager if they would want to share multi-photo albums on Instagram, and they wouldn’t understand you. The special thing about Instagram is that they focus on one photo at a time. Every single photo is a piece of handcrafted excellence. When you view an Instagram photo, you know the photo is the best photo of many photos. The user chose to upload that specific photo and spent a good amount of time picking a filter and editing it.«

(The Starbucks Theory — Thoughts on creativity)

One of the points I made in my paper was that design requires design space exploration before refinement and caring about details. Even if one starts to consider security early in a development process, one may still end up bolting security on rather than designing it in if one lets non-security requirements drive the design process and simply looks for security mechanisms to add to the design. One should rather, I theorized, actively search the security design space for solution candidates and evaluate them against threat models to identify viable solutions to the security problems an operational environment will be posing. Designing a secure system is not in the first place about defining a security policy or guaranteeing certain formal, microscopic security properties. Security design is rather about shaping the behavior of adversarial actors such that the resulting incident profile becomes predictable and acceptable.

Today I came across an article by Željko Obrenović, whose work I was unaware of at the time of writing the point-and-shoot paper. In his article Software Sketchifying: Bringing Innovation into Software Development (IEEE Software 30:3, May/June 2013) he outlines the ideas behind Sketchlet, a tool to help non-engineers try out different interaction designs. I haven’t tried Sketchlet yet, but apparently it allows interaction designers to work with technological components and services, combining and arranging them through a direct-manipulation user interface. Without having to program, the designer can take building blocks and play with them to try out ideas. Designers can thus quickly discard bad ideas before taking a selection of apparently good ones into the prototyping and, later, the implementation stage.

Conceptually this is pretty close to what I’d like to see for security design. There’s a catch, however: security design deals with dimensions that cannot be experienced immediately and need to be made visible through evaluation and analysis. Design sketches need to be evaluated in a second dimension against threat models capturing and representing adversary behavior. Nevertheless, Sketchifying looks like an interesting starting point for further exploration.

The misleading microscopic view

The Guardian lists 10 gross ingredients you didn’t know were in your food, ingredients like arsenic, hair, or silicone breast implant filler. Should we react with nausea and disgust? Of course not. Yummy food is yummy food; neither a just-detectable trace of something (arsenic), nor the source of an ingredient (hair), nor possible other uses of the same ingredient (breast implants) has any noticeable impact. That’s by definition: if a dose of anything has a proven adverse health impact, it will be banned from being used in food. The Guardian’s list is an example of microscopic properties that don’t matter macroscopically. Yummy food is yummy food.

We commit the same error when, in security, we look just at the software defects and neglect their security impact. All software has defects; we might easily assemble a list of 10, or 100, or 1000 defects you didn’t know were in your programs. This does not mean they’d all matter and need to be removed. A system is secure if it evokes a predictable and controlled incident profile over its lifetime. Some software defects in some systems affect this incident profile in such a way that their removal matters. Others are just traces of poison, or issues appearing problematic by analogy. The problem is: we often don’t know which is which.

Levels of Crime Opportunity

Just came across a crime science paper that expresses an idea similar to my security property degrees:

»In addition, for any crime, opportunities occur at several levels of aggregation. To take residential burglary as an example, a macro level, societal-level cause might be that many homes are left unguarded in the day because most people now work away from home (cf. Cohen and Felson 1979). A meso-level, neighborhood cause could be that many homes in poor public housing estates once used coin-fed fuel meters which offered tempting targets for burglars (as found in Kirkholt, Pease 1991). A micro-level level cause, determining the choices made by a burglar, could be a poorly secured door.«

(Ronald V Clarke: Opportunity makes the thief. Really? And so what?)

Clarke doesn’t elaborate any further on these macro/meso/micro levels of opportunity for crime. Maybe I’m reading too much into this paragraph, but in essence he seems to be talking about security properties – he discusses in his paper the proposition that opportunity is a cause of crime and reviews the literature on this subject. Opportunity means properties of places and targets.

An Exercise in Lateral Thinking

A year ago, in a slightly heated debate on secure software engineering, I used a photography analogy to make my point. The precise background of this debate does not matter; it should suffice to say that one party – “us” – opined that security engineering is difficult and complicated, while the other party – “them” – held the view that average software developers need just a couple of tools and examples to improve the results of their work, security-wise. Both sides had a point, considering their respective backgrounds, but they spoke of requirements while we spoke of the difficulty of fulfilling these requirements. To explain my position on the issue, I transferred the problem from security engineering into a totally unrelated field, photography. They seemed to expect they could turn average people into reasonably good photographers by handing them a highly automated point-and-shoot camera and a few examples of great photos. We ended the quarrel agreeing to disagree.

The train of thought thus started led to my latest paper Point-and-Shoot Security Design: Can We Build Better Tools for Developers? which I finished a few weeks ago, after having presented and discussed an earlier version at this year’s New Security Paradigms Workshop. In this paper I explore the photography analogy in general, interpreting (some aspects of) photography as visual engineering, and the point-and-shoot analogy of tool support in particular. The final version of the paper does not fully reflect its genesis as I moved the photography part, from which everything originates, into a separate section towards the end.

Describing in abstract terms different classes of properties that we can analyze and discuss in a photo, I develop the notion of property degrees, which I then transfer into security. Properties characterize objects, but they do so in different manners:

  • Microscopic properties characterize an object by its parts, and in terms that we can express and evaluate for each part in isolation. Taking a microscopic point of view, we describe a photo by its pixels and the security of a system by its security mechanisms and its defects.
  • Macroscopic properties characterize an object by the way it interacts with its surroundings. Macroscopic properties of a photo represent the reactions the photo evokes in the people viewing it, and the macroscopic security properties of a system characterize the reaction of a threat environment to the presence of this system.
  • In between, mesoscopic properties characterize the object in its entirety (as opposed to the microscopic view) but not its interaction with an environment (as opposed to macroscopic properties). We speak of mesoscopic properties if we discuss, for instance, the composition of a photo or the security of a system against a certain class of adversaries, considering their motivations and capabilities.

Speaking of property degrees as three distinct classes is a simplification; one should really think of the property degree as a continuum and of the three classes as tendencies. In a rigorous definition, which my paper doesn’t attempt, we would likely end up calling all properties mesoscopic except for those at the ends of the interval.

The ultimate objective of photography and security engineering alike, I argue, is to shape the macroscopic properties of that which one creates. Any object has properties at all three degrees; to design something means to control these properties consciously and deliberately. To do that, one needs to control lower-degree properties to support what one is trying to achieve. However, there are no simple and universal rules how macroscopic properties depend on mesoscopic and microscopic properties. To figure out these dependencies is a challenge that we leave to the artist. That’s necessary in art, but less desirable in security engineering.

Looking at some of the security engineering tools and techniques that we use today, I argue that security engineers enjoy just as much artistic freedom as photographers, although they shouldn’t. Most of our approaches to security design have a microscopic focus. The few mesoscopic and macroscopic tools we know, such as attack trees and misuse cases, are mere notations and provide little guidance to the security engineer using them. To do better, we have to develop ways of supporting macroscopic analysis and mesoscopic design decisions. Right now we are stuck in the microscopic world of security features and security bugs, unable to predict how well a security mechanism will protect us or how likely a bug is to be exploited in the wild.

Using photography as a model for security engineering is an intermediate impossible, a term coined by Edward de Bono for one aspect of lateral thinking. An intermediate impossible does not make much sense by itself, but serves as a stepping stone to something that might. In the case of point-and-shoot security design, it’s a double impossible, a) ignoring the boundary between art and engineering and, b) ignoring for a moment the adversarial relationships that we are so focused on and, simultaneously, so ignorant of in security. Out of it we get the universal notion of property degrees, and an application of this notion to the specific problems of security.

In a way, this work is a follow-up on my 2009 NSPW paper What Is the Shape of Your Security Policy? Security as a Classification Problem (mentioned here, here, and here). I based my considerations there on the notion of security policies, and later found it difficult to apply my ideas to examples without something bothering me. Security policies tend to become arbitrary when we understand neither what we are trying to achieve nor what it takes to achieve it. If you meticulously enforce a security policy, you still don’t have the slightest idea how secure you are in practice, facing an adversary that cares about your assumptions only to violate them. Property degrees don’t solve this problem, but maybe they make it a bit more explicit.