How Google determines which ad to display in a slot and how much to charge the advertiser:
(YouTube)
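In a nutshell, it is an auction: the slot goes to the ad with the highest product of bid and quality, and the winner is charged roughly the minimum bid that would still have won. A toy sketch in Python, with invented numbers and a deliberately simplified pricing rule:

```python
# Sketch of a quality-weighted generalized second-price (GSP) auction,
# the mechanism behind search-ad slots. Simplified: real systems add
# reserve prices, budgets, and far more elaborate quality models.

def run_auction(bids):
    """bids: list of (advertiser, bid, quality) tuples.
    Returns (winner, price) for a single ad slot."""
    # Rank ads by bid * quality ("ad rank"), not by bid alone.
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    if not ranked:
        return None
    winner_name, winner_bid, winner_quality = ranked[0]
    if len(ranked) == 1:
        price = 0.01  # only bidder: pays a minimal reserve price
    else:
        _, next_bid, next_quality = ranked[1]
        # The winner pays just enough to keep its rank above the
        # runner-up: the runner-up's ad rank divided by its own quality.
        price = (next_bid * next_quality) / winner_quality + 0.01
    return winner_name, round(price, 2)

print(run_auction([("A", 4.00, 0.6), ("B", 3.00, 0.9), ("C", 2.00, 0.8)]))
# B wins (ad rank 2.7) and pays A's rank 2.4 / 0.9 + 0.01 = 2.68,
# less than its own bid of 3.00.
```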
Everyone knows the story of Clifford Stoll and the West German hackers who sold their findings to the KGB in the late 80s (see the video below). Does this history teach us something today? What strikes me as I watch this documentary again is the effort ratio between attackers and defenders. To fight a small adversary group, Stoll invested considerable effort, and at some point drew further people and organizations into the hunt. In effect, once they had been detected, the attackers were on their way to being overpowered and apprehended.
Today, we take more organized approaches to security management and incident response. At the same time, however, we try to become more efficient: we want to believe in automated mechanisms like data leakage prevention and policy enforcement. But these mechanisms work on abstractions – they are less complicated than actual attacks. We also want to believe in preventive security design, but soon find ourselves locked in an eternal arms race, as our designs never fully anticipate how attackers adapt. Can procedures and programs be smart enough to fend off intelligent attackers, or does winning still simply take more brains on the defender's side than on the attacker's?
(YouTube)
About a year ago, when I received the review comments on Point-and-Shoot Security Design (discussed in this blog before), I was confronted with a question I could not answer at that time. One reviewer took my photography analogy seriously and asked:
What is the Instagram of information security?
A tough one, the more so as I had never used Instagram. But the question starts to make sense in light of an explanation that I came across recently:
»Ask any teenager if they would want to share multi-photo albums on Instagram, and they wouldn’t understand you. The special thing about Instagram is that they focus on one photo at a time. Every single photo is a piece of handcrafted excellence. When you view an Instagram photo, you know the photo is the best photo of many photos. The user chose to upload that specific photo and spent a good amount of time picking a filter and editing it.«
One of the points I made in my paper was that design requires design space exploration before refinement and attention to detail. Even if one starts to consider security early in a development process, one may still end up bolting security on rather than designing it in, if one lets non-security requirements drive the design process and merely looks for security mechanisms to add to the design. One should rather, I theorized, actively search the security design space for solution candidates and evaluate them against threat models to identify viable solutions to the security problems the operational environment will pose. Designing a secure system is not in the first place about defining a security policy or guaranteeing certain formal, microscopic security properties. Security design is rather about shaping the behavior of adversarial actors such that the resulting incident profile becomes predictable and acceptable.
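To illustrate the difference in workflow, here is a toy sketch of such exploration. The design dimensions, the threat model, and all the numbers are made up for illustration; the point is the shape of the process, enumerate first, evaluate against threats, refine later:

```python
# Toy sketch of security design space exploration: generate candidate
# designs first, evaluate each against a threat model, and only then
# refine the survivors. All names and figures are invented.

from itertools import product

# Dimensions of a hypothetical design space for remote access.
AUTHENTICATION = ["password", "password+otp", "client-cert"]
EXPOSURE       = ["direct", "vpn", "gateway"]

def expected_incidents(auth, exposure):
    """Hypothetical threat model: estimated incidents per year."""
    risk = {"password": 3.0, "password+otp": 1.0, "client-cert": 0.5}[auth]
    risk *= {"direct": 3.0, "vpn": 1.0, "gateway": 1.5}[exposure]
    return risk

def explore(acceptable=2.0):
    # Enumerate the whole space before committing to any one design...
    candidates = product(AUTHENTICATION, EXPOSURE)
    # ...and keep only those whose predicted incident profile is acceptable.
    return [(a, e) for a, e in candidates
            if expected_incidents(a, e) <= acceptable]

for design in explore():
    print(design)
```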
Today I came across an article by Željko Obrenović, whose work I was unaware of at the time of writing the point-and-shoot paper. In his article Software Sketchifying: Bringing Innovation into Software Development (IEEE Software 30(3), May/June 2013) he outlines the ideas behind Sketchlet, a tool to help non-engineers try out different interaction designs. I haven't tried Sketchlet yet, but apparently it allows interaction designers to work with technological components and services, combining and arranging them through a direct-manipulation user interface. Without having to program, the designer can take building blocks and play with them to try out ideas. Designers can thus quickly discard bad ideas before taking a selection of apparently good ones into the prototyping and, later, the implementation stage.
Conceptually this is pretty close to what I'd like to see for security design. There's a catch, however: security design deals with dimensions that can't be experienced immediately; they need to be made visible through evaluation and analysis. Design sketches need to be evaluated in a second dimension, against threat models capturing and representing adversary behavior. Nevertheless, Sketchifying looks like an interesting starting point for further exploration.
An article in the latest issue of CACM (paywalled) quotes some prices advertised by criminals for elementary attack building blocks:
If you want to get rich, don't waste your time on developing sophisticated attack techniques. Look at the services available and devise a business model.
A few weeks ago we saw the Russian way of obtaining soda from a vending machine. It was simple and robust. French nerds employ an entirely different style:
The Guardian lists 10 gross ingredients you didn't know were in your food, ingredients like arsenic, hair, or silicone breast implant filler. Should we react with nausea and disgust? Of course not. Yummy food is yummy food; neither a barely detectable trace of something (arsenic), nor the source of an ingredient (hair), nor possible other uses of the same ingredient (breast implants) have any noticeable impact. That's by definition: if a dose of anything has a proven adverse health impact, it will be banned from being used in food. The Guardian's list is an example of microscopic properties that don't matter macroscopically. Yummy food is yummy food.
We commit the same error when, in security, we look just at software defects and neglect their security impact. All software has defects; we might easily assemble a list of 10, or 100, or 1000 defects you didn't know were in your programs. This does not mean they'd all matter and need to be removed. A system is secure if it evokes a predictable and controlled incident profile over its lifetime. Some software defects in some systems affect this incident profile in such a way that their removal matters. Others are just traces of poison, or issues appearing problematic by analogy. The problem is: we often don't know which is which.
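To restate the point with a toy calculation (all figures invented): what matters is a defect's expected contribution to the incident profile, not its mere presence. The catch, of course, is that the exploitation probabilities such a ranking depends on are exactly what we usually don't know:

```python
# Toy triage: rank defects by expected contribution to the incident
# profile (likelihood of exploitation x impact), not by mere presence.
# All identifiers and figures are invented for illustration.

defects = [
    {"id": "DEFECT-1", "p_exploit": 0.30,  "impact": 9},  # matters
    {"id": "DEFECT-2", "p_exploit": 0.001, "impact": 2},  # a trace of poison
    {"id": "DEFECT-3", "p_exploit": 0.05,  "impact": 8},
]

# Sort by expected incident contribution, highest first.
for d in sorted(defects, key=lambda d: d["p_exploit"] * d["impact"],
                reverse=True):
    print(d["id"], round(d["p_exploit"] * d["impact"], 3))
```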
(via)
I really liked the headline, though I didn't like what the article said, i.e. that the U.S. President might start a cyber war or launch a preemptive strike without a real conflict with the state in question. The commentator at the Homeland Security Blog gets the point: „While Presidential Policy Directive 20 is secret, what is known about it is sufficient to raise global concern. The US arsenal is stupefying as it is, and cyber capabilities add a new dimension; and preemption brings us back to the fear and insecurity of the chilliest Cold War years. The undertones of the new policy are aggressive and, as of now, there are no known restrictions.“
This video shows a gang of street robbers at work in Bogotá. The commentary is in Spanish, but you'll probably get the interesting points just from watching: how they change clothes, how one of them picks a victim and signals the pick to the others, or how they literally overpower their target.
(YouTube)
What happens when a government enacts and enforces a mandatory helmet law for motorcycle riders? According to criminologists, a reduction in motorcycle theft follows. The 1989 study Motorcycle Theft, Helmet Legislation and Displacement by Mayhew et al. (paywalled, see Wikipedia summary) demonstrated this effect empirically, looking at crime figures from Germany, where riding a motorcycle without a helmet has been subject to fines since 1980. This led to a 60% drop in motorcycle theft – interestingly, with limited compensation by increases in other types of vehicle theft.
The plausible explanation: motorcycle thieves incur a higher risk of being caught when riding without a helmet after a spontaneous, opportunistic theft. Adaptation is possible, but costly and risky too: looking for loot with a helmet in one's hand is more conspicuous, and preparing specifically to steal motorcycles reduces flexibility and narrows the range of possible targets.
Which of the following statements do you agree or disagree with, and why?
Do you agree with all, some, or none of these statements? Please elaborate in the comments. I'm not so much interested in nitpicking about the causation of certain incidents – read »your fault« as »in part your fault« if you like. What interests me rather is the consistency or inconsistency in our assessment of these matters. If you happen to agree with some of the statements but disagree with others, why is that?
P.S. (2014-09-05): Samuel Liles and Eugene Spafford discuss this matter more thoroughly: What is wrong with all of you? Reflections on nude pictures, victim shaming, and cyber security
Just came across a crime science paper that expresses an idea similar to my security property degrees:
»In addition, for any crime, opportunities occur at several levels of aggregation. To take residential burglary as an example, a macro level, societal-level cause might be that many homes are left unguarded in the day because most people now work away from home (cf. Cohen and Felson 1979). A meso-level, neighborhood cause could be that many homes in poor public housing estates once used coin-fed fuel meters which offered tempting targets for burglars (as found in Kirkholt, Pease 1991). A micro-level cause, determining the choices made by a burglar, could be a poorly secured door.«
(Ronald V Clarke: Opportunity makes the thief. Really? And so what?)
Clarke doesn’t elaborate any further on these macro/meso/micro levels of opportunity for crime. Maybe I’m reading too much into this paragraph, but in essence he seems to talk about security properties – in his paper he discusses the proposition that opportunity is a cause of crime and reviews the literature on this subject. Opportunity means properties of places and targets.
Now this is an interesting specialization of phishing attacks:
»The scam works like this: Someone obtains a list of articles submitted to — and under review by — a publisher, along with the corresponding author’s contact information. Then, pretending to be the publisher, the scammers send a bogus email to the corresponding author informing him that his paper has been accepted. The email also contains article processing fee information, advising the author to wire the fee to a certain person. The problem is that it’s not the publisher sending out the acceptance emails — it’s a bad guy.«
(Scholarly Open Access: Fraud Alert: Bogus Article Acceptance Letters)
I doubt this constitutes a sustainable attack business model, but the perpetrators surely deserve credit for being adaptive and creative.
Bruce Schneier dug out this little gem: Forbidden Spheres. A nuclear weapons lab, so the (apparently unconfirmed) story goes, classified all spherical objects as confidential, rather than just those related to nuclear weapons design. In the security-as-classification paradigm this makes a lot of sense. The desired security policy is to keep everything related to weapons design confidential. This includes objects that might give away information, which one will naturally find in an R&D lab. Enforcing this policy, however, requires either comprehensive tracking of all objects, or complicated considerations to decide whether an object is classified as confidential under the policy or not. But a subset of the objects under consideration has a common, easy-to-detect property: they are spherical. The classification required as a prerequisite for policy enforcement becomes much simpler if one considers only this feature. The classification also becomes less perfect; there are classification errors. However, if it's all about nuclear spheres, then the simplified classifier errs systematically toward the safe side, erroneously classifying innocuous spherical objects as confidential. As long as this doesn't disturb the intended operations of the lab, it may well be acceptable. The approach would break down if an adversary, or an accident, could easily change the relevant shapes without defeating the objects' purpose.
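The logic is easy to caricature in code. A toy model (mine, not the lab's actual rules), showing both the cheap proxy feature and where it breaks:

```python
# Toy model of the "forbidden spheres" policy: classify by an
# easy-to-detect proxy feature instead of tracking actual sensitivity.

def is_confidential(obj):
    # The precise policy would require knowing each object's history:
    #   return obj["related_to_weapons_design"]   # expensive to determine
    # The simplified classifier keys on shape alone:
    return obj["shape"] == "sphere"

objects = [
    {"name": "implosion test mockup", "shape": "sphere"},    # true positive
    {"name": "desk globe",            "shape": "sphere"},    # false positive, but safe
    {"name": "blueprint tube",        "shape": "cylinder"},  # false negative risk!
]

for obj in objects:
    print(obj["name"], "->",
          "classified" if is_confidential(obj) else "open")
```

The first two lines of output show the systematic bias toward the safe side; the third shows the breakdown condition, a sensitive object whose shape the proxy rule never catches.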
A year ago, in a slightly heated debate on secure software engineering, I used a photography analogy to make my point. The precise background of this debate does not matter; it should suffice to say that one party – “us” – opined that security engineering is difficult and complicated, while the other party – “them” – held the view that average software developers need just a couple of tools and examples to improve the results of their work, security-wise. Both sides had a point, considering their respective backgrounds, but they spoke of requirements while we spoke of the difficulty of fulfilling these requirements. To explain my position on the issue, I transferred the problem from security engineering into a totally unrelated field: photography. They seemed to expect they could turn average people into reasonably good photographers by handing them a highly automated point-and-shoot camera and a few examples of great photos. We ended the quarrel agreeing to disagree.
The train of thought thus started led to my latest paper Point-and-Shoot Security Design: Can We Build Better Tools for Developers? which I finished a few weeks ago, after having presented and discussed an earlier version at this year’s New Security Paradigms Workshop. In this paper I explore the photography analogy in general, interpreting (some aspects of) photography as visual engineering, and the point-and-shoot analogy of tool support in particular. The final version of the paper does not fully reflect its genesis as I moved the photography part, from which everything originates, into a separate section towards the end.
Describing in abstract terms different classes of properties that we can analyze and discuss in a photo, I develop the notion of property degrees, which I then transfer into security. Properties characterize objects, but they do so in different manners and at different degrees: microscopic properties pertain to small details considered in isolation, macroscopic properties to the object as a whole as we experience it, and mesoscopic properties to everything in between.
Speaking of property degrees as three distinct classes is a simplification; one should really think of the property degree as a continuum and of the three classes as tendencies. In a rigorous definition, which my paper doesn't attempt, we would likely end up calling all properties mesoscopic except for those at the ends of the interval.
The ultimate objective of photography and security engineering alike, I argue, is to shape the macroscopic properties of that which one creates. Any object has properties at all three degrees; to design something means to control these properties consciously and deliberately. To do that, one needs to control lower-degree properties in support of what one is trying to achieve. However, there are no simple and universal rules for how macroscopic properties depend on mesoscopic and microscopic properties. Figuring out these dependencies is a challenge that we leave to the artist. That may be necessary in art, but it is less desirable in security engineering.
Looking at some of the security engineering tools and techniques that we use today, I argue that security engineers enjoy just as much artistic freedom as photographers, although they shouldn't. Most of our approaches to security design have a microscopic focus. The few mesoscopic and macroscopic tools we know, such as attack trees and misuse cases, are mere notations and provide little guidance to the security engineer using them. To do better, we have to develop ways of supporting macroscopic analysis and mesoscopic design decisions. Right now we are stuck in the microscopic world of security features and security bugs, unable to predict how well a security mechanism will protect us or how likely it is that a bug will be exploited in the wild.
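To see why I call them mere notations, consider an attack tree reduced to code. The structure is trivial to represent and evaluate; the tree and the cost figures below are invented for illustration:

```python
# Minimal attack tree evaluation: OR nodes take the cheapest child,
# AND nodes sum their children's costs. Tree and figures are invented.

def cheapest_attack(node):
    if "cost" in node:                      # leaf: an elementary attack step
        return node["cost"]
    costs = [cheapest_attack(c) for c in node["children"]]
    return sum(costs) if node["type"] == "AND" else min(costs)

open_safe = {
    "type": "OR", "children": [
        {"cost": 10_000},                   # pick the lock
        {"type": "AND", "children": [       # learn the combo AND get inside
            {"cost": 2_000},                # bribe an employee
            {"cost": 500},                  # tailgate into the building
        ]},
    ],
}

print(cheapest_attack(open_safe))  # 2500: the bribe-and-tailgate path
```

The fold over the tree is the easy part. Where the tree and the leaf values come from, and which subtrees are even worth considering, is where all the engineering judgment hides, and there the notation is silent.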
Using photography as a model for security engineering is an intermediate impossible, a term coined by Edward de Bono for one aspect of lateral thinking. An intermediate impossible does not make much sense by itself, but serves as a stepping stone to something that might. In the case of point-and-shoot security design, it’s a double impossible, a) ignoring the boundary between art and engineering and, b) ignoring for a moment the adversarial relationships that we are so focused on and, simultaneously, so ignorant of in security. Out of it we get the universal notion of property degrees, and an application of this notion to the specific problems of security.
In a way, this work is a follow-up on my 2009 NSPW paper What Is the Shape of Your Security Policy? Security as a Classification Problem (mentioned here, here, and here). I based my considerations there on the notion of security policies, and later found it difficult to apply my ideas to examples without something bothering me. Security policies tend to become arbitrary when we understand neither what we are trying to achieve nor what it takes to achieve it. If you meticulously enforce a security policy, you still don’t have the slightest idea how secure you are in practice, facing an adversary that cares about your assumptions only to violate them. Property degrees don’t solve this problem, but maybe they make it a bit more explicit.