Tag Archives: 2009

Causal Insulation

I just came across an essay by Wolter Pieters that complements my 2009 NSPW paper (mentioned here and here in this blog before) in style and content. In The (social) construction of information security (author’s version as PDF), Pieters discusses security in terms of causal insulation. This notion has its roots in Niklas Luhmann’s sociological theory of risk. Causal insulation means that to make something secure, one needs to isolate it from undesired causes, in the case of security from those that attackers would intentionally produce. On the other hand, some causes need to be allowed as they are necessary for the desired functioning of a system.

I used a similar idea as the basis of my classifier model. A system in an environment creates a range of causalities—cause-effect relationships—to be considered. A security policy defines which of the causes are allowed and which ones are not, splitting the overall space into two classes. This is the security problem. Enforcing this policy is the objective of the security design of a system, its security mechanisms and other security design properties.
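To make this concrete, here is a minimal sketch in Python. It is my own illustration, not anything from Pieters’ essay or from my paper; the cause representation and the example policy are invented.

```python
# A security policy as a binary classifier over a space of causes.
# Causes are modeled, purely for illustration, as (actor, action, target) triples.
ALLOWED_CAUSES = {
    ("user", "read", "public_page"),
    ("user", "write", "own_profile"),
}

def policy_allows(cause: tuple[str, str, str]) -> bool:
    """The security problem: split the cause space into two classes."""
    return cause in ALLOWED_CAUSES

print(policy_allows(("user", "read", "public_page")))    # True: desired functioning
print(policy_allows(("attacker", "flood", "database")))  # False: undesired cause
```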

A security mechanism, modeled as a classifier, enforces some private policy in a mechanism-dependent space, and maps the security problem to this private space through some kind of feature extraction. In real-world scenarios, any mechanism is typically less complex than the actual security problem. The mapping implies loss of information and may be inaccurate and partial; as a result, the solution of the security problem by a mechanism or a suite of mechanisms becomes inaccurate even if the mechanism works perfectly well within its own reference model. My hope is that the theory of classifiers lends us some conceptual tools to analyze the degree and the causes of such inaccuracies.
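Continuing the same toy example (the first two definitions repeat the sketch above so that this snippet runs on its own), a lossy feature extraction already shows where such inaccuracy can come from:

```python
# A mechanism as a classifier in its own, smaller feature space.
ALLOWED_CAUSES = {("user", "read", "public_page"), ("user", "write", "own_profile")}

def policy_allows(cause):          # the actual security problem
    return cause in ALLOWED_CAUSES

def extract_features(cause):       # mapping into the mechanism's private space
    actor, action, target = cause
    return action                  # actor and target are lost here

def mechanism_allows(feature):     # the mechanism's private policy
    return feature in {"read", "write"}

for cause in [("user", "read", "public_page"),
              ("attacker", "read", "customer_db"),
              ("user", "write", "own_profile")]:
    print(cause, policy_allows(cause), mechanism_allows(extract_features(cause)))

# ("attacker", "read", "customer_db") is denied by the policy but accepted by
# the mechanism: the lossy mapping makes the overall solution inaccurate even
# though the mechanism is perfectly consistent within its own reference model.
```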

What my model does not capture very well is the fact that any part of a system not only classifies causalities but also defines new ones; I’m still struggling with this. I also struggle with practical applicability, as the causality model for any serious example quickly explodes in size.

Fighting back

Sherr, M.; Shah, G.; Cronin, E.; Clark, S. and Blaze, M.: Can They Hear Me Now? A Security Analysis of Law Enforcement Wiretaps. CCS’09.

Abstract:

»Although modern communications services are susceptible to third-party eavesdropping via a wide range of possible techniques, law enforcement agencies in the US and other countries generally use one of two technologies when they conduct legally-authorized interception of telephones and other communications traffic. The most common of these, designed to comply with the 1994 Communications Assistance for Law Enforcement Act (CALEA), use a standard interface provided in network switches. This paper analyzes the security properties of these interfaces. We demonstrate that the standard CALEA interfaces are vulnerable to a range of unilateral attacks by the intercept target. In particular, because of poor design choices in the interception architecture and protocols, our experiments show it is practical for a CALEA-tapped target to overwhelm the link to law enforcement with spurious signaling messages without degrading her own traffic, effectively preventing call records as well as content from being monitored or recorded. (…)«

600:500,000

How afraid of street robbery do we really need to be? Here are the figures for Leipzig (population 500,000) over the past few years:

»Investigators assume that the number of robberies has risen again this year and will level off at around 600. Last year the criminal police registered 547 cases; in 2007 there were 590.«

(LVZ: Beatings for a few euros: around 600 robberies this year)

That is roughly 1.2 cases per 1,000 person-years.
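The arithmetic behind that figure: 600 cases per year divided by 500,000 inhabitants is 0.0012 cases per inhabitant and year, i.e. about 1.2 cases per 1,000 person-years.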

NSPW 2009 Papers Online

Just a quick note: The final papers for the New Security Paradigms Workshop 2009 are now online, including my own (also here). Two of them got their share of public attention already, Maritza Johnson’s Laissez-faire file sharing (in Bruce Schneier’s blog) and Cormac Herley’s So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users (Schneier’s blog; New School of Information Technology; Heise.de). For those of you who can afford the trip, the authors will present these two papers again in a session at ACSAC, December 7-11.

Swiss Cheese Security

I’m off for the New Security Paradigms Workshop in Oxford, where I will present what I currently call the Swiss Cheese security policy model. My idea is to model security mechanisms as classifiers, and security problems in a separate world model as classification problems. In such a model we can (hopefully) analyze how well a mechanism or a combination of mechanisms solves the actual problem. NSPW is my first test drive of the general idea. If it survives the workshop I’m going to work out the details. My paper isn’t available yet; final versions of NSPW papers are to be submitted a few weeks after the workshop.

Production-safe Testing

Software testers increasingly have to deal with production systems. Some tests make sense only on production systems, such as Nessus-style vulnerability scanning. And an increasing number of systems are hard to reproduce in a test bed, because such a system is really a mashup of services, sharing infrastructure with other systems on various levels of abstraction.

Testing production systems imposes an additional requirement upon the tester: production safety. Testing is production-safe if it does not cause undesired side effects for the users of the tested system or of any other system. Potential side effects are manifold: denial of service, information disclosure, real-world effects caused by test inputs, or alteration of production data, to name just a few. Testers of production systems therefore must take precautions to limit the risks of their testing.
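For illustration, here is a small sketch of how some of these precautions might look in a test script: an explicit scope allowlist, read-only requests, and a crude rate limit. This is my own toy example, not a precaution catalogue from our paper or from anyone’s product; the host and paths are placeholders.

```python
# Illustrative sketch only: three common precautions wired into a trivial
# HTTP probe, namely a scope allowlist, read-only (GET) requests, and a
# fixed request rate. "app.example.com" is a placeholder, not a real target.
import time
import urllib.request
from urllib.parse import urlparse

SCOPE = {"app.example.com"}     # only hosts the client has explicitly authorized
MAX_REQUESTS_PER_SECOND = 2     # stay far below production capacity

def safe_get(url: str, timeout: float = 5.0) -> int:
    host = urlparse(url).hostname
    if host not in SCOPE:
        raise ValueError(f"{host} is out of scope, refusing to touch it")
    request = urllib.request.Request(url, method="GET")   # no state-changing verbs
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return response.status

def probe(urls):
    for url in urls:
        print(safe_get(url), url)
        time.sleep(1.0 / MAX_REQUESTS_PER_SECOND)          # crude rate limit

# probe(["https://app.example.com/", "https://app.example.com/search?q=test"])
```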

Unfortunately, it is not yet very clear what this means in practice. Jeremiah Grossman unwittingly started a discussion when he made production safety a criterion in his comparison of website vulnerability assessment vendors. Yesterday he followed up on this matter with a questionnaire, which is supposed to help vendors and their clients discuss production safety.

The time is just right to point to our own contribution to this discussion. We felt a lack of documented best practice for production-safe testing, so we documented what we learned over a few years of security testing. The result is a short paper, which my colleague and co-author Jörn is going to present this weekend at the TAIC PART 2009 conference: Testing Production Systems Safely: Common Precautions in Penetration Testing. In this paper we tried to generalize our solutions to the safety problems we encountered.

The issue is also being discussed in the cloud computing community, but their starting point is slightly different. Service providers might want to ban activities such as automated scanning, and deploy technical and legal measures to enforce such a ban. They have good reason to do so, but their users may have equally good reason to do security testing. One proposal being discussed is a ScanAuth API to separate legitimate from rogue scans. Such an API will, however, only solve the formal part of the problem. Legitimate testing still needs to be production-safe.
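I have not seen a concrete specification of such an API, so the following Python sketch is merely a guess at the shape such an interface could take; all names and fields are hypothetical. The idea, as I understand it: the customer announces a scan window and source address, receives a token, and the provider matches observed scan traffic against the announced windows.

```python
# Hypothetical sketch of a ScanAuth-style handshake; none of these names or
# fields correspond to a real provider API.
import uuid
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ScanAuthorization:
    token: str
    source_ip: str
    target: str
    not_before: datetime
    not_after: datetime

def request_scan_authorization(source_ip: str, target: str,
                               duration_hours: int = 4) -> ScanAuthorization:
    """Customer side: announce a scan; the provider records and returns this."""
    start = datetime.now(timezone.utc)
    return ScanAuthorization(token=str(uuid.uuid4()), source_ip=source_ip,
                             target=target, not_before=start,
                             not_after=start + timedelta(hours=duration_hours))

def is_authorized_scan(auth: ScanAuthorization, source_ip: str,
                       target: str, when: datetime) -> bool:
    """Provider side: match observed scan traffic against announced windows."""
    return (auth.source_ip == source_ip and auth.target == target
            and auth.not_before <= when <= auth.not_after)

auth = request_scan_authorization("198.51.100.7", "203.0.113.0/24")
print(is_authorized_scan(auth, "198.51.100.7", "203.0.113.0/24",
                         datetime.now(timezone.utc)))   # True
```

Even with such a handshake in place, the token would only settle who may scan, not whether the scan is safe for the production system.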