We interrupt this blog for an unpaid commercial:
Helmet law FAIL
This is what bicycle helmet laws achieve:
»The New Zealand helmet law (all ages) came into effect on 1 January 1994. It followed Australian helmet laws, introduced in 1990–1992. Pre-law (in 1990) cyclist deaths were nearly a quarter of pedestrians in number, but in 2006–09, the equivalent figure was near to 50% when adjusted for changes to hours cycled and walked. From 1988–91 to 2003–07, cyclists’ overall injury rate per hour increased by 20%. Dr Hillman, from the UK’s Policy Studies Institute, calculated that life years gained by cycling outweighed life years lost in accidents by 20 times. For the period 1989–1990 to 2006–2009, New Zealand survey data showed that average hours cycled per person reduced by 51%. This evaluation finds the helmet law has failed in aspects of promoting cycling, safety, health, accident compensation, environmental issues and civil liberties.«
(Colin F. Clarke: Evaluation of New Zealand’s bicycle helmet law)
In a Word
Google’s Progress Bar
Underestimated Risks: Texting While Walking
Safer Cycling Animations
In a Word
Lifecycle
Causal Insulation
I just came across an essay by Wolter Pieters that complements my 2009 NSPW paper (mentioned here and here in this blog before) in style and content. In The (social) construction of information security (author’s version as PDF), Pieters discusses security in terms of causal insulation. This notion has its roots in Niklas Luhmann’s sociological theory of risk. Causal insulation means that to make something secure, one needs to isolate it from undesired causes; in the case of security, these are the causes that attackers would intentionally produce. On the other hand, some causes need to be allowed as they are necessary for the desired functioning of a system.
I used a similar idea as the basis of my classifier model. A system in an environment creates a range of causalities—cause-effect relationships—to be considered. A security policy defines which of the causes are allowed and which ones are not, splitting the overall space into two classes. This is the security problem. Enforcing this policy is the objective of the security design of a system, its security mechanisms and other security design properties.
A security mechanism, modeled as a classifier, enforces some private policy in a mechanism-dependent space, and maps the security problem to this private space through some kind of feature extraction. In real-world scenarios, any mechanism is typically less complex than the actual security problem. The mapping implies loss of information and may be inaccurate and partial; as a result, the solution of the security problem by a mechanism or a suite of mechanisms becomes inaccurate even if the mechanism works perfectly well within its own reference model. My hope is that the theory of classifiers lends us some conceptual tools to analyze the degree and the causes of such inaccuracies.
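To make this a bit more tangible, here is a minimal Python sketch of the classifier view. Everything in it (the toy policy, the firewall-like mechanism, the single extracted feature) is hypothetical and made up for illustration; it is not the model from my paper or from Pieters’ essay:
# A minimal sketch of the classifier view; all names and the toy policy
# are hypothetical and chosen only for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cause:
    actor: str         # who produces the cause
    action: str        # what the cause does to the system
    via_network: bool  # one of many conceivable features

def policy(cause):
    # The security problem: the policy's allow/deny split over all causes.
    return not (cause.actor == "outsider" and cause.action == "write")

def features(cause):
    # Feature extraction: the mechanism sees only an abstraction of the
    # cause; information is lost here.
    return (cause.via_network,)

def mechanism(feats):
    # A firewall-like mechanism classifying in its own, simpler space:
    # allow only causes that do not arrive over the network.
    (via_network,) = feats
    return not via_network

# The mechanism can be perfect in its own reference model and still
# misclassify causes of the original problem:
for c in (Cause("insider", "read", True), Cause("outsider", "write", False)):
    print(policy(c), mechanism(features(c)))
The point of the sketch is only the structure: two classifiers over different spaces, connected by a lossy mapping.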
What my model does not capture very well is the fact that any part of a system not only classifies causalities but also creates new ones; I’m still struggling with this. I also struggle with practical applicability, as the causality model for any serious example quickly explodes in size.
Underestimated Risks: Radon
(youtube)
Flawed Security Economics
I just stumbled upon a piece of economics-of-security reasoning that amazes me:
»Bank robbers steal approximately $100 million per year in the US. (…) To prevent this, banks spend $600 million per year on armored car services and $25 million per year on vault doors. The FBI spends $50 million per year investigating robberies. A good deal more is spent on security guards—approximately 1 million in the US, paid about $24 billion per year (outsourcing makes it difficult to say how many work for banks). In summary, the cost of protecting against bank robberies far exceeds the loss.«
(Michael Lesk, Cybersecurity and Economics, IEEE S&P Nov./Dec. 2011)
I don’t doubt the figures, but the conclusion does not make sense to me. Why should one put the cost of security measures in relation to the losses that they don’t prevent? The $100 million per year are the losses that remain after security. What the security investment prevents is the losses that would occur without it, not the losses that continue to occur despite the effort. I’d love to see an estimation of this quantity. The author even gives a hint towards the possible magnitude, as he continues:
»To look at a less safe society, in a single 2007 bank robbery in Baghdad, robbers made off with $280 million (it was an inside job).«
Perhaps it is even normal for the cost of security to exceed the losses that remain, once the security spending has been optimized?
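A back-of-the-envelope sketch shows what the comparison should look like; only the cost and residual-loss figures are from the article, the counterfactual is a number I made up for illustration:
# Only the cost and residual-loss figures are from the quote above;
# the counterfactual loss without any security is an invented number.
spent = 600e6 + 25e6 + 50e6      # armored cars + vault doors + FBI, per year
residual = 100e6                 # losses that still occur despite security
no_security_losses = 2e9         # hypothetical: what robbers would take otherwise

print("naive comparison (cost > residual loss):", spent > residual)
print("net benefit of security:", no_security_losses - residual - spent)
Whether the net benefit is positive depends entirely on the counterfactual, which is exactly the quantity the article does not estimate.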
Using an Inappropriate Classifier As a Security Mechanism
Zed Shaw has a story to tell about ACLs (Access Control Lists) as a common security mechanism and how they are incapable of modeling actual regulatory requirements:
http://vimeo.com/2723800
(Vimeo: The ACL is Dead)
It’s not really a talk about ACLs; it’s about how companies work and how to survive and stay sane inside enterprises. I’ll focus here, however, on the technical issue that he uses as a hook.
He poses the technical problem as »ACLs not being Turing-complete«. According to my favorite abstraction of security mechanisms, the classifier model, ACL access control schemes are a type of classifier that does not fit the problem. All security mechanisms distinguish deny from allow, just in different sets of entities and with different boundaries between the two subsets. A low-complexity classifier can handle only subsets with a simple boundary between them—most entities have only neighbors of the same class, and those near the boundary have other-class neighbors only in one direction—whereas a complex classifier can model more complex class distinctions. The most complex classification would be a random assignment of classes to entities.
Two things (at least) affect the complexity that a classifier can handle: classifier design and feature extraction. Classifier design defines the boundaries that a classifier can model. Feature extraction defines the parameters or dimensions available to the classifier, the degree of abstraction with which the classifier sees the world. Authentication, for instance, has a high degree of abstraction: it can distinguish entities but nothing else. Access control is richer in the parameters it uses, including, besides the identity of entities, properties of objects and actions. Yet, as the talk illustrates, these dimensions are not sufficient to properly express regulatory requirements. Whether this is a problem of the mechanism or a problem of the requirements I leave for the reader to ponder.
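To illustrate the feature-extraction point, here is a small Python sketch. The separation-of-duties rule in it is a hypothetical, regulatory-flavored requirement of my own making, not one of the examples from the talk:
# Sketch of the feature-extraction problem with ACLs; the rule below is a
# hypothetical regulatory-style requirement, not taken from the talk.

# A classic ACL sees only (subject, object, action):
acl = {
    ("alice", "trade_42", "approve"): True,
    ("bob",   "trade_42", "approve"): True,
}

def acl_allows(subject, obj, action):
    return acl.get((subject, obj, action), False)

# A separation-of-duties rule needs a dimension the ACL never sees:
# who created the object in the first place.
def policy_allows(subject, obj, action, created_by):
    if action == "approve" and subject == created_by:
        return False  # nobody may approve their own trade
    return acl_allows(subject, obj, action)

print(acl_allows("alice", "trade_42", "approve"))              # True
print(policy_allows("alice", "trade_42", "approve", "alice"))  # False
No matter how the ACL entries are arranged, the rule cannot be expressed without the extra dimension.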
The Point of Penetration Testing
I’m glad to see that the community is starting to acknowledge what penetration testing really means. We never encountered the empty-hands issue, as we never sold the let’s-see-if-we-can-hack-it kind of penetration test. We carry out vulnerability assessments based on whichever information we can get about a system or product, the more the better, and use testing as a means of obtaining more information or clarifying what we have. The point of a penetration test is not to find and demonstrate one or a handful of arbitrary attacks; the point of a penetration test is to identify vulnerabilities that need attention. Sure, it is fun to give an uberhacker demo after the test, and we do that, too, if we’ve found gaping security holes. But more important in a penetration test is good coverage of the potential vulnerabilities and a sound assessment of their importance relative to the purpose and risk profile of a system. I even remember pentesting projects where we found only a few relatively minor issues in a low-risk website – and made our clients happy by telling them just that. There is really no need to be disappointed if a penetration test ends without a root shell. There is no point in withholding information from testers, reducing their efficiency. And there is no point in testing just the infrastructure but not the applications.
In a Word
Neat XSS Trick
I just learned a neat XSS trick from a blog post by Andy Wingo. This is old news to the more proficient web application testers among my readers, but it was new to me and I like it. I wasn’t aware that one could redirect function calls in JavaScript so easily:
somefunction = otherfunction;
somewhere in a script makes somefunction a reference to otherfunction. This works with functions of the built-in API, too, so any function can become eval:
alert = eval;
or
window.alert = eval;
So if you can inject a small amount of script somewhere, and a function parameter elsewhere, you can turn the function that happens to be called into the eval you need. This PHP fragment illustrates the approach:
<?php
// XSS vulnerability limited to 35 characters
print substr($_GET["inject1"], 0, 35);
?>
<script>
// URL parameter goes into a seemingly harmless function - which we
// can redefine to become eval
<?php $alertparam = htmlspecialchars(addslashes($_GET["inject2"]), ENT_COMPAT ); ?>
alert('<?=$alertparam?>');
</script>
The first PHP block allows up to 35 characters to be injected into the page unfiltered through the inject1 URL parameter, just enough to add to the page this statement:
<script>alert=eval;</script>
or
<script>window.alert=eval;</script>
The second block embeds the value of the inject2 URL parameter in JavaScript as the parameter of alert() after some (imperfect) escaping. This one has unlimited length but goes into a seemingly harmless function – until somebody exchanges the function behind the identifier alert.
To see it work, put the PHP code on your web server and call it with a URL like this:
xss.php?inject1=<script>window.alert=eval;</script>&inject2=prompt('Did you expect this? (Yes/No)');
This works fine with Firefox 9 and, if the XSS filter is disabled, with IE 9. Safari 5.1.2 silently blocks the exploit, telling me about it only in debug mode. If you really need to, you’ll probably find an evasion technique against client-side XSS filters. IE seems to need window.alert, while Firefox is happy with just alert in inject1.
Safe and sorry
A common delusion in security engineering is the idea that one could secure a system by identifying items that need protection (assets), describing the ways in which they might be damaged (threats or attacks, which are not synonymous but often confused), and then implementing countermeasures or mitigations such that all, or the most common, or the most damaging threats are covered. The system thus becomes secure with respect to the threat model, or so the reasoning goes. This is the model underlying the Common Criteria, and it works fine as a descriptive model. To give an example from everyday life, consider a bicycle as an asset. If your bicycle gets stolen (the threat), your damage is the value of the bicycle plus any collateral damage that the loss may cause you, such as coming late to an appointment, having to pay for a taxi or public transport instead of riding your bicycle, and having to go to the gym for a workout instead of getting one for free on your way to work. The typical countermeasure against this threat is locking the bicycle to a fence, pole, or other appropriate object. Locking your bicycle reduces the risk of it being stolen. What could possibly go wrong? Besides the obvious residual risk of your countermeasures not being strong enough, this could go wrong:

This (ex-)bicycle was and remains properly locked, and no vulnerability in the lock or in anything the lock depends on has been exploited. Yet somebody made a fortune stealing bicycle parts, and somebody else lost a bicycle to an attack. What’s the problem? The problem is the gross simplification in the asset-threat-countermeasure model, which neglects three important factors:
- Adaptive adversaries. A countermeasure does not oblige the adversary to stick to the original attack plan that the countermeasure is targeted at. Security measures change the threat model. They don’t force the adversary to give up, they force the adversary to change strategy and tactics.
- The victim’s loss and the adversary’s gain are not necessarily the same. In the case of the bicycle above, the lock may reduce the attacker’s gain to the black market value of the removed parts. The victim’s loss is still one bicycle.
- Asset dependencies. Thinking of a bicycle as one asset is an abstraction. A bicycle is really a collection of assets—its parts—and an asset by itself. Such dependencies, nested assets in this case, are common.
The bicycle lock, it turns out, is not really a bicycle lock, it’s a bicycle frame lock. It protects only one sub-asset of the bicycle, and an economically motivated adversary can make a gain that seems worth the risk without breaking the lock.
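A toy calculation makes the second and third point concrete; all prices and the discount factor are invented:
# Toy model of the bicycle example; all numbers are invented.
parts = {"frame": 150, "wheels": 120, "saddle": 40, "handlebar": 30}

def victim_loss(stolen):
    # Losing essential parts makes the whole bicycle unusable,
    # so the victim effectively loses the full value.
    return sum(parts.values()) if stolen else 0

def attacker_gain(stolen, black_market_discount=0.3):
    # The thief only realizes resale value for what was actually taken.
    return black_market_discount * sum(parts[p] for p in stolen)

stolen = ["wheels", "saddle", "handlebar"]  # frame lock in place, frame stays
print("victim loss:  ", victim_loss(stolen))    # 340
print("attacker gain:", attacker_gain(stolen))  # 57.0
The frame lock changes the attacker’s gain, but barely the victim’s loss.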
Prescriptive threat modeling—threat modeling done with the aim of finding a proper set of security features for a system—needs to take these issues into account. A good threat model anticipates changes in attacker behavior due to security measures. A good threat model considers not only the damage to the victim but also the gain of the adversary, as the latter is what motivates the adversary. And good security engineering is systematically biased towards security, always overestimating adversary capabilities and underestimating the effect of security measures.
The German eID system explained and discussed
We just finished reviewing the final, ready-to-print version of our article Electronic Identity Cards for User Authentication—Promise and Practice, which will appear in the upcoming issue of IEEE Security & Privacy Magazine (vol. 10, no. 1, Jan./Feb. 2012, DOI: 10.1109/MSP.2011.148). We outline how the German eID system works and discuss application issues. Here is our abstract:
Electronic identity (eID) cards promise to supply a universal, nation-wide mechanism for user authentication. Most European countries have started to deploy eID for government and private sector applications. Are government-issued electronic ID cards the proper way to authenticate users of online services? We use the German eID project as a showcase to discuss eID from an application perspective. The new German ID card has interesting design features: it is contactless, it aims to protect people’s privacy to the extent possible, and it supports cryptographically strong mutual authentication between users and services. Privacy features include support for pseudonymous authentication and per-service controlled access to individual data items. The article discusses key concepts, the eID infrastructure, observed and expected problems, and open questions. The core technology seems ready for prime time and government projects deploy it to the masses. But application issues may hamper eID adoption for online applications.
We think that the eID functions of government-issued ID cards will not replace everyday means of online authentication. With eID, there will be few new use cases for ID cards; eID just supports online versions of the traditional use cases. Most of the traditional use cases in Germany involve bureaucracy or legal requirements: legal protection for children and young persons, required identification of mobile phone users or bank account holders, or procedures of administrative bodies involving »Ausweis bitte!« (your ID, please) at some point. For those who followed the debate and rollout in Germany, there should be nothing new in our article, but it may come in handy as a reference for international audiences.
Our article will be in good company, as it will appear in a theme issue on authentication. If I read the preprints collection correctly, there will be a user study by Amir Herzberg and Ronen Margulies, Training Johnny to Authenticate (Safely), and an article by Cormac Herley and Paul van Oorschot, A Research Agenda Acknowledging the Persistence of Passwords (authors’ version). It seems sexy to many to declare the days of the password numbered—IBM just predicted password authentication would die out soon—but if I had to make a bet, I would follow Cormac and Paul.
The final version of our article will be paywalled by IEEE, but you can find our preprint with essentially the same contents on our website.
Life lessons from an ad man
(YouTube)
In a Word
Risk Assessment and Failure Analysis in Multiple Small Illumination Sources During Winter Conditions
Robert M. Slade, version 1.0, 20031217
Abstract:
In the author’s immediate socio-cultural environment, the unpacking, testing, placement, and maintenance of Christmas lights has been mandated to be "man’s work." (Women will, reluctantly, direct the placement of lights, since it is an observed fact that a man has all the artistic sensitivity of a Volkswagen. The car, not the automotive designers.) Therefore, despite the complete lack of any evidence of competence in domestic "handiness," or knowledge of electrical appliances, the author has found himself making an extensive, multi-year study of failure modes in different forms of lighting involving multiple small light sources.
