What is security testing?


The Sectest08 workshop, which I attended today, was of typical workshop size, so my plan to use the flipchart rather than PowerPoint worked out well.

The keynote speaker, David Litchfield, gave a pretty good introduction to the kind of security testing he does: bug-hunting of various kinds. He included a live demonstration of format string vulnerabilities, presented the notion of surety for what might be missed by overly formal approaches to security, and described security testing as exploring interesting avenues and evaluating their implications. His talk pretty much covered the issues and topics of my own world of security testing. He embraced the idea that (this type of) security testing might be an art, claiming that bug-hunting security testers were often also into artistic activities such as painting or photography, and that teams of testers would work best if they included both scientific and artistic personalities.
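For readers who haven’t met this bug class: a format string vulnerability arises when attacker-controlled input is passed to a printf-style function as the format argument. A minimal illustration of my own (not Litchfield’s demo):

    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2)
            return 1;

        /* Vulnerable: user input becomes the format string itself.
         * An input such as "%x %x %x" leaks stack contents, and "%n"
         * can even be used to write to memory. */
        printf(argv[1]);
        printf("\n");

        /* Safe: user input is passed as data, not as a format string. */
        printf("%s\n", argv[1]);

        return 0;
    }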

One of the points discussed afterwards was the question of what really distinguishes safety and security. Nobody had a definitive answer; my own guess is that safety-critical software is subject to much more controlled (i.e., known and understood) conditions. You can be pretty sure about the conditions and situations that, say, an aircraft may encounter, and nobody is going to reuse aircraft software to control nuclear power plants (I hope). In security, on the other hand, we frequently encounter changing environments, some changes being malicious, others non-malicious but poorly understood.

The next speaker, after my attempt to start a discussion with a bold claim, was Kaarina Karppinen. She presented an approach that combines static and dynamic analysis to find what she called architecture violations. I’m not sure how this would help me in the lab, but the Software Architecture Visualization and Evaluation (SAVE) tool she mentioned might be worth a look, at least a conceptual one.

After this we had a talk on “Test generation and execution for security rules in temporal logic”, from which I, personally, didn’t draw very much. This is not the presenter’s fault, though; I just wasn’t interested in the context, which seemed to be the testing of IT infrastructures for violations of security policies. (Note to self and those who know: it reminded me of eSI, although concept and focus are different.)

Closer to my work and interests was Christian Schaefer’s presentation. He spoke about runtime monitoring in mobile Java (J2ME) environments. The basic idea is to monitor how a downloaded (and possibly malicious) application interacts with its environment, and to enforce security policies there. These policies are supposed to be meaningful, e.g. limiting the number of connections that can be made, blocking URLs that must not be accessed, and so on. State and behavior can also be tracked across a series of program runs. (This somehow reminded me of SpoofGuard, which also has state-tracking security functions but works in a different way and to a different end: SpoofGuard attempts to detect deviations from former interactions that might not have been noticed by the user.) Where I see a problem is in policy development. The issues here may be similar to those encountered with desktop firewalls. How do we know what a program needs, and to what ends? How can we prevent security-critical functions from being accidentally disabled? Do we have the slightest idea, for an unknown program, what it should or should not be allowed to do? I’m afraid we don’t.
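To make the monitoring idea concrete, here is a minimal sketch of such a reference monitor, written by me in C rather than Java ME for brevity, with the policy values (limit, blocklist) invented for illustration. The monitor sits between the application and the platform’s networking API and decides, per request, whether the policy allows it:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical policy: limit and blocklist chosen for illustration. */
    #define MAX_CONNECTIONS 5

    static int connections_used = 0;
    static const char *blocked_hosts[] = { "ads.example.com", NULL };

    /* Decide whether the application may open a connection to `host`. */
    static int monitor_allow_connection(const char *host)
    {
        /* Policy 1: deny hosts on the blocklist. */
        for (const char **h = blocked_hosts; *h != NULL; h++) {
            if (strcmp(host, *h) == 0) {
                fprintf(stderr, "policy: host %s is blocked\n", host);
                return 0;
            }
        }

        /* Policy 2: limit the number of connections per run. (The talk's
         * monitor can also keep such state across runs; persisting this
         * counter to storage would extend the sketch in that direction.) */
        if (connections_used >= MAX_CONNECTIONS) {
            fprintf(stderr, "policy: connection limit reached\n");
            return 0;
        }

        connections_used++;
        return 1;
    }

    int main(void)
    {
        const char *requests[] = { "api.example.org", "ads.example.com" };
        for (int i = 0; i < 2; i++)
            printf("%s -> %s\n", requests[i],
                   monitor_allow_connection(requests[i]) ? "allowed" : "denied");
        return 0;
    }

Of course, this only shifts my question rather than answering it: someone still has to decide which hosts and limits go into the policy for a program nobody knows.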

After my actual workshop talk, Inger Anne Tøndel gave a short presentation on the SODA (Security-oriented Software Development Framework) project. The aim is to define processes and approaches that help average programmers write more secure code, e.g. by helping them learn from their organization’s prior experiences. Unfortunately she didn’t have very specific information regarding the questions I was interested in, namely how to document vulnerabilities in order to help others learn from them. This is often hard, as one needs a lot of context to really understand an issue, particularly an interesting one.

During Benoit Baudry’s talk I didn’t take many notes; it wasn’t my type of subject. Ana Cavalli got us into a discussion again using a rather specific example involving coffee machines and fraud scenarios, though her talk was really about Web applications, functional(?) models, and implementations that take shortcuts in implementing those models. Finally, Vianney Darmaillacq presented ideas he is working on based on attack graphs. What I really appreciated was an (untested) idea at the end that may simplify modeling or make it more realistic: if a transition from one state to another (an exploit) gives the attacker some capabilities while taking away others, ignore what is being taken away. I feel this reflects the reality of attacks: attackers never lose capabilities, they only gain them. The rule of considering only gains may therefore capture an important aspect of the attacker’s mindset.
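Computationally, the appeal of that rule is monotonicity: if exploits only ever add to the attacker’s capability set, reachability becomes a simple fixpoint over a growing set. A small sketch of my own (not Darmaillacq’s formalism; capabilities and exploits are made up for illustration):

    #include <stdio.h>

    /* Hypothetical capabilities, encoded as bits for illustration. */
    enum {
        CAP_NETWORK_ACCESS = 1 << 0,
        CAP_USER_SHELL     = 1 << 1,
        CAP_ROOT_SHELL     = 1 << 2,
        CAP_DB_READ        = 1 << 3,
    };

    /* An exploit needs some capabilities and grants others. Under the
     * "only gains" rule there is no "removes" field: capabilities are
     * never taken away, so the attacker's set grows monotonically. */
    struct exploit {
        unsigned needs;
        unsigned grants;
    };

    static const struct exploit exploits[] = {
        { CAP_NETWORK_ACCESS,              CAP_USER_SHELL },
        { CAP_USER_SHELL,                  CAP_ROOT_SHELL },
        { CAP_USER_SHELL | CAP_ROOT_SHELL, CAP_DB_READ    },
    };

    int main(void)
    {
        unsigned caps = CAP_NETWORK_ACCESS;  /* attacker's starting point */
        unsigned prev;

        /* Fixpoint: keep applying exploits until nothing new is gained.
         * Monotonicity guarantees termination, since caps can only grow. */
        do {
            prev = caps;
            for (size_t i = 0; i < sizeof exploits / sizeof exploits[0]; i++)
                if ((caps & exploits[i].needs) == exploits[i].needs)
                    caps |= exploits[i].grants;
        } while (caps != prev);

        printf("reachable capabilities: 0x%x\n", caps);
        return 0;
    }

Because the set only grows, no backtracking over lost capabilities is ever needed, which is exactly what the ignore-what-is-taken-away rule buys.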

In the end we had a brief discussion, after which Alexander Pretschner concluded that he still didn’t know what security was, really. There may be more workshops in the future.