Category Archives: English

Posts in English

Lant*

My dear fellow attention whores,

Can we please stop inventing new bullshit terms for each and every variant of a variant of an attack scenario? Sure, at times we need new terms naming new concepts. Spam is an example, phishing is another. I don’t complain about these. What bothers me is our tendency to modify these general terms every time some slight modification of the concept appears: from spam to spit, from phishing to pharming, hishing, sishing, or wishing. Unlike the useful terms for generic concepts, these creations make our lives harder, not easier. They confuse us and others.

Why this rant? I got a call this morning from a journalist. She wanted to know everything about whaling. WTF? It turned out she really wanted to know everything about GhostNet and the security issues and attack strategies involved. But she didn’t say so and she seemed fixated upon whaling, which, I have to admit, sounds sort of cool and interesting. However, it led to a failure in communication. She failed to get across her actual need for information, confusing me with a meaningless term that she had picked up somewhere. I failed to get across to her that I do know my share of computer security and that I might actually be able to answer some of her questions.

Coining new terms isn’t wrong per se. But names are like money. Producing too many makes them all worthless.

Yours sincerely,

Sven

*) Letter-style rant. 😛

Security Annoyances

Stuart King has blogged a list of his top 5 information security annoyances:

  1. Security awareness programs
  2. Compliance = security
  3. Risk modelling
  4. Where are all the analysts
  5. It’s not my fault

By and large I agree with his list, not least because he seems to have annoyed a few of those who are overconfident about risk trivia and business school quadrant diagrams.

I’d like to add to the list two of my own favorite annoyances:

  1. Just make it hard – for the legitimate users and uses of a system. Attempts to improve information security often make it harder for the users of a system, the employees of an organization, to do their legitimate jobs. Sometimes this is unavoidable; we all know there is often a tradeoff between usability and security. The tradeoff turns into a fallacy where the primary impact of security measures is reduced usability while actual security remains more or less the same.
  2. Alice’n’Bob thinking. Academic researchers might be particularly prone to this: thinking and arguing in the Alice’n’Bob world of security textbooks as if it were a suitable model of the real world and the real security issues. It isn’t, which we sometimes forget when we name abstract entities after humans.

25 Random Facts

Mostly out of my head; feel free to correct me if you spot an error:

  1. d1901fa3176ffd7d77f2cd4dde125829
  2. The most common use for random data in computer security is as a secret: a password, a key, or a session ID for instance.
  3. Random data is the result of a random process.
  4. An ideal toss of a coin generates 1 bit of random data. [Update 2009-11-27: Or maybe not.] For n tosses there are 2^n different strings of n bits that could be the result.
  5. Producing random data is hard on a mostly deterministic machine, particularly if large quantities are needed.
  6. Data that looks random doesn’t have to be. For example, base64-encoded data looks random to some people although it isn’t.
  7. The output of a random process has certain statistical properties that can be tested.
  8. A deterministic process can produce output that exhibits the same statistical properties as a truly random process, for any finite number of bits.
  9. It is therefore impossible, for any finite string of bits, to prove that it is the result of a random process.
  10. It is, however, possible to detect and reject as non-random the results of some deterministic processes on the basis of statistical tests.
  11. A deterministic process designed to produce output that passes some of the statistical randomness tests is called a pseudorandom number generator (PRNG).
  12. There are different qualities of pseudorandom number generators. The simpler ones are unsuitable for security applications.
  13. Data from a PRNG that looks random according to statistical tests can still be entirely predictable if the PRNG is known and its internal state can be worked out (see the first sketch below).
  14. The whole point of using random data in security is to make guessing hard. The idea is that to guess a random secret an attacker will have to search the entire space of possible values in the worst case and half of it in the average case.
  15. Attackers cannot be prevented from guessing secrets. If there is an easier way than guessing, the attacker is assumed to prefer the easier way.
  16. The inner state of some PRNGs can be determined from a small number of subsequent output values.
  17. Up to a certain length and under a number of side conditions, the output of cryptographic primitives such as hash functions or ciphers is expected to exhibit the statistical properties of random data.
  18. Pseudorandom number generators for security purposes often combine a source of random data with cryptographic methods to produce an almost arbitrary amount of random data. True random data is used to seed the generator, which then expands the data.
  19. Potential sources of random data in a deterministic computer system are external events and digitized noise from analog circuits.
  20. Using digitized noise isn’t easy either. The process might be biased, limiting the randomness obtained.
  21. Humans are poor random number generators unless they execute a random process, e.g. repeatedly tossing a coin.
  22. Stream ciphers combine a (pseudo)random key stream with the data stream using bitwise XOR.
  23. A string of truly random data as a message has maximum entropy. Such random data thus cannot be compressed.
  24. Random data can also be used as test input into software components. This type of testing is called fuzzing.
  25. For an ideal block cipher, flipping a single bit in either the clear text or the key flips each bit of the cipher text with probability 0.5 (the second sketch below illustrates the same avalanche behaviour with a hash function).
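
Items 12, 13 and 16 are easy to see in practice. Here is a minimal sketch in Python (my choice of language, nothing in the list implies it; all names are made up for illustration). It contrasts a seeded general-purpose PRNG, whose entire output stream becomes predictable once the seed or internal state is known, with the operating system’s CSPRNG as exposed by the secrets module:

    import random
    import secrets

    # A general-purpose PRNG (the Mersenne Twister behind Python's random
    # module): statistically decent output, but fully determined by its seed.
    seed = 1234                          # imagine this leaked or was guessed
    victim = random.Random(seed)
    attacker = random.Random(seed)       # same seed -> same internal state

    victim_values = [victim.getrandbits(32) for _ in range(5)]
    guessed_values = [attacker.getrandbits(32) for _ in range(5)]
    print(victim_values == guessed_values)   # True: every "random" value predicted

    # A CSPRNG seeded from operating system entropy, meant for secrets such
    # as session IDs; its internal state is not supposed to be reconstructible.
    print(secrets.token_hex(16))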

Note that I’m sloppy in my use of the term random. Randomness can mean a lot of different things and I’m really making a few assumptions.
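
Speaking of assumptions, here is a second sketch, this time for items 17 and 25: flipping a single input bit of a good cryptographic primitive should flip each output bit with probability about 0.5. To keep the example dependency-free it uses SHA-256 from Python’s standard library rather than a block cipher, so read it as an illustration of the avalanche idea, not of item 25 verbatim:

    import hashlib

    def digest_as_int(data: bytes) -> int:
        # SHA-256 digest interpreted as a 256-bit integer, for easy bit counting.
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    message = b"the quick brown fox jumps over the lazy dog"
    flipped = bytes([message[0] ^ 0x01]) + message[1:]   # flip one input bit

    differing_bits = bin(digest_as_int(message) ^ digest_as_int(flipped)).count("1")
    print(f"{differing_bits} of 256 output bits changed")   # roughly 128 expected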

Railcar runs 40 kilometers without driver

In the morning hours of December 8th, 2008 a railcar left the train station of Merseburg, Germany and made a journey of 40 kilometers to Querfurt – all on its own, without a driver or any other person on board. The train finally came to a standstill on an uphill section of the railroad line. Luckily, nobody was injured and no damage was caused by this ghost train. Apparently the incident was noticed as soon as the railcar began to move, so the line could be closed to other railway traffic to avoid a collision.

There is no official investigation report so far and the exact causes of the incident remain unknown to the public. However, a programme broadcast on January 6th, 2009 by the TV station MDR mentioned some interesting details about the incident. The programme quoted an official from the Federal Railway Authority (Eisenbahn-Bundesamt, EBA) vaguely hinting at »technical faults« and »software problems« as the possible cause. More interesting than that was the description of the incident, which I enrich here with some additional information from Wikipedia:

The railcar was a Bombardier LVT/S, also known as series 672, operated by Burgenlandbahn, a subsidiary of Deutsche Bahn. It had arrived in Merseburg as one part of a multiple unit coming from Querfurt, as the trailing unit to be precise, which means that it had still been self-propelled during its previous service but remote-controlled from another unit. In Merseburg the train was separated with the intention of using the trailing railcar to operate another service back from Merseburg to Querfurt. This was the one that drove off without waiting for its driver.

The series 672 railcar is equipped with automatic couplers. This makes separating the units really easy: the driver in the leading car stops the multiple unit, pushes a button and drives off with a single railcar, leaving the trailing car behind. It seems that this can leave the former trailing car in a particular condition: with its engines running and the driver’s safety device (aka dead man’s switch) still disabled. According to the TV programme the dead man’s switch, which is mandatory for trains in Germany, has to be disabled in the trailing railcar(s) of a multiple unit where there is no driver to operate it.

This does not explain how the railcar started its uncommanded run in the first place, but it provides a plausible explanation of why the train was not stopped by safety mechanisms. The railroad line, being a secondary line, is not equipped with a train protection system. The dead man’s switch therefore was the only mechanism that could have stopped the railcar, but it failed to do so. To prevent further incidents of this kind, procedures have been changed to ensure that a driver is present in each of the cars when a multiple unit is being separated.

The incident illustrates how straightforward solutions to seemingly simple problems can be subtly wrong. The problem is that the railcar can be operated in different modes that require different configurations of its safety equipment. The system does not enforce, however, that the configuration matches the mode at all times. This may simplify the design of the technology, but it imposes upon the operator the need to deal with inconsistent states that might not become obvious until an incident like this occurs.
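
The point can be made more concrete with a deliberately abstract toy model. The sketch below is pure illustration in Python; every class and name is invented and has nothing to do with the actual vehicle software. It shows the difference between merely having safety-related configuration flags and enforcing, on every mode change and every traction request, that the configuration matches the operating mode:

    from enum import Enum, auto

    class Mode(Enum):
        LEADING = auto()    # driver on board, operating the unit
        TRAILING = auto()   # remote-controlled from another unit, no driver
        PARKED = auto()     # left behind, must not move

    class Railcar:
        """Toy model: consistency between mode and safety configuration is
        enforced by the system instead of being left to the operator."""

        def __init__(self) -> None:
            self.mode = Mode.PARKED
            self.deadman_enabled = True
            self.engines_running = False

        def set_mode(self, mode: Mode) -> None:
            self.mode = mode
            # Re-establish an appropriate configuration on every transition.
            if mode in (Mode.LEADING, Mode.PARKED):
                self.deadman_enabled = True
            if mode is Mode.PARKED:
                self.engines_running = False

        def apply_traction(self) -> None:
            if self.mode is not Mode.LEADING or not self.deadman_enabled:
                raise RuntimeError("traction refused: mode and configuration do not permit it")
            self.engines_running = True   # ... and off we go, with a driver

In the real incident, by contrast, the trailing car was apparently left with its engines running and the dead man’s switch disabled, and nothing checked that this configuration no longer matched its new role.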

What’s the harm?

Nice idea:

»This site is designed to make a point about the danger of not thinking critically. Namely that you can easily be injured or killed by neglecting this important skill. We have collected the stories of over 225,000 people who have been injured or killed as a result of someone not thinking critically.«

(http://whatstheharm.net/)

Thus far their collection comprises »3,284 people killed, 306,068 injured and over $2,815,114,000 in economic damages« caused by pseudoscience, alternative medicine, religion, belief in the supernatural, fears, misinformation and the like. And it’s not always harm that people inflict upon themselves, as for instance the section on expert witnesses shows.

British Humour (2)

»Mr Pelling said: “It is pleasing to see just how vigilant our police is at these times of heightened international political tension and the risk of terrorism here at home.

“I am glad my stop and search account as a white, middle-aged male shows that anyone can be suspected of, and questioned about, terrorism, regardless of race, creed or colour.”«

(Your Local Guardian: Andrew Pelling MP stopped by cops for taking pictures of East Croydon cycle path, via Crap Cycle Lanes of Croydon)

I am tired of those military metaphors in computer security

This is an all too common theme in computer security: shouldn’t we learn from the military? After all we are dealing with attack and defense, just as the military does, there are strategy and tactics in both fields, and military victory – or defeat – is just about as mysterious as computer security or insecurity. I think the military analogy is flawed and unlikely to take us anywhere. What we are doing is different in almost every important respect.

First of all, in computer security we do not engage in battle. A battle, conceptually, takes place between two (or more) of a kind. There may be considerable and obvious asymmetries, such as in guerrilla warfare or when one army has more advanced capabilities than another, but there is no intrinsic asymmetry. In particular, each party involved engages in both attack and defense and combinations thereof.

Drunken Computing

8.2.5.5 PARTIES Partition

Start of informative comment

The PARTIES Partition is a hidden partition on the hard drive that BIOS can use for additional storage space and as a virtual drive. In the PARTIES Partition, there is a small section called the BEER. Prior to turning control over to the PARTIES Partition, the BIOS must measure the BEER area into PCR[5].

The partition that is booted to in the PARTIES Partition must also have the initial IPL image code measured into PCR[4] prior to turning control over to this code.

End of informative comment

When executing, this is treated as IPL Code including the measurement of it even if the binary image is already measured into PCR[0].

TCG PC Client Specific Implementation Specification For Conventional BIOS Version 1.20 FINAL, Revision 1.00, page 62

(According to Google, this seems to have been there at least since 2003.)
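
For readers wondering what »measuring the BEER area into PCR[5]« means technically: a PCR cannot be written directly, it can only be extended, i.e. replaced by the hash of its old value concatenated with the new measurement. A minimal sketch of that chaining for a TPM-1.2-style SHA-1 PCR, in Python and purely for illustration (the BEER contents below are obviously a made-up stand-in; in reality the BIOS and the TPM do this work):

    import hashlib

    PCR_SIZE = 20  # a TPM 1.2 PCR holds one SHA-1 digest

    def extend(pcr: bytes, measured_data: bytes) -> bytes:
        # PCR_new = SHA-1( PCR_old || SHA-1(measured_data) )
        measurement = hashlib.sha1(measured_data).digest()
        return hashlib.sha1(pcr + measurement).digest()

    pcr5 = b"\x00" * PCR_SIZE                     # PCRs start out as all zeros
    beer_area = b"made-up stand-in for the BEER"  # not the real BEER contents
    pcr5 = extend(pcr5, beer_area)
    print(pcr5.hex())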