Is it just me, or does the DNS patch only seem to buy us more time?
At most this decreases the chance of a successful attack by a factor of about 65k (the extra ~16 bits of source-port entropy); at worst it doesn't help at all, because a NAT device in front of the resolver can rewrite the source ports back into something predictable; and if you're running a default MS <= Win2k3 OS, the factor is more like 2.5k.
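To put some rough numbers behind those factors (all assumptions on my part: 65,536 possible transaction IDs, roughly 64,000 usable source ports once they're randomised, and about 2,500 ports in the old default Windows dynamic range), a quick sketch:

    // Rough search-space comparison, pre- and post-patch (figures are assumed).
    const txids = 65536;              // 16-bit DNS transaction ID
    const randomPorts = 64000;        // assumed usable source ports after the patch
    const win2k3Ports = 2500;         // assumed default dynamic port range on older Windows

    const prePatch = txids;                    // attacker only has to guess the TXID
    const postPatch = txids * randomPorts;     // TXID and source port
    const postPatchWin = txids * win2k3Ports;  // smaller port pool, smaller win

    console.log(postPatch / prePatch);     // ~64,000x more guesses needed, best case
    console.log(postPatchWin / prePatch);  // ~2,500x on a default win2k3-era box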
Honestly, I haven't had time to play around with any of the exploits floating around, but given that one attempt costs at most 2 packets (and probably much closer to 1, since you can fire off lots of spoofed responses for each query you trigger), we can send 32k packets pretty quickly, and the figures here also seem to say it works pretty damn quickly.
I'm not going to do any figures, but given how network speeds seem to go constantly upwards (or do we want to speculate about an upper cap?), at some stage sending 65k times the amount of data is going to be bloody fast again, and this will be an issue all over again.
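OK, I said no figures, but here's a very rough sketch of the time side anyway (assumed figures: one spoofed response packet per guess, and you only expect to cover about half the space before you hit):

    // Expected time to poison at a few assumed packet rates.
    const guessesPrePatch = 65536 / 2;             // expected TXID guesses before a hit
    const guessesPostPatch = (65536 * 64000) / 2;  // TXID x source port after the patch

    for (const pps of [100_000, 1_000_000, 10_000_000]) {
      const pre = guessesPrePatch / pps;           // seconds
      const post = guessesPostPatch / pps / 60;    // minutes
      console.log(`${pps} pps: ~${pre.toFixed(2)}s pre-patch, ~${post.toFixed(0)} min post-patch`);
    }

At 100k packets per second the post-patch attack is still a multi-hour job; a couple of orders of magnitude faster and it's back down to a few minutes, which is exactly the worry.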
And if that ever happens, what's left to randomize in the query? Nothing, as far as I can tell, so is the hope that by then we'll have all switched to DNSSEC, or are we planning on altering the DNS protocol at that point?
Anyway, going in a completely different direction, I want to take issue with an idea that seems to pervade a lot of descriptions of the DNS bug: that poisoning random subdomains isn't an issue.
For your typical attack, yes, poisoning random subdomains is kind of useless; however, a lot of the web is held together by the assumption that absolutely everything in a domain's DNS tree is controlled by that domain and is to some extent trustworthy (think cookies and document.domain in JavaScript).
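As a concrete example of that trust (hypothetical names, and a deliberately rough model of the cookie rules that ignores public-suffix and path details): a cookie scoped to the parent domain gets handed to every host underneath it, including one whose records have just been poisoned.

    // Rough sketch of the cookie domain-matching rule: a cookie set with
    // Domain=.example.com is attached to requests for any host under example.com.
    function cookieSentTo(host: string, cookieDomain: string): boolean {
      const d = cookieDomain.replace(/^\./, "");
      return host === d || host.endsWith("." + d);
    }

    cookieSentTo("www.example.com", ".example.com");        // true - the real app
    cookieSentTo("random123.example.com", ".example.com");  // true - a poisoned subdomain
    cookieSentTo("example.evil.com", ".example.com");       // false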
Also, it seems odd that, given that the ability to poison random subdomains appears to be common knowledge to some people, Dan is nominated for another Pwnie award for XSS-ing arbitrary nonexistent subdomains. Sure, that bug gives you the ability to phish people more easily, but to me the biggest part of it seemed to be the fact that you could easily attack the parent domains from there.
Anyway, the patch, while having its limitations, seems to buy us some time with both these bugs, and in fact should buy us time with any bug that relies on forging responses, so that's always a good thing.
Wednesday, August 06, 2008
6 comments:
That Pwnie nomination was for a different exploit: the provider-in-the-middle attack that he released at ToorCon in April.
http://www.darkreading.com/document.asp?doc_id=151497
Overtaking random subdomains only helps the attacker when the main app also sets document.domain (which is quite often done in OTS JS frameworks, though). Dan was pretty sketchy on that particular topic in his ToorCon talk.
First of all, sorry if that post was a bit rambling; I had a fever when I wrote it, so my language skills weren't exactly at their best.
@mckt:
Yes, I know. That 'provider-in-the-middle' attack only let Dan XSS nonexistent subdomains. Even without Dan's latest attack method it was still possible to poison random, but not arbitrary, subdomains, and this was implied to be a known issue when people described it. Hence my question about why being able to XSS arbitrary, but only nonexistent, subdomains is worthy of a Pwnie nomination, when being able to poison DNS for a random, but not arbitrary, subdomain is treated with apathy.
@martin:
Not quite. Firstly, cookies get sent to subdomains, so if you can control a subdomain you get the cookies. Secondly, as I've been saying for a while now, document.domain locking in IE is badly broken: random pieces of JavaScript which have nothing to do with document.domain unlock it (including code found in the Google Analytics JavaScript). Firefox used to be in the same boat, but it *seems* to be better now (don't hold me to that).
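For anyone who hasn't seen the document.domain dance being referred to, this is roughly what the normal, opted-in relaxation looks like (hostnames are hypothetical, and this is just the documented behaviour, not the IE quirk itself):

    // Page A: https://www.example.com/ - the legitimate app (or a framework it loads) runs:
    document.domain = "example.com";

    // Page B: https://random123.example.com/ - a host an attacker resolves via poisoning -
    // frames page A with <iframe src="https://www.example.com/"></iframe> and runs the same line:
    document.domain = "example.com";
    // Both pages now share an effective domain, so page B can reach into page A's DOM:
    const app = document.querySelector("iframe")!.contentWindow!;
    console.log(app.document.cookie);  // readable, and the DOM is scriptable from here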