Is it just me, or does the DNS patch only seem to buy us more time?
At best this decreases the chance of a successful attack by a factor of about 65k; at worst it doesn't help at all, because NAT devices can rewrite the randomized source ports; and if you're running a default MS <= Win2k3 OS, the factor is more like 2.5k.
Honestly, I haven't had time to play around with any of the exploits floating around, but given that one attempt costs at most 2 packets (and probably much closer to 1, since you can try lots of forged responses per triggering query), we can send 32k packets pretty quickly, and the figures here also seem to say it works pretty damn quickly.
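As a rough back-of-the-envelope sketch (my own illustrative numbers, not figures from any published exploit):

# Back-of-the-envelope: expected spoofing effort before and after the patch.
# All figures here are illustrative assumptions, not measurements.
txids = 2**16        # 16-bit transaction ID: 65,536 possible values
ports = 2**16        # roughly the extra factor full port randomization buys
pps   = 50_000       # assumed forged packets per second an attacker can push

# With success probability 1/N per forged packet, you expect on the
# order of N packets before a guess lands (geometric distribution).
puts format("pre-patch:  ~%.1f seconds", txids.to_f / pps)               # ~1.3 s
puts format("post-patch: ~%.1f hours", txids.to_f * ports / pps / 3600)  # ~23.9 h

Crank the assumed packet rate up a couple of orders of magnitude and the post-patch figure collapses back to minutes, which is exactly the worry below.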
I'm not going to chase the figures any further, but given how network speeds seem to go constantly upwards (or do we want to speculate about an upper cap?), we're going to reach a point where sending 65k times the amount of data is going to be bloody fast again, and this will be an issue all over again.
And if that ever happens, what's left to randomize in the query? Nothing, as far as I can tell; so is the hope that by then we'll all have switched to DNSSEC, or are we planning on altering the DNS protocol at that point?
Anyway, going in a completely different direction, I want to take issue with an idea that seems to pervade a lot of descriptions of the DNS bug: that poisoning random subdomains isn't an issue.
For your typical attack, yes, poisoning random subdomains is kind of useless; however, a lot of the web is held together by the assumption that absolutely everything in a domain's DNS tree is controlled by that domain and is to some extent trustworthy (think cookies and document.domain in JavaScript).
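To make that concrete: a page served from any poisoned subdomain can, for instance, plant a cookie on the whole parent domain (hypothetical names throughout):

HTTP/1.1 200 OK
Set-Cookie: session=attacker-chosen-value; Domain=.bank.example; Path=/

Every subsequent request to the real site then carries the attacker's cookie, and a script on the poisoned host can likewise set document.domain to the parent domain and interact with any page from the real site that has done the same.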
Also, given that the ability to poison random subdomains seems to be common knowledge to some people, it seems odd that Dan is nominated for another Pwnie award for XSS-ing arbitrary nonexistent subdomains. Sure, that bug gives you the ability to phish people more easily, but to me the biggest part of it seemed to be the fact that you could easily attack the parent domains from there.
Anyway, the patch, while having its limitations, seems to buy us some time with both these bugs, and in fact should buy us time with any bug that relies on forged responses, so that's always a good thing.
Sunday, August 03, 2008
Is framework-level SQL query caching dangerous?
I was in a bookshop a few months ago and picked up a book about Ruby on Rails, and though I sadly didn't buy it (having already bought more books than I wanted to carry) and have forgotten its name, there was an interesting gem in there that stuck in my head.
Since 2.0, Ruby on Rails' main method of performing SQL queries (ActiveRecord) caches the results of SELECT queries by default (it does string-level matching of queries, so they need to be completely identical rather than just functionally identical) until a relevant update query updates the table.
I haven't had a chance to delve into the source to see how granularly the cache is invalidated (i.e. if an update doesn't touch any of the rows sitting in the cache, is the cache still invalidated because the table was updated?), but in any case it still seems dangerous.
Caching data that close to the application means that unless you know about the caching and either disable it or ensure that everything else that might update the data goes through the same cache, you can end up with an inconsistent cache.
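I haven't verified this against the Rails source, but as a toy model of the behaviour described above (illustrative Ruby, not ActiveRecord's actual implementation):

# Toy model of a string-keyed SQL query cache, loosely in the spirit of
# ActiveRecord's query cache; an illustrative sketch, not Rails code.
class ToyQueryCache
  def initialize(connection)
    @connection = connection
    @cache = {}
  end

  # String-level matching: two SELECTs must be byte-for-byte identical
  # to hit the same cache entry.
  def select(sql)
    @cache[sql] ||= @connection.execute(sql)
  end

  # A write through this layer clears the cache; a write that bypasses
  # this layer (another app, an ATM backend, ...) clears nothing.
  def write(sql)
    @cache.clear
    @connection.execute(sql)
  end
end

The dangerous case is exactly a write that never goes through write(): nothing clears the hash, so select() keeps serving the stale row.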
Assuming that flushing the cache is a fairly granular operation (or that there is very little activity on the table, or users are stored in separate tables, or something similar), it seems feasible that if there is any other technology interacting with the same database, it would be possible to ensure that dodgy data is cached in the Rails app.
A possible example would be a banking site where an SQL query like

SELECT * FROM accounts WHERE acct_id = ?

is executed when you view the account details or attempt to transfer money. If there were another interface for withdrawing money (let's say an ATM; it doesn't even have to be web-based) that bypassed Rails' SQL cache, it is theoretically possible that we would be able to withdraw the money while the Rails app still thinks we have money left.
At this point, of course, things become very implementation-specific, though there are two main ways I could see this going (maybe there are more), both sketched in the toy example after the list:
1. We could be lucky: the application itself subtracts the amount we're trying to transfer from the (stale, cached) balance and puts that value directly into an SQL query, so it's as if we never withdrew any money.
2. We could be unlucky: an SQL query of this variety is used:

UPDATE accounts SET balance = balance - ? WHERE acct_id = ?

in which case the only thing we gain is the ability to overdraw an account we usually wouldn't be able to; not particularly useful unless you've already stolen someone else's account and want to get the most out of it, but it does still allow us to bypass a security check.
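To spell the race out end-to-end (again a toy model with made-up numbers, standing in for Rails plus a separate ATM backend):

# Toy walkthrough of the stale-cache scenario; all names and numbers
# are made up for illustration.
balance = 100                 # the real row in the database
cache   = {}                  # the Rails-side, string-keyed query cache

read = lambda do |sql|
  cache[sql] ||= balance      # SELECTs go through the cache
end

sql = "SELECT balance FROM accounts WHERE acct_id = 1"
read.call(sql)                # web app views the account: 100 is cached

balance -= 100                # ATM withdrawal bypasses the cache entirely

seen = read.call(sql)         # web app re-checks before a transfer: still 100
if seen >= 100                # the security check passes on stale data
  # Case 1: the app computes the new absolute balance from the stale value:
  #   UPDATE accounts SET balance = 0 WHERE acct_id = 1
  # and the ATM withdrawal is silently undone; we spent the money twice.
  # Case 2: the app uses a relative update:
  #   UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1
  # and the balance goes to -100; we "only" overdraw past the check.
end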
Does anyone have any thoughts on this? Is this too unlikely? Is no one ever going to use Rails for anything sensitive? Should I go and read the source next time before posting crap? Etc.
Or does anyone have knowledge of the Rails caching mechanism? I'll probably take a look soon, but given that I've been meaning to for months...