Does fuzzing really work?
But that’s beside the point. There was a post made to the dailydave mailing list titled “Does Fuzzing Really work?” (http://lists.immunitysec.com/pipermail/dailydave/2006-September/003551.html) in which Aviram Jenik states the following:
“There’s a lot of talk lately on whether fuzzing can actually be used to find
vulnerabilities – and more importantly, reliably rule out the existence of
unknown vulnerabilities. … our experience shows it can.”
They’re wrong. They base this on a bunch of numbers they got from their own fuzzer (I’m guessing Aviram is one of the people who work on the fuzzer). Some of those numbers are:
“The FTP protocol has 310 “scenarios” of valid FTP sessions. If you try to overflow each time a different part of the command in every scenario you get a little over 12M attack combinations. If you use some of our nifty beSTORM 2.0 optimizations you get to 70,679 attack combinations.
FTP is too simple you say? With more complex protocols like SIP you have >15,000 scenarios and something like 40,680,459 attack vectors after optimization.”
See, their numbers are wrong. I don’t know the exact numbers either; no one does. They’re only testing what they have tests for (duh!). But one of the things I figured out when I started to play with fuzzers is that if you take any given fuzzer and make some changes (add a test for a length of 0 in a certain protocol, for example), and hence change the numbers, you find new 0day. So their numbers are incomplete.
For example, in their HTTP tests I’m sure they have code that generates URLs. But they probably forgot something. Does it also generate IPv6 URLs? If so, does it also generate IPv6 URLs with “%” (http://[::1]%eth0/ for example)? Maybe the network device name parser has a trivial stack smash. Does it have a list of all URL schemes that might be useful to fuzz for one particular HTTP(-alike) daemon? What about incorrectly formed encoded URLs? See http://isec.pl/vulnerabilities/isec-0020-mozilla.txt for example. I don’t know of any fuzzer out there that efficiently fuzzes these kinds of bugs except for my own. I once wrote a URL fuzzer that does all of these things and many more; it was pretty big, but more to the point, I’m sure I forgot something.
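To make the point concrete, a URL-variant generator along these lines is only a few lines of code. This is just a sketch: the scheme, host, and path lists are my own examples (including the IPv6-with-interface-name form above), not anyone’s actual test set, and by my own argument any such fixed list is incomplete.

```python
import itertools

# Sketch of a URL-variant generator. The point is not this particular
# list, but that whatever list you write down, you forgot something.
schemes = ["http", "ftp", "file"]
hosts = [
    "example.com",
    "[::1]",            # IPv6 literal
    "[::1]%eth0",       # IPv6 literal plus interface name (zone id)
    "%2e%2e",           # percent-encoded garbage where a host should be
]
paths = ["/", "/%", "/%zz", "/" + "A" * 4096]  # malformed escapes, long path

def urls():
    # Cartesian product of every scheme, host, and path variant.
    for scheme, host, path in itertools.product(schemes, hosts, paths):
        yield f"{scheme}://{host}{path}"

for u in itertools.islice(urls(), 5):
    print(u)
```

Even this tiny example yields 48 combinations; every list you add multiplies the count, which is why vendor “total number of attack vectors” figures are really just a count of what that vendor thought of.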
“Sounds scary at first, but a SIP server capable of handling
500 requests per second would take only 22 hours to test, …”
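Their arithmetic does check out, for what it’s worth; a quick sanity check of the quoted figure:

```python
# Sanity check of the quoted claim: 40,680,459 SIP attack vectors
# at 500 requests per second.
vectors = 40_680_459
rate = 500                      # requests per second
hours = vectors / rate / 3600
print(round(hours, 1))          # prints 22.6 -- matching their "22 hours"
```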
Assuming their numbers are complete (which they’re not!), this means you can test all possible combinations that matter in 22 hours. How nice. In reality, though, you know there are things you missed. To compensate for this, one of the things I like to do is every once in a while substitute one thing I’m fuzzing with something random, and do this in an endless loop. Some people seem to think this is useless (arrogantly assuming their numbers are complete), but that’s not true. I’ve had success with this in the past: finding bugs after days or even weeks, where the trigger is something I totally didn’t anticipate. A nice example here is a bug in the Linux TCP/IP stack that Dave Jones found with ISIC earlier this year: http://kernelslacker.livejournal.com/35361.html
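The substitute-one-thing-with-random-data loop I’m describing is roughly the following. This is a sketch, not a real protocol implementation: the field names and the `send` callback are placeholders.

```python
import random

# Sketch of the endless-loop strategy: take an otherwise-valid request,
# pick one field at random, and replace it with random bytes. FIELDS and
# send() are placeholders standing in for a real protocol session.
FIELDS = {"user": b"anonymous", "password": b"guest", "path": b"/pub"}

def random_bytes():
    # Random length, random content -- including lengths and byte values
    # no hand-written test list would ever include.
    return bytes(random.randrange(256) for _ in range(random.randrange(1, 4097)))

def mutate_once(fields):
    mutated = dict(fields)
    victim = random.choice(list(mutated))
    mutated[victim] = random_bytes()  # the part you totally didn't anticipate
    return mutated

def fuzz_forever(send):
    while True:  # days, or weeks -- that's the point
        send(mutate_once(FIELDS))
```

It looks dumb next to a carefully enumerated scenario list, but it covers exactly the inputs nobody enumerated, which is where the leftover bugs live.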
Quoting even more:
“My point is to those people who mock fuzzers – you either tried the wrong
kind, or you tried them a long time ago. I’m not saying that buffer overflows
are suddenly obsolete (don’t delete that ZERT bookmark just yet!). But
nowadays there is no reason for an FTP server to come out with buffer
overflows; there’s just no excuse.”
If their numbers aren’t complete there might be fuzzable things they’re not fuzzing, and hence there is an excuse. But let’s assume they’re testing everything they can test. Even then there are still issues their fuzzer might not have caught. For example, earlier this year ISS X-Force released an advisory for a remote signal race in Sendmail (discovered by Mark Dowd). That one would be a bitch to fuzz, and I can’t think of a way to do it in an efficient manner.
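A back-of-the-envelope calculation shows why. The numbers here are made up purely for illustration: suppose the racy window is open for a microsecond inside a handler that runs for ten milliseconds.

```python
# Hypothetical numbers, just to show the shape of the problem:
# a 1-microsecond vulnerable window inside 10 ms of handler execution.
window = 1e-6       # seconds the race is actually open
handler = 10e-3     # seconds one request takes to process
p_hit = window / handler
print(round(1 / p_hit))   # prints 10000 -- expected tries to land one hit
```

And that’s ten thousand tries per code path, with no crash to tell you when a signal landed close but not close enough. That kind of bug falls to source auditing, not to throwing packets at a port.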
Then again, maybe this was all just to advertise their new fuzzer.