In the past year or so, I've read many articles on Java malware (most recently Gregg Keizer's article at Computer World and Dustin's blog post at "Inspired by Actual Events"). The fact that many of the top offenders are based on vulnerabilities I found (CVE-2008-5353, CVE-2010-0840, CVE-2010-0094) makes me wince every time I read one of these (I don't consider myself one of the bad guys just because I like to break things).
None of the articles pointed out one important thing that might very well be distorting Java malware infection rates: the fact that Microsoft Security Essentials, or any other anti-virus, correctly identified the presence of a malicious piece of Java code on a user's machine does not mean that user was infected. Infection might not even be likely.
The way Java handles applets is that when there's an applet tag on a page, Java downloads the relevant code and saves it in the Java cache (...\username\Application Data\Sun\Java\Deployment\cache). The code is then executed from disk. If your Java is up to date, CVE-2008-5353, CVE-2010-0840, and CVE-2010-0094 all fail to escape the sandbox, ending in a SecurityException. But the offending code remains in the cache, where Security Essentials will find it and report it, inflating the statistics.
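To make that failure mode concrete, here is a minimal sketch (plain Java, not the actual exploit code) of what a blocked sandbox escape looks like from the code's side: under an applet-style SecurityManager, the privileged call the payload needs simply throws a SecurityException, even though the malicious .jar is already sitting on disk in the cache. The SecurityManager setup and the exec call are illustrative stand-ins, not the mechanics of the actual CVEs.

import java.io.IOException;

public class SandboxFailureDemo {
    public static void main(String[] args) {
        // Install a restrictive SecurityManager as a rough stand-in for
        // the applet sandbox (in a real browser the plugin sets this up).
        System.setSecurityManager(new SecurityManager());
        try {
            // A payload typically needs something like process execution
            // or unrestricted file access; the sandbox check denies it.
            Runtime.getRuntime().exec("calc.exe");
            System.out.println("Sandbox escaped - payload would run.");
        } catch (SecurityException e) {
            // On an up-to-date JRE the exploit never gets past this point,
            // even though the malicious .jar still sits in the cache.
            System.out.println("Blocked by the sandbox: " + e);
        } catch (IOException e) {
            System.out.println("exec failed: " + e);
        }
    }
}

The point of the sketch is only that "code present on disk" and "code that ran with full privileges" are two very different events; an AV scan of the cache can only see the former.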
I verified this on a Windows XP machine with the latest Java (Java 6 update 25), running the CVE-2008-5353 applet from Metasploit, which is detected by Security Essentials. Since Java was up to date, the exploit never succeeded. Nevertheless, Security Essentials correctly identified CVE-2008-5353 in the cache.
In conclusion, I do recognize that Java is notoriously badly updated by a big part of the user base, and we wouldn't see tons of Java malware if some of it weren't successful. But I think the Java malware infection rates are inflated by cases where users have malware in their cache that was never able to escape the sandbox of an up-to-date Java.
4 comments:
I suppose then they are only measuring "number of attacks" instead of "number of successful attacks". I'm not sure the number is inflated in that case.
It would be great to see both metrics, but that's a bit hairy. That is, if you can detect a successful attack you can probably prevent it instead.
Back in the day, we'd create patches that raised red flags, in addition to preventing exploitation, when a vulnerable condition was triggered. It's not as straightforward to do that these days, but it can provide a neat metric (a rough sketch of the idea is below).
Just my $0.02.
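For illustration, here is a hedged sketch of that "red flag" patch pattern. Everything here is hypothetical (the parser, the checked condition, and the reporting helper are made-up names, not from any real patch); the idea is just that the fixed code both rejects the dangerous input and reports it, so blocked exploitation attempts become a countable signal rather than a silent non-event.

// Illustrative only: a patched method that neutralizes the vulnerable
// condition and also raises a "red flag" so blocked attacks can be counted.
public class PatchedParser {

    public void parseRecord(byte[] record) {
        // Hypothetical old bug: a declared length larger than the buffer
        // caused an out-of-bounds write that exploits relied on.
        int declaredLength = readLength(record);
        if (declaredLength > record.length) {
            // Red flag: this input could only come from a malformed or
            // malicious source, so report it before rejecting it.
            reportExploitAttempt("oversized record length: " + declaredLength);
            throw new IllegalArgumentException("invalid record length");
        }
        // ... normal, safe parsing continues here ...
    }

    private int readLength(byte[] record) {
        // Hypothetical layout: first byte holds the declared payload length.
        return record.length == 0 ? 0 : (record[0] & 0xFF);
    }

    private void reportExploitAttempt(String detail) {
        // Stand-in for whatever telemetry or alerting channel is available.
        System.err.println("[ALERT] likely exploitation attempt: " + detail);
    }
}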
Thanks for the insight. I totally agree with you: if you say "number of attacks", I have no problem with that.
And I understand that it indeed must be pretty hairy to get the right numbers.
My pet peeve here is solely with the fact that most articles say things like "x% of PCs are running malware", "x% of PCs are infected" - those wordings very clearly imply successful attacks.
The term "infected" is being thrown around pretty loosely here.
Could MSFT have tried to show how many trojans were detected on the systems with the cached Java malware? In turn, would that final number be enough for MSFT to write about?
I played around with AVs and Metasploit recently.
CVE-2010-0094 is detected pretty well by AVs. Even if you use obfuscation like ProGuard, the detection rate stays good (more than 50% at VirusTotal).
Regarding CVE-2010-0840, the detection rate is pretty bad because of the nature of the vulnerability (trusted method chaining). And with some modifications the detection rate drops below 10%, which is really bad in terms of malware detection (but pretty good for a pentester ;-).