Always A Bad Day For Adversaries

Tag: vulnerability

Don’t Hold Developers Blameless for Security Woes – Security Must Begin In Development

“WordPress is a very popular platform for attackers to target,” he said. “There’s not a whole lot WordPress can do if people neglect to update their WordPress or plug-in software.”  Roel Schouwenberg, senior researcher for Kaspersky Lab

SC Magazine recently published a summary of the Kaspersky report identifying WordPress as the attack vector for intruders to establish infrastructure to spread the Flashback/Flashfake OSX Trojan.  The article included quotes from Roel Schouwenberg, senior researcher for Kaspersky Lab.

Mr. Schouwenberg (@schouw), as seen in the above quote, held the WordPress developers effectively blameless for their product being used as the primary infection vector for a 600,000+ member botnet – about 2% of all Mac OS installations.

Don’t get me wrong: if you choose to host your own publicly facing service on the Internet (such as this one), you have an ethical duty to ensure, to the best of your ability, that it is not being used for evil.  This includes regular updates and patching.

Additionally, Internet users share responsibility for ensuring the security of their own machines to the best of their ability.  This includes running effective anti-virus (yes, I’m talking to you, OSX users) and patching software (such as the Java Virtual Machine exploited by this trojan).

However, neither the operators of blogs (such as myself) nor the end-users of computers (again, such as myself) can be expected to reverse engineer every piece of software or service that we run in search of vulnerabilities and then patch them.  That is the sole responsibility of the software developer.  Therefore, developers hold at least some of the blame.

Additionally, patching is almost always a losing security proposition, because a patch is almost always a response to a vulnerability that is already known.  That is why zero-day exploits are still so effective.

The answer is to write secure software in the beginning.  I’m not saying that all software should be, or will be, free of vulnerabilities – I have written over 1,000,000 lines of code myself, and I fully understand what it means to write secure code and the incredible, perhaps impossible, challenge that entails.  But we must begin to hold software developers more accountable for vulnerabilities rather than placing all the blame on users.  The developers are the only ones who can effectively preempt attackers by preventing vulnerabilities in the first place and by requiring that a third-party plug-in architecture (such as WordPress’s) be secure as well (e.g. by sandboxing plug-ins).

Microsoft has done a tremendous job in this endeavor.  It hasn’t been easy, but over the last 10 years they have made significant progress, as shown by the reduced vulnerability and exposure surface of both their own software and third-party software running on their operating system.  It can be, and should be, done.

 

Why Malware Numbers Don’t Matter and What it Means for Security Accounting

McAfee recently reported over 75 million new malware samples detected in 2011. This number, while shocking, no longer matters as an absolute value. It also highlights a glaring flaw in network defense philosophy.

First, this number is calculated only from detected hashes.  Any change in the binary results in a new, unique hash, which means only a small change by the adversary is needed to create a “new” piece of malware.  A simple thought experiment: if there were 75 million malware samples, each differing from the next by only one byte, this method would count 75 million “unique” pieces of malware.
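
To make that concrete, here is a minimal sketch (hypothetical data, not taken from the McAfee report) of why hash-based counting inflates the total: flipping a single byte produces a completely different fingerprint, and therefore a “new” sample.

```python
import hashlib

# Two stand-in "samples" that differ by exactly one byte.
original = b"\x90" * 1024
variant = b"\x90" * 1023 + b"\x91"

# A hash-based census sees two entirely unrelated fingerprints...
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(variant).hexdigest())

# ...even though the binaries are all but identical.
matching = sum(a == b for a, b in zip(original, variant))
print(f"{matching}/{len(original)} bytes identical")  # 1023/1024
```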

Second, the number alone says nothing about the threat environment.  It does not illustrate the attack vectors, vulnerabilities, or exposures used by the malware; nor does it describe the danger or effectiveness of the various malware samples.  Maybe there is only one piece of malware and its 75 million varieties are all harmless.  75 million is now a very large number signifying nothing.

However, it does matter as a relative value showing the number of unique samples over time. For example, in 2007 unique malware samples rose 565% from the previous year [from A Brief History of Malware]. The velocity of unique malware samples detected in the wild (or the slope of the line if you prefer) is clearly increasing.

Why? It means that malware authors and operators are exploiting the primary network defense practice: default allow all – the blacklist.  Defenders are still stuck in the “allow all” mind-set, trusting everything except code that fails certain tests or exhibits certain behaviors.  To exploit this mind-set, an adversary only has to change their malware enough to bypass those filters (e.g. anti-virus).  As defenders update their blacklists, anti-virus signatures, and firewall rules, the malware authors make a small change, or simply re-pack and re-deploy the malware, bypassing the new rules and filters.
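
To make the asymmetry concrete, here is a minimal sketch of the default-allow model (a hypothetical hash-based blacklist, not any real anti-virus engine): once the adversary re-packs the binary, the known-bad lookup no longer matches and the sample is allowed to run.

```python
import hashlib

def fingerprint(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

# Default allow: run anything whose hash is NOT on the known-bad list.
blacklist = set()

def is_allowed(binary: bytes) -> bool:
    return fingerprint(binary) not in blacklist

malware_v1 = b"exploit payload v1"
blacklist.add(fingerprint(malware_v1))   # defender ships a signature

malware_v2 = malware_v1 + b"\x00"        # adversary re-packs: one byte appended

print(is_allowed(malware_v1))  # False -- the known sample is blocked
print(is_allowed(malware_v2))  # True  -- the "new" sample sails through
```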

For an adversary, changing their capability slightly and re-deploying it is a relatively inexpensive operation – particularly with pervasive exploit kits such as BlackHole.  For the defender, finding the new malware, developing a signature, and deploying that signature is relatively costly, leaving the security accounting on the side of the adversary.

To win this battle, the defender must switch to a known-good model, or “deny all with exceptions” – also known as the whitelist.  However, as we have seen, this simply adds a new target for the adversary: the whitelist itself.
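
For contrast, here is a minimal sketch of the known-good model under the same hypothetical setup: execution is denied unless the hash appears on the approved list, which is exactly why that list becomes such an attractive target.

```python
import hashlib

def fingerprint(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

# Default deny: only binaries whose hashes appear on the approved list may run.
whitelist = {fingerprint(b"approved application v1.0")}

def may_execute(binary: bytes) -> bool:
    return fingerprint(binary) in whitelist

print(may_execute(b"approved application v1.0"))  # True  -- known good
print(may_execute(b"exploit payload v1\x00"))     # False -- unknown, denied by default

# The adversary's new objective: get their own hash onto the whitelist.
```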
