Always A Bad Day For Adversaries

Month: February 2012

Why Malware Numbers Don’t Matter and What it Means for Security Accounting

McAfee recently reported over 75 million new malware samples detected in 2011. This number, while shocking, no longer matters as an absolute value. It also highlights a glaring flaw in network defense philosophy.

First, this number is calculated only from detected hashes. Any change in the binary results in a new, unique hash, meaning the adversary needs only a small change to produce a “new” piece of malware. A simple thought experiment: if there were 75 million malware samples, each differing from the next by only one byte, this method would count 75 million “unique” pieces of malware.
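The thought experiment is easy to demonstrate. A minimal sketch (the byte strings below are stand-ins for real binaries, not actual malware) shows that flipping a single bit yields an entirely different SHA-256 hash, so hash-based counting registers two “unique” samples:

```python
import hashlib

# Hypothetical stand-in for a malware binary (starts with the PE 'MZ' magic).
original = b"MZ\x90\x00" + b"\x00" * 60 + b"payload"

# The adversary flips a single bit in the last byte.
variant = bytearray(original)
variant[-1] ^= 0x01

h1 = hashlib.sha256(bytes(original)).hexdigest()
h2 = hashlib.sha256(bytes(variant)).hexdigest()

# A one-bit difference produces a completely different hash,
# so a hash-based census counts two "unique" malware samples.
print(h1 == h2)  # False
```

The same holds for MD5 or SHA-1; any cryptographic hash is designed so that the smallest input change scrambles the output.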

Second, the number alone says nothing about the threat environment. It does not illustrate the attack vectors, vulnerabilities, or exposures used by the malware; nor does it describe the danger or effectiveness of the various samples. Perhaps there is only one piece of malware and its 75 million varieties are all harmless. 75 million is then a very large number signifying nothing.

However, it does matter as a relative value showing the number of unique samples over time. For example, in 2007 unique malware samples rose 565% from the previous year [from A Brief History of Malware]. The velocity of unique malware samples detected in the wild (or the slope of the line, if you prefer) is clearly increasing.

Why? Because malware authors and operators are exploiting the primary network defense practice: default allow all – the blacklist. Defenders are stuck in an “allow all” mind-set, trusting everything except code that fails certain tests or exhibits certain behavior. To exploit this mind-set, an adversary only has to change their malware enough to bypass these filters (e.g. AntiVirus). As defenders update their blacklists/AntiVirus/firewalls, the malware authors make a small change or re-pack and re-deploy the malware, bypassing the new rules/filters/etc.
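Here is a minimal sketch of the default-allow model described above (the sample bytes and hash-set blacklist are illustrative assumptions, not any vendor's actual engine). A trivial one-byte repack defeats the deployed signature:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

sample = b"MZ old payload"
blacklist = {sha256(sample)}  # the signature defenders have deployed

def default_allow(data: bytes) -> bool:
    """Blacklist model: everything runs unless its hash is known-bad."""
    return sha256(data) not in blacklist

repacked = sample + b"\x00"  # the adversary's trivial one-byte repack

print(default_allow(sample))    # False: the original sample is caught
print(default_allow(repacked))  # True: the repacked variant sails through
```

Real AntiVirus engines use richer signatures than raw hashes, but the asymmetry is the same: the defender must enumerate bad, while the adversary only needs to step outside the enumeration.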

For an adversary, slightly changing a capability and re-deploying it is a relatively inexpensive operation – particularly with pervasive exploit kits such as BlackHole. The cost for the defender to find the new malware, develop a signature, and deploy that signature is comparatively high, leaving the security accounting on the side of the adversary.

To win this battle, the defender must switch to a known-good model, or “deny all with exceptions” – also known as the whitelist. However, as we have seen, this simply gives the adversary a new target: the whitelist itself.
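Inverting the previous sketch shows the deny-all model, and why the list itself becomes the prize (again, the byte strings and hash-set whitelist are illustrative assumptions only):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

trusted_app = b"approved corporate binary"
whitelist = {sha256(trusted_app)}  # the only code permitted to run

def default_deny(data: bytes) -> bool:
    """Whitelist model: nothing runs unless its hash is known-good."""
    return sha256(data) in whitelist

repacked_malware = b"MZ old payload\x00"

print(default_deny(repacked_malware))  # False: blocked by default, no signature needed
print(default_deny(trusted_app))       # True: trusted code runs
```

Note the shift in incentive: repacking no longer helps the adversary, so the rational attack becomes poisoning the whitelist – subverting a trusted binary or the process that approves entries.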

Beware Occam’s Razor

Occam’s Razor: a principle that generally recommends selecting, from among competing hypotheses, the one that makes the fewest new assumptions; the simplest explanation is considered the most plausible until evidence is presented to prove it false. [Wikipedia]

Intrusion analysts face a unique problem: discovering and tracking adversaries who do not want to be discovered or tracked.  These adversaries take significant action to prevent their operations from being discovered [see Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection for examples].  Metasploit, a common exploitation framework, works hard to keep its tools and modules from detection and signature development, including providing a Secure Sockets Layer (SSL) encrypted command-and-control channel.

In this environment, the cyber adversary is using Occam’s Razor against the analyst to elude detection.  Given a set of network traffic or events, an adversary wants any unintended observer to believe their traffic is normal and benign.  An adversary practices Denial and Deception in every packet.

Therefore, if an intrusion analyst relies too heavily on Occam’s Razor and describes a network event as benign while a clever adversary is hiding their activities in that traffic, the adversary wins.

On the other hand, if an analyst does not employ Occam’s Razor effectively, every packet will look suspicious, wasting precious time on unimportant events.

Richards Heuer’s preeminent work, the Psychology of Intelligence Analysis, describes a very effective framework – Analysis of Competing Hypotheses – for deciding which conclusion best fits a given set of facts and assumptions.  However, a common criticism of Heuer’s framework is that an adversary employing Denial and Deception can defeat the analyst, leading them to select the conclusion the adversary wishes rather than the one best describing the activity.
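The mechanics of Heuer’s method can be sketched in a few lines. In Analysis of Competing Hypotheses, each piece of evidence is marked consistent, inconsistent, or neutral against each hypothesis, and the analyst prefers the hypothesis with the *fewest inconsistencies* rather than the most support. The evidence items and markings below are hypothetical, invented purely for illustration:

```python
# Hypothetical ACH matrix: evidence item -> {hypothesis: C/I/N marking}
# C = consistent, I = inconsistent, N = neutral.
evidence = {
    "SSL C2 channel to a rare domain":      {"benign": "I", "malicious": "C"},
    "traffic only during business hours":   {"benign": "C", "malicious": "N"},
    "payload entropy near 8 bits per byte": {"benign": "I", "malicious": "C"},
}

HYPOTHESES = ("benign", "malicious")

def inconsistency_score(hypothesis: str) -> int:
    """Heuer's key move: rank hypotheses by how much evidence refutes them."""
    return sum(1 for marks in evidence.values() if marks[hypothesis] == "I")

# Select the hypothesis with the fewest inconsistencies.
best = min(HYPOTHESES, key=inconsistency_score)
print(best, {h: inconsistency_score(h) for h in HYPOTHESES})
```

This also makes the Denial and Deception criticism concrete: an adversary who controls what the analyst observes can plant evidence that is inconsistent with the true hypothesis, steering the minimum toward the conclusion the adversary prefers.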

In the end, an intrusion analyst must use both a questioning and suspicious mind with a modicum of Occam’s Razor.

Cyber Moats? Really?!

Cyber Intrusions Into Air Force Computers Take Weeks to Detect

Can someone please explain a “cyber moat?”

We have lots of problems.  We are stuck in a forensic mind-set. Our defensive techniques don’t evolve as fast as they should. We are outnumbered.

I never considered the problem of cyber moats.

It must be worse than I thought.

I really hope that we can get beyond 13th century castle defense analogies.

Don’t get me wrong, there are things to learn from physical fortification.  However, like castles in the time of cannon, there are times when offensive capabilities must force defenders to change their understanding.

I’m afraid that we are drawing too much from physical protection theory.  I find counterintelligence to be a better theoretical underpinning – assume you will be penetrated, reduce damage once exploited, and focus on prevention and early detection.

From my experience with senior leadership, we continue to pander to their lack of understanding of the domain by drawing inferences and analogies from domains they understand better – namely physical protection.  To move forward, we must escape this trap.
