Always A Bad Day For Adversaries

Author: Sergio Caltagirone

Sergio is the head of threat intelligence analysis at Microsoft and operates a global threat intelligence mission to discover, understand, track, and disrupt malicious activity against Microsoft and its customers. He is passionate about empowering defenders with timely and accurate information and moving information security from a reactive to a proactive posture.

Before Microsoft, he worked for the United States Government for 8+ years, where he built and led several threat intelligence missions.

Sergio grew up in Western Washington State and attended the University of Portland, where he received a Bachelor of Science in Computer Science and a degree in Theology, with a strong liberal arts background. He went on to the University of Idaho in 2005, where he received his Master of Science in Computer Science. At Idaho, Sergio expanded his education by becoming the first computer science student allowed to take law classes, focusing on legal topics connected to computer security.

Sergio has been very active in research and innovation, receiving his first patent for work with cognitive psychologists on graphical passwords (US20100169958), and has over 12 publications and a thesis on the topic of Active Response. He has gone on to work in several organizations doing computer and network security, forensics, and intrusion analysis.

Sergio is also the Chief Scientist of The Center for Cyber Intelligence Analysis and Threat Research working towards the goal of moving cyber from an art to a science.


Death by a Thousand Cuts: Proliferation, The Biggest Cyber Threat

The cyber community is always teeming with conversations about the newest/greatest threats, exploits, or malware.  Who remembers the Morris Worm?  Nobody but students of computer security and computing historians.  The sendmail and fingerd exploits were patched long ago, and RFC 1135 was written to memorialize the event.  Today, the Boston Museum of Science displays the Morris Worm source code stored on a floppy disk.  Over the last year, the conversation has been about Stuxnet.

Outsiders, and even insiders, think that we are only one exploit/worm/virus away from total destruction. However, any single rational-actor adversary with a capability, even an advanced and dangerous capability, is relatively limited in their damage potential.

The biggest cyber threat is not any one particular capability or vulnerability, but rather that we will die a death by a thousand cuts.  The biggest threat to the global network is the proliferation of offensive cyber tradecraft in the hands of many capable actors.

The U.S. General Accounting Office put the total damages of the Morris Worm at $100K – $10M.  This is small compared to the estimated $5.5B in worldwide damages caused by the ILOVEYOU worm in 2000.  Yet the tradecraft of self-replicating computer code began with the Morris Worm and proliferated into the ILOVEYOU worm 12 years later.

The danger of Stuxnet is not the worm itself; it is that others will learn tradecraft from it, such as more advanced malware droppers, the targeting of industrial control systems (e.g., SCADA), and better obfuscation techniques.  In total, Stuxnet will make networks harder to protect for years to come, long after the worm itself has become a museum display.

Hacker Motivations or Hackers Need To Eat Too

New research appears to raise questions over the conventional wisdom that pure nation-state cyberspies rarely, if ever, dabble in traditional financial cybercrime.  –  “Cybercriminal By Day, Cyber Spy By Night?” in Dark Reading on 1 March 2012

Dark Reading (@darkreading) reported from the RSA 2012 conference on an intriguing analytic correlation made by the Dell SecureWorks Counter Threat Unit between the RSA attackers and cyber financial crime.

The article is interesting in two ways.  First, it showcases some good analytic tradecraft correlating seemingly independent activities through adversary personas and infrastructure (in this case domain name registration).  Second, it asks the question: can a hacker be both a spy and cyber criminal?

The fact that an adversary will be using their skills for two purposes supposedly challenges “conventional wisdom.”  Normally, intrusion analysts work towards identifying the motivation of the hacker/attacker to gauge the best response (hopefully) and potentially offer clues to attribution.  There are many “conventional” terms we use to describe “hacker motivations”: script kiddies, espionage, hacktivism, black/white hat, etc. (see McAfee’s 7 Types of Hacker Motivations).

However, we often lean too heavily on our technical understanding and fail to acknowledge basic human motivations: safety, physiological needs (water, shelter, food, etc.), love, esteem, and self-actualization [see "A Theory of Human Motivation" by Abraham Maslow or the summary of motivation on Wikipedia].

Hackers, like all humans, are not above the basic motivations, which include greed.  This suggests a very simple hypothesis for why a cyber espionage actor would turn to cyber crime: financial gain.  Maybe they were not being paid enough in their espionage job and "moonlight" as cyber criminals, or they were simply contractors to multiple customers (a state vs. a criminal organization).  Money is a highly motivating factor.

I use the case of the "Wily Hacker" by Cliff Stoll (on the Reading List) while teaching to highlight that a hacker working day in and day out needs to eat, live, and provide for the most basic human motivations.  Therefore, it is perfectly reasonable to ask: if they are hacking all day, every day, how are they providing for these motivations?  Is somebody paying them to hack?  Are they living in their parents' basement?  Do they have a trust fund?  All of these are perfectly reasonable hypotheses with varying degrees of likelihood.  But they all lead to other questions of attribution and higher motivation.

If, in fact, "conventional wisdom" holds that espionage actors are not motivated by money to use their skills in other endeavors, then an even more fundamental understanding of human motivation contradicts that wisdom.  "Conventional wisdom" is simply another term for analytic assumption, and this again highlights how easily analytic assumptions cloud judgment.

A New Security Accounting or How to Win Against a Formidable Adversary

Many intrusion analysts are constantly plagued by a nagging thought that we are fighting a losing battle.  The problem only gets worse; it never seems to get better.  There are only more hackers, more damage, more vulnerabilities, more exploits, more toolkits, etc.  Every day we feel overwhelmed and under-resourced.

This feeling is not wrong.  Our instinct is correct.  We are fighting a losing battle.  There are many more adversaries than there are network defenders.  The adversary needs only one vulnerability, one exposure, or one exploit to win – while we need to find and patch all the vulnerabilities and exposures and prevent all exploits just to stay even.  We have already lost before even playing the game.
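
To see how lopsided this is, here is a toy model (mine, in Python, with assumed numbers): a defender who finds and patches each of 500 vulnerabilities with 99% reliability still loses almost every time.

    # Toy model of the defender's asymmetry (all numbers assumed):
    # each of n vulnerabilities is found and patched with probability p;
    # the adversary wins if even one survives.
    p, n = 0.99, 500
    adversary_wins = 1 - p ** n
    print(f"{adversary_wins:.1%}")  # ~99.3%: near-perfect patching still loses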

To win this battle, or bring it to a draw, we must initiate a new security accounting.  We must change our thinking.

First, we must accept loss.  We must understand that we will be penetrated and exploited.  We must focus on early detection, discovery, and the minimization and mitigation of loss.  We must not count every intrusion as a failure.  This is a game to be played over decades, not days.

Second, we must be truthful with ourselves and then truthful with others.  No more counting scans detected by the firewall as “millions of blocked intrusions.”

Third, we must stop accounting for security in terms of the money and resources we have spent to secure ourselves.  It is a self-centered and foolish accounting.  We must start focusing on how much we force the adversary to spend in money and resources to exploit our network – their dollars per megabyte of data stolen.  The larger we make that ratio, the more secure we become: (1) we will reduce the number of adversaries operating against us, because only the best-resourced will be able to profit from their operations, and (2) we will reduce the effectiveness of those adversaries that do operate against us by increasing their costs and decreasing their gains.
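
A sketch of what this adversary-centric accounting might look like (the function and numbers are invented for illustration):

    # Dollars the adversary must spend per megabyte of data stolen.
    # Defenses that raise this ratio price out all but the
    # best-resourced adversaries.
    def adversary_cost_ratio(adversary_spend_usd, data_stolen_mb):
        if data_stolen_mb == 0:
            return float("inf")  # they spent money and gained nothing
        return adversary_spend_usd / data_stolen_mb

    print(adversary_cost_ratio(50_000, 10_000))  # 5.0 $/MB: a cheap, profitable intrusion
    print(adversary_cost_ratio(50_000, 10))      # 5000.0 $/MB: the defense imposed real cost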

Some may say that this is a losing proposition.  What about the adversary willing to spend $10 million to exploit my network and steal my intellectual property, when I can only spend $1 million to protect it?  You're screwed.  The adversary obviously values your data more than you do.  The only hope is to band together with other targets/victims, combining forces in the hope of creating parity with the adversary.

An analogy: if one country is willing to spend billions to create a military to defeat another country, and the target country cannot even spend millions in defense, it will likely lose.  Its only hope is to create an alliance with other countries in the hope of (1) creating an effective combined force to battle the adversary or (2) pulling other levers (e.g., trade, economics, etc.) that cost the hostile country enough to make the attack worthless.

In the end, it comes down to a relationship built on value.  As long as the adversary is making a profit (however that is defined) there is no incentive for them to stop.

There are two types of victims: victims of opportunity and victims of interest.

Victims of opportunity are victims because they were available to the adversary at the right time but possess little value.  If the adversary were to lose access, they would likely not notice.  These organizations can use standard security practices to protect themselves, reducing their likelihood of becoming a victim.  Example: a home computer infected with a botnet.

Victims of interest are victims because they possess great value to the adversary.  If the adversary were to lose access to the victim, it would be noticed, and the adversary would spend resources regaining and maintaining that access.  The adversary will not stop victimizing the organization until the relationship between adversary and victim changes and the victim no longer provides enough benefit to justify the cost of exploitation.  Example: Advanced Persistent Threats.

Therefore, a security strategy must be based on the adversary/victim relationship.  The only way to win against a formidable adversary – one in a considerably better position than yourself – is to make it too costly for them to wage war.  (NOTE: the cost will be different for each adversary; some may be sensitive to finances while others may be sensitive to jail or the loss of freedom.)

The Art of Intrusion Analysis and Incident Response

"In every block of marble I see a statue as plain as though it stood before me, shaped and perfect in attitude and action. I have only to hew away the rough walls that imprison the lovely apparition to reveal it to other eyes as mine see it."  – Michelangelo (1475–1564)

Michelangelo was once asked how he came to carve such a beautiful statue of an angel in the Basilica of San Domenico. His response appears above.

I have said many times that intrusion analysis and incident response are more art than science.  Expertise comes from experience rather than book knowledge, and gut instinct is invaluable – as likely to be correct as an educated guess.

I then wondered: if intrusion analysis is an art, to which art should it be compared?

I recalled this, one of my favorite artistic quotes, and how aptly it applies to the domain of intrusion discovery and analysis.

In many ways, the answers we analysts seek are in the data.  It only requires us to "hew away the rough walls" of unimportant data, revealing the activity of interest.

I teach new analysts that to find the new and unknown, you must distinguish the old and known, remove it, and what remains is what you are seeking.
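
A minimal sketch of that idea in Python (the addresses, ports, and protocols are invented; real pipelines would first reduce events to comparable keys):

    # Remove the old and known; what remains deserves an analyst's eyes.
    observed = {("10.0.0.5", 443, "tls"),
                ("10.0.0.9", 53, "dns"),
                ("10.0.0.7", 6667, "irc")}
    known = {("10.0.0.5", 443, "tls"),
             ("10.0.0.9", 53, "dns")}

    unknown = observed - known  # set difference: the unexplained residue
    print(unknown)  # {('10.0.0.7', 6667, 'irc')} - start hewing here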

Analysts Should Expect the Unexpected

Diocyde tweeted a good but older (2009) article about hiding malicious executable code (malware) in the Windows Registry: Malware IN the Registry a.k.a. if it can’t be Done, Why Am I Looking At it?

The post is a good description of what almost all incident responders/intrusion analysts encounter regularly: Is that right?  How could that be?  Hey, [analyst sitting in the next cubicle], is this what I think it is?

After 9 years of intrusion analysis in various organizations, I can say that this happens to me very regularly.  Now, I expect the unexpected.  Not much surprises me any longer.

However, it is fun to watch a new analyst come upon these things while I nonchalantly describe what they are seeing and how cool it is.  They are always astonished at the tactics of the adversaries and the lengths to which they will go.

While we should always expect the unexpected, we should never lose our respect for the adversary and their ability to find new ways to astound and confound us.  For when we lose that, we blind ourselves.

Why Malware Numbers Don’t Matter and What it Means for Security Accounting

McAfee recently reported over 75 million new malware samples detected in 2011. This number, while shocking, no longer matters as an absolute value. It also highlights a glaring flaw in network defense philosophy.

First, this number is only calculated from all detected hashes. Any change in the binary results in a new, unique hash. This means that only a small change by the adversary is necessary to produce a "new" piece of malware. A simple thought experiment: if there were 75 million malware samples, each with only one byte of difference between them, this method would count 75 million "unique" pieces of malware.
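
A minimal sketch of the thought experiment in Python (the bytes stand in for a real binary):

    import hashlib

    # A stand-in for a malware binary (illustrative bytes only).
    original = bytearray(b"MZ\x90\x00 pretend this is an executable")
    variant = bytearray(original)
    variant[10] ^= 0x01  # flip one bit of one byte

    print(hashlib.sha256(bytes(original)).hexdigest())
    print(hashlib.sha256(bytes(variant)).hexdigest())
    # The digests share nothing obvious, so hash-based counting treats
    # the variant as an entirely "new" piece of malware.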

Second, the number alone says nothing about the threat environment. It does not illustrate the attack vectors, vulnerabilities, or exposures used by the malware; nor does it describe the danger or effectiveness of the various malware samples. Maybe there is only one piece of malware and its 75 million varieties are all harmless. 75 million is now a very large number signifying nothing.

However, it does matter as a relative value showing the number of unique samples over time. For example, in 2007 unique malware samples rose 565% from the previous year [from A Brief History of Malware]. The velocity of unique malware samples detected in the wild (or the slope of the line if you prefer) is clearly increasing.

Why? It means that malware authors and operators are exploiting the primary network defense practice: default allow all – the blacklist. Defenders are still stuck in the "allow all" mind-set: trust everything except code that fails certain tests or exhibits certain behavior. To exploit this mind-set, an adversary only has to change their malware enough to bypass these filters (e.g., antivirus). As defenders update their blacklists/antivirus/firewalls, the malware authors make a small change or re-pack and re-deploy the malware, bypassing the new rules/filters/etc.

For an adversary, changing a capability slightly and re-deploying it is a relatively inexpensive operation – particularly with pervasive exploit kits such as BlackHole. The cost for the defender to find the new malware, develop a signature, and deploy that signature is comparatively high, leaving the security accounting on the side of the adversary.

To win this battle, the defender must switch to a known-good model, or "deny all with exceptions" – also known as the whitelist. However, as we have seen, this simply gives the adversary a new target: the whitelist itself.
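
The two models differ by a single operator, but the consequences diverge. A hedged sketch in Python (no real product's logic; the hashes are placeholders):

    # Blacklist: default allow - only known-bad is stopped.
    def blacklist_allows(sample_hash, known_bad):
        return sample_hash not in known_bad

    # Whitelist: default deny - only known-good may run.
    def whitelist_allows(sample_hash, known_good):
        return sample_hash in known_good

    known_bad = {"hash-of-last-weeks-sample"}   # hypothetical AV signature set
    known_good = {"hash-of-approved-software"}  # hypothetical approved list

    repacked = "hash-of-repacked-sample"  # one byte changed, new hash
    print(blacklist_allows(repacked, known_bad))   # True: slips through
    print(whitelist_allows(repacked, known_good))  # False: unknown code denied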

Beware Occam’s Razor

Occam's Razor: a principle that generally recommends, from among competing hypotheses, selecting the one that makes the fewest new assumptions; the simplest explanation is the most plausible until evidence is presented to prove it false.  [Wikipedia]

Intrusion analysts face a unique problem: discovering and tracking adversaries who do not want to be discovered or tracked.  These adversaries take significant action to prevent their operations from being discovered [see Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection for examples].  Metasploit, a common exploitation framework, works hard to keep its tools and modules from detection and signature, including the creation of a Secure Sockets Layer (SSL) encrypted command-and-control channel.

In this environment, the cyber adversary is using Occam’s Razor against the analyst to elude detection.  Given a set of network traffic or events, an adversary wants any unintended observer to believe their traffic is normal and benign.  An adversary practices Denial and Deception in every packet.

Therefore, if an intrusion analyst relies too heavily on Occam's Razor to describe a network event as benign while a clever adversary is hiding their activities in that traffic, the adversary wins.

On the other hand, if an analyst does not employ Occam's Razor effectively in their work, every packet will look suspicious, wasting precious time on unimportant events.

Richards Heuer's preeminent work on the Psychology of Intelligence Analysis describes a very effective framework, the Analysis of Competing Hypotheses, for deciding which possible conclusion best fits a given set of facts and assumptions.  However, a common attack on Heuer's framework is that an adversary utilizing Denial and Deception can easily defeat the analyst, steering them toward the conclusion the adversary wishes rather than the one best describing the activity.
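
Heuer's method can be caricatured in a few lines: score each hypothesis by how much evidence is inconsistent with it and prefer the least-contradicted one.  This sketch is mine, not Heuer's, and the matrix values are invented; it also shows where deception bites.

    # Analysis of Competing Hypotheses, minimally: for each hypothesis,
    # count the evidence items inconsistent with it ("I"); the hypothesis
    # with the fewest inconsistencies survives.
    evidence_matrix = {
        "benign misconfiguration": ["C", "I", "I"],  # C = consistent, I = inconsistent
        "targeted intrusion":      ["C", "C", "I"],
    }
    scores = {h: marks.count("I") for h, marks in evidence_matrix.items()}
    print(min(scores, key=scores.get))  # -> targeted intrusion
    # An adversary practicing Denial and Deception plants evidence to turn
    # an "I" into a "C" for the hypothesis they want the analyst to pick.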

In the end, an intrusion analyst must use both a questioning and suspicious mind with a modicum of Occam’s Razor.

Cyber Moats? Really?!

Cyber Intrusions Into Air Force Computers Take Weeks to Detect

Can someone please explain a “cyber moat?”

We have lots of problems.  We are stuck in a forensic mind-set. Our defensive techniques don't evolve as fast as they should. We are outnumbered.

I never considered the problem of cyber moats.

It must be worse than I thought.

I really hope that we can get beyond 13th century castle defense analogies.

Don’t get me wrong, there are things to learn from physical fortification.  However, like castles in the time of cannon, there are times when offensive capabilities must force defenders to change their understanding.

I'm afraid that we are drawing too much from physical protection theory.  I find counterintelligence to be a better theoretical underpinning: assume you will be penetrated, reduce damage once exploited, and focus on prevention and early detection.

From my experience with senior leadership, I find that we continue to pander to their lack of understanding of the domain by drawing inferences and analogies from domains they understand better – namely physical protection.  We must escape this trap if we are to evolve.
