Always A Bad Day For Adversaries


The Cost of Bad Threat Intelligence

There is no doubt that threat intelligence is now “a thing.” At RSA 2015 I couldn’t help but notice how many vendor booths were hawking their relevance to threat intelligence.  I hear about a threat intelligence start-up almost weekly.  That is not surprising given venture capital is flowing and C-suite customers are now investing in “threat intelligence.”  Everyone wants a piece of the pie.

While market growth for threat intelligence produces innovation, it also produces negative by-products (welcome to capitalism).  The most concerning by-product is a reduction in threat intelligence quality.

A growing number of published threat intelligence reports contain inaccuracies and poor analysis.  A growing number of indicators across a variety of producers are either stale, irrelevant, or generate so many false positives as to be useless.

What so many fail to realize is the cost of poor-quality intelligence.  Here are some of those costs:

  • If a single threat intelligence-sourced alert generates $1000 worth of time to investigate a false positive, it is easy to see how that relatively small amount can multiply within an organization and across enterprises worldwide.
  • If an intelligence producer incorrectly categorizes a threat as APT (say, instead of cyber crime), an organization’s security response to the threat will be (and should be) different, likely involving a deeper investigation.  Again, this additional, and likely unnecessarily deep, investigation is costly in both time and resources.
  • Every poor-quality report costs time to read and digest – time that could be spent understanding a high-quality report.
  • Every poor association or correlation derails an analytic effort at an organization.
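
The first bullet above can be made concrete with a back-of-the-envelope model.  This is only a sketch – the per-alert cost, alert rate, and number of consuming organizations are all assumptions for illustration, not measured figures:

```python
# Illustrative model of how a single feed's false positives multiply.
# Every number here is an assumption, not measured data.

def false_positive_cost(cost_per_alert, fp_alerts_per_week, weeks, organizations):
    """Total analyst time (in dollars) burned investigating false positives."""
    return cost_per_alert * fp_alerts_per_week * weeks * organizations

# One organization: $1000 per false positive, 5 bad alerts a week, for a year.
single_org = false_positive_cost(1000, 5, 52, 1)       # $260,000
# The same low-quality feed consumed by 200 enterprises.
across_market = false_positive_cost(1000, 5, 52, 200)  # $52,000,000
```

The point of the sketch is the multiplication: a "small" $1000 mistake, repeated weekly and shipped to every subscriber, scales into tens of millions of dollars of wasted analyst time.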

Because organizational security resources are finite and already stretched thin, these mistakes, errors, and poor practices consume critical resources that could be spent on other problems and reduce the security of an organization.

Two market elements have caused this quality reduction:

  • A need to garner attention in the growing cacophony of the threat intelligence market, feeding a “first to publish” mentality which usually results in a “rush to publish.”
  • A lack of customer education resulting in poor evaluation of providers, thereby incentivizing the wrong aspects of threat intelligence – such as volume of indicators over their quality or relevance.

Obviously, only threat intelligence providers can solve the problem, but what pressures can help drive effective change?  Here are some:

  • Threat intelligence customers armed with evaluation criteria (particularly quality metrics) that help them leverage threat intelligence effectively without generating unnecessary costs – this will help create market drivers for higher quality
  • Industry must self-police bad intelligence by being honest with ourselves and each other.
  • Threat intelligence aggregation platforms should have quality assessment capabilities informing the intelligence consumer of potential problems (likewise, they are also in a position to highlight timely, relevant, and unique intelligence of great value)
  • Threat intelligence analysts trained in analytic tradecraft stressing quality and accepting an ethical duty
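
To illustrate the aggregation-platform point, here is a minimal sketch of a quality score a platform might compute per indicator.  The fields, weights, and one-year staleness cutoff are my assumptions for the example – no vendor’s actual schema or scoring model is implied:

```python
# Naive indicator quality score: fresher, more-confirmed indicators score
# higher. Weights and the 365-day staleness window are arbitrary assumptions.
from datetime import date

def indicator_score(first_seen, last_seen, hits, false_positives, today):
    """Return a score in [0, 1] combining freshness and observed precision."""
    age_days = (today - last_seen).days
    freshness = max(0.0, 1.0 - age_days / 365)      # fully stale after a year
    total = hits + false_positives
    precision = hits / total if total else 0.5      # no history -> neutral
    return round(0.5 * freshness + 0.5 * precision, 3)

score = indicator_score(date(2015, 1, 1), date(2015, 3, 1),
                        hits=8, false_positives=2, today=date(2015, 4, 1))
```

Even a crude score like this lets a consumer sort a feed by likely value instead of treating every indicator as equally actionable – which is exactly the market pressure toward quality the list above argues for.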

Security professionals practicing threat intelligence must understand the implications of mistakes and poor analysis.  Bad intelligence can and does decrease the security effectiveness of an organization. Therefore it is an ethical duty of the threat intelligence practitioner to reduce errors. Threat intelligence is difficult – intelligence by definition attempts to illuminate the unknown and works by making judgments with imperfect data – errors are natural to the domain.  But, with proper practices and procedures bad intelligence can, and must, be minimized.

Death by a Thousand Cuts: Proliferation, The Biggest Cyber Threat

The cyber community is always teeming with conversations about the newest/greatest threats, exploits, or malware.  Who remembers the Morris Worm?  Nobody but students of computer security and computing historians.  The sendmail and fingerd exploits were long patched and RFC 1135 written to memorialize the event.  Today, the Boston Museum of Science displays the Morris Worm source code stored on a floppy disk.  Over the last year it has been Stuxnet.

Outsiders, and even insiders, think that we are only one exploit/worm/virus away from total destruction. However, any single rational-actor adversary with a capability, even an advanced and dangerous capability, is relatively limited in their damage potential.

The biggest cyber threat is not any one particular capability or vulnerability, but rather that we will die a death by a thousand cuts.  The biggest threat to the global network is the proliferation of offensive cyber tradecraft in the hands of many capable actors.

The U.S. General Accounting Office put the total damages of the Morris Worm at $100K – $10M.  This is small compared to the estimated $5.5B in worldwide damages caused by the ILOVEYOU worm in 2000.  Yet, the tradecraft of self-replicating computer code began with the Morris Worm and proliferated into the ILOVEYOU worm 12 years later.

The danger with Stuxnet is not the worm itself; it is that others will learn tradecraft from Stuxnet, such as more advanced malware droppers, the targeting of industrial control systems (e.g. SCADA), and better obfuscation techniques.  In total, Stuxnet will make networks harder to protect for years to come, and in the meantime Stuxnet itself will be a museum display.

Hacker Motivations or Hackers Need To Eat Too

New research appears to raise questions over the conventional wisdom that pure nation-state cyberspies rarely, if ever, dabble in traditional financial cybercrime.  –  “Cybercriminal By Day, Cyber Spy By Night?” in Dark Reading on 1 March 2012

Dark Reading (@darkreading) wrote from the RSA 2012 conference of an intriguing analytic correlation made by the Dell SecureWorks Counter Threat Unit between the RSA attackers and cyber financial crimes.

The article is interesting in two ways.  First, it showcases some good analytic tradecraft correlating seemingly independent activities through adversary personas and infrastructure (in this case domain name registration).  Second, it asks the question: can a hacker be both a spy and cyber criminal?
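
The correlation tradecraft the article describes can be sketched in a few lines: pivot on a shared piece of registration data to link otherwise unrelated infrastructure to one persona.  The records below are entirely invented for illustration – this is the shape of the technique, not the SecureWorks analysis itself:

```python
# Pivoting on shared WHOIS registrant data to link separate campaigns.
# All domains and email addresses below are fabricated examples.
from collections import defaultdict

whois_records = [
    {"domain": "espionage-c2.example", "registrant": "persona1@mail.example"},
    {"domain": "bank-phish.example",   "registrant": "persona1@mail.example"},
    {"domain": "unrelated.example",    "registrant": "someone-else@mail.example"},
]

by_registrant = defaultdict(list)
for record in whois_records:
    by_registrant[record["registrant"]].append(record["domain"])

# Registrants tied to more than one domain are candidate pivots linking
# seemingly independent activity (here, espionage and financial crime)
# back to a single persona.
overlaps = {who: doms for who, doms in by_registrant.items() if len(doms) > 1}
```

The analytic value is in the pivot itself: once two campaigns share a registrant (or name server, or email pattern), the analyst has a hypothesis worth testing rather than two unconnected incidents.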

The fact that an adversary might use their skills for two purposes supposedly challenges “conventional wisdom.”  Normally, intrusion analysts work toward identifying the motivation of the hacker/attacker to gauge the best response (hopefully) and to find potential clues to attribution.  There are many “conventional” terms we use to describe “hacker motivations”: script kiddies, espionage, hacktivism, black/white hat, etc. (see McAfee’s 7 Types of Hacker Motivations).

However, we often look too much toward our technical understanding and fail to acknowledge basic human motivations: safety, physiological needs (water, shelter, food, etc.), love, esteem, and self-actualization [see “A Theory of Human Motivation” by Abraham Maslow or a summary of Motivations on Wikipedia].

Hackers, as all humans, are not above the basic motivations, which include greed.  This suggests a very simple hypothesis for why a cyber espionage actor would turn to cyber crime: financial gain.  Maybe they were not being paid enough in their espionage job and “moonlight” as cyber criminals, or they were simply contractors to multiple customers (a state vs. a criminal organization).  Money is a highly motivating factor.

I use the case of the “Wily Hacker” by Cliff Stoll (on the Reading List) while teaching to highlight that a hacker working day-in-and-day-out needs to eat, live, and provide for the most basic human motivations.  Therefore, it is perfectly reasonable to ask: if they are hacking all day/every day, how are they providing for these motivations?  Is somebody paying them to hack?  Are they living in their parents’ basement?  Do they have a trust fund?  All of these are perfectly reasonable hypotheses with varying degrees of likelihood.  But they all lead to other questions of attribution and higher motivation.

If, in fact, “conventional wisdom” is that espionage actors are not motivated by money to use their skills in other endeavors, an even more fundamental understanding of human motivation contradicts that wisdom.  “Conventional wisdom” is simply another term for analytic assumption, and this again highlights that analytic assumptions easily cloud judgment.
