Always A Bad Day For Adversaries

The Cost of Bad Threat Intelligence

There is no doubt that threat intelligence is now “a thing.” At RSA 2015 I couldn’t help but notice how many vendor booths were hawking their relevance to threat intelligence.  I hear about a threat intelligence start-up almost weekly.  That is not surprising given venture capital is flowing and C-suite customers are now investing in “threat intelligence.”  Everyone wants a piece of the pie.

While market growth for threat intelligence produces innovations, it also produces negative by-products (welcome to capitalism).  The most concerning by-product is a reduction in threat intelligence quality.

A growing number of published threat intelligence reports contain inaccuracies and poor analysis.  A growing number of indicators across a variety of producers are either stale, irrelevant, or generate so many false positives as to be useless.

What so many fail to realize is the cost of poor quality intelligence.  Here are some of the costs:

  • If a single threat intelligence-sourced alert generates $1000 worth of time to investigate a false positive, it is easy to see how that relatively small amount can multiply within an organization and across enterprises worldwide (see the sketch after this list).
  • If an intelligence producer incorrectly categorizes a threat as APT (say, instead of cyber crime), an organization’s security response to the threat will be (and should be) different, likely involving a deeper investigation.  Again, this additional, and likely unnecessarily deep, investigation is costly in both time and resources.
  • Every poor quality report costs time to read and digest.  Time that could be spent understanding a high-quality report.
  • Every poor association or correlation derails an analytic effort at an organization.
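
To make the first bullet concrete, here is a back-of-the-envelope sketch in Python.  The alert volume, false positive rate, and number of affected organizations are hypothetical assumptions chosen only to show how a $1000 triage cost compounds; they are not measured figures.

```python
# Back-of-the-envelope model of the cost of false positives from low-quality intelligence.
# Every figure below is a hypothetical assumption for illustration, not measured data.

COST_PER_FALSE_POSITIVE = 1000   # analyst time to triage one bad alert (USD)
ALERTS_PER_DAY = 50              # alerts generated from an intelligence feed
FALSE_POSITIVE_RATE = 0.20       # fraction of those alerts that are false positives
ORGANIZATIONS = 200              # enterprises consuming the same feed

daily_cost_per_org = ALERTS_PER_DAY * FALSE_POSITIVE_RATE * COST_PER_FALSE_POSITIVE
annual_cost_across_orgs = daily_cost_per_org * 365 * ORGANIZATIONS

print(f"Daily false-positive cost per organization: ${daily_cost_per_org:,.0f}")
print(f"Annual cost across {ORGANIZATIONS} organizations: ${annual_cost_across_orgs:,.0f}")
```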

Because organizational security resources are finite and already stretched thin, these mistakes, errors, and poor practices consume critical resources that could be spent on other problems and reduce the security of an organization.

Two market elements have caused this quality reduction:

  • A need to garner attention in the growing cacophony of the threat intelligence market, feeding a “first to publish” mentality that usually results in a “rush to publish.”
  • A lack of customer education, resulting in poor evaluation of providers and thereby incentivizing the wrong aspects of threat intelligence, such as volume of indicators over their quality or relevance.

Obviously, only threat intelligence providers can solve the problem, but what pressures can help drive effective change?  Here are some:

  • Threat intelligence customers armed with evaluation criteria (particularly quality metrics) that help them leverage threat intelligence effectively without generating unnecessary costs – this will help create market drivers for higher quality.
  • Industry must self-police bad intelligence by being honest with ourselves and each other.
  • Threat intelligence aggregation platforms should have quality assessment capabilities that inform the intelligence consumer of potential problems (likewise, they are also in a position to highlight timely, relevant, and unique intelligence of great value) – see the scoring sketch after this list.
  • Threat intelligence analysts trained in analytic tradecraft stressing quality and accepting an ethical duty
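
As one illustration of what a quality assessment capability in an aggregation platform might look like, below is a minimal Python sketch that scores indicators on freshness, source accuracy, and uniqueness.  The fields, weights, and decay window are assumptions for illustration, not a reference implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Indicator:
    value: str              # e.g. an IP address or domain
    first_seen: datetime    # when the producer first reported it
    source_fp_rate: float   # historical false-positive rate of the producing source (0-1)
    duplicate_sources: int  # how many other feeds already carry this indicator

def quality_score(ind: Indicator, now: datetime) -> float:
    """Score 0-1: penalize stale, noisy, and redundant indicators."""
    age_days = (now - ind.first_seen).days
    freshness = max(0.0, 1.0 - age_days / 180)        # assume value decays over ~6 months
    accuracy = 1.0 - ind.source_fp_rate               # noisy sources drag the score down
    uniqueness = 1.0 / (1 + ind.duplicate_sources)    # reward intelligence no one else has
    return round(0.4 * freshness + 0.4 * accuracy + 0.2 * uniqueness, 2)

now = datetime.now(timezone.utc)
stale = Indicator("203.0.113.7", datetime(2024, 1, 1, tzinfo=timezone.utc), 0.6, 5)
fresh = Indicator("198.51.100.2", now, 0.05, 0)
print(quality_score(stale, now), quality_score(fresh, now))
```

Even a crude score like this lets a consumer de-prioritize stale or noisy indicators before they generate costly false positives, and surfaces the timely, relevant, and unique intelligence worth acting on.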

Security professionals practicing threat intelligence must understand the implications of mistakes and poor analysis.  Bad intelligence can and does decrease the security effectiveness of an organization.  Therefore, it is an ethical duty of the threat intelligence practitioner to reduce errors.  Threat intelligence is difficult – intelligence by definition attempts to illuminate the unknown and works by making judgments with imperfect data – so errors are natural to the domain.  But, with proper practices and procedures, bad intelligence can, and must, be minimized.

Beware Occam’s Razor

Occam’s Razor: A principle that generally recommends selecting, from among competing hypotheses, the one that makes the fewest new assumptions, and that holds the simplest explanation to be the most plausible until evidence is presented to prove it false.  [Wikipedia]

Intrusion analysts face a unique problem: discovering and tracking adversaries who do not want to be discovered or tracked.  These adversaries will take significant action to prevent their operations from being discovered [see Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection for examples].  Metasploit, a common exploitation framework, works hard to prevent its tools and modules from being detected and signatured, including by creating a Secure Sockets Layer (SSL) encrypted command-and-control channel.

In this environment, the cyber adversary is using Occam’s Razor against the analyst to elude detection.  Given a set of network traffic or events, an adversary wants any unintended observer to believe their traffic is normal and benign.  An adversary practices Denial and Deception in every packet.

Therefore, if an intrusion analyst relies too heavily on Occam’s Razor and describes a network event as benign while a clever adversary is hiding their activity in that traffic, the adversary wins.

On the other hand, if an analyst does not employ Occam’s Razor effectively in their work, every packet will look suspicious, wasting precious time on unimportant events.

Richards Heuer’s preeminent work, Psychology of Intelligence Analysis, describes a very effective framework, the Analysis of Competing Hypotheses, for deciding which possible conclusion is the best given a set of facts and assumptions.  However, a common attack on Heuer’s framework is that an adversary utilizing Denial and Deception can easily defeat the analyst and have them select the conclusion the adversary wishes rather than the one best describing the activity.
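
As a rough illustration of how the Analysis of Competing Hypotheses weighs evidence, here is a minimal Python sketch.  The hypotheses, evidence items, and consistency ratings are entirely hypothetical; the sketch only shows the mechanic of favoring the hypothesis with the least inconsistent evidence.

```python
# Minimal Analysis of Competing Hypotheses (ACH) sketch.
# Evidence ratings are hypothetical: "C" = consistent, "I" = inconsistent, "N" = neutral.

hypotheses = ["Benign admin activity", "Cyber crime", "Targeted intrusion (APT)"]

# Each row: (evidence description, rating against each hypothesis, in order)
evidence = [
    ("SSL C2-like beaconing every 60s",         ["I", "C", "C"]),
    ("Traffic only during business hours",      ["C", "N", "C"]),
    ("Destination is a known bulletproof host", ["I", "C", "N"]),
]

# ACH weighs inconsistencies: the hypothesis with the fewest "I" ratings survives.
inconsistency = {h: 0 for h in hypotheses}
for _, ratings in evidence:
    for h, r in zip(hypotheses, ratings):
        if r == "I":
            inconsistency[h] += 1

for h, score in sorted(inconsistency.items(), key=lambda kv: kv[1]):
    print(f"{h}: {score} inconsistent item(s)")
```

The criticism above applies directly to this mechanic: an adversary practicing Denial and Deception is trying to shape those ratings so the analyst favors the conclusion the adversary wants.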

In the end, an intrusion analyst must use both a questioning and suspicious mind with a modicum of Occam’s Razor.
