Indicators for research and response; analytics for detection

Indicators of Compromise (IOCs) are the lingua franca of threat intelligence.  Almost every intel-sharing conversation begins and ends with indicators; commercial intelligence platforms revolve around them; most intelligence consumers end their interest there.  Does a better way exist?  Security analytics!

The Problem with Indicators in Detection

For all the focus given to indicators, we know they have the shortest lifespan of all intelligence exhaust (see David J. Bianco's Pyramid of Pain).  In many cases we see single-use or victim-specific indicators, which makes sharing them useless.  In general, adversaries tend towards shortening indicator lifespans – or removing indicators altogether; Locky, for instance, recently transitioned to hardcoded RSA keys, eliminating the need to contact a command and control (C2) server and the exposure that connection created.

Broad-based indicator sharing is fraught with problems.  First, it assumes that the same indicators will be leveraged against multiple victims.  This is certainly the case for some threats, but not all.  Second, quality will likely be a problem.  For instance, DHS Automated Indicator Sharing (AIS) states:

Indicators are not validated by DHS as the emphasis is on velocity and volume: our partners tell us they will vet the indicators they receive through AIS, so the Department’s goal is to share as many indicators as possible as quickly as possible. However, when the government has useful information about an indicator, we will assign a reputation score.   – DHS Automated Indicator Sharing

Further, AIS contributors can choose to remain anonymous.  Think about the problems of blindly consuming thousands of non-validated, anonymously sourced indicators.  How exactly do you effectively validate an anonymously contributed indicator?  Previously, I wrote on the cost of poor intelligence.  Just one anonymously contributed instance of 8.8.8.8 (Google's public DNS resolver, which nearly every network touches) could cause massive issues.
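To illustrate why blind consumption is dangerous, here is a minimal sketch of the kind of sanity filter a consumer would need to put in front of an external feed before it ever reaches detection.  The file name, feed format, and benign list are assumptions for the example, not part of AIS.

```python
# Minimal sketch: sanity-check externally shared indicators before they reach detection.
# The feed file, its format, and the benign list are illustrative assumptions.

import ipaddress

# Known-benign infrastructure you never want alerting on (e.g., public DNS resolvers).
KNOWN_BENIGN = {"8.8.8.8", "8.8.4.4", "1.1.1.1"}

def load_shared_indicators(path):
    """Read one indicator per line from an externally shared feed export."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def is_plausible_ioc(indicator):
    """Reject obviously problematic entries before they become detection content."""
    if indicator in KNOWN_BENIGN:
        return False
    try:
        addr = ipaddress.ip_address(indicator)
    except ValueError:
        return True  # not an IP (domain, hash, etc.) -- needs its own vetting
    # Private, loopback, and multicast space should never arrive as external IOCs.
    return not (addr.is_private or addr.is_loopback or addr.is_multicast)

if __name__ == "__main__":
    shared = load_shared_indicators("ais_feed.txt")  # hypothetical feed export
    vetted = [i for i in shared if is_plausible_ioc(i)]
    print(f"kept {len(vetted)} of {len(shared)} shared indicators")
```

Even this crude filter takes effort to maintain, and it only catches the obvious cases – which is exactly the cost unvetted, anonymous sharing pushes onto every consumer.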

Indicators of Compromise are only threat exhaust – the necessary by-product of malicious activity.  Short-lived and increasingly single-use, indicators pose a poor basis for detection – and it's getting worse.  I'm not advocating for throwing indicators out entirely – they serve their purpose, but they should not form the entire basis of threat intelligence detection.

Analytics For Detection

As the Pyramid of Pain suggests, we must move towards behavior-based detection that covers whole classes of threats.  I'd much rather rely on an analytic that detects the Windows registry modification behind a "sticky keys" attack than hope someone shares the IP address of a random hop point previously used to Remote Desktop (RDP) into a host.  With the analytic I catch every adversary using sticky keys; with the indicator I catch only one adversary – and only if they reuse the same infrastructure.
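To make that concrete, here is a minimal sketch of what such an analytic could look like: a point-in-time check of the well-known Image File Execution Options debugger keys that sticky-keys-style attacks abuse.  A production analytic would watch registry-modification telemetry rather than polling, and the list of monitored binaries here is illustrative.

```python
# Minimal sketch of a behavioral "sticky keys" analytic: flag any accessibility binary
# whose Image File Execution Options key has a Debugger value set.
# Windows-only (uses the standard-library winreg module).

import winreg

IFEO_PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"
ACCESSIBILITY_BINARIES = ["sethc.exe", "utilman.exe", "osk.exe", "magnify.exe", "narrator.exe"]

def find_hijacked_accessibility_binaries():
    """Return (binary, debugger) pairs where a debugger has been registered."""
    findings = []
    for binary in ACCESSIBILITY_BINARIES:
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, f"{IFEO_PATH}\\{binary}")
        except FileNotFoundError:
            continue  # no IFEO entry at all -- nothing to flag
        try:
            debugger, _ = winreg.QueryValueEx(key, "Debugger")
            findings.append((binary, debugger))  # any debugger here is suspicious
        except FileNotFoundError:
            pass  # key exists but no Debugger value
        finally:
            winreg.CloseKey(key)
    return findings

if __name__ == "__main__":
    for binary, debugger in find_hijacked_accessibility_binaries():
        print(f"ALERT: {binary} launches {debugger} via IFEO debugger hijack")
```

Note that nothing in this check depends on who the adversary is or what infrastructure they use – that is the point of a behavioral analytic.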

Where do you find analytics?

  • The best place is your red team – ask them to describe their techniques and procedures.  Read their reports!  (I know – a stretch for some)
  • Read threat intelligence reports on adversary behaviors.
  • Ask your threat intelligence provider!  (Who you already abuse with information requests anyways – right?)
  • Check out MITRE’s Cyber Analytics Repository.

The Place for Indicators – Research and Response

Indicator sharing works best within a small group of organizations that share a “victim space” (the Diamond Model’s term for victims facing the same threats).  Sharing within that space greatly increases the value of indicators because the likelihood of attackers reusing them rises.  Outside the shared victim space, however, indicator sharing loses value and gains cost.  Research and response receive the greatest value from shared indicators: they provide a way to communicate observables discovered in attacks, letting analysts pivot deeper into malicious activity seen by others.
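In practice, the response-side value usually looks like a sweep-and-pivot: take a handful of vetted indicators from a trusted partner and search your own telemetry for hits.  The sketch below assumes a hypothetical proxy-log CSV (timestamp, source host, destination) and uses illustrative indicator values drawn from documentation address space.

```python
# Minimal sketch of the response-side use of shared indicators: sweep your own
# telemetry for hits, then pivot on what you find. The log path, its CSV columns,
# and the indicator values are illustrative assumptions.

import csv

SHARED_INDICATORS = {"203.0.113.10", "evil-staging.example.net"}  # from a trusted partner

def sweep_proxy_logs(path, indicators):
    """Yield proxy-log rows whose destination matches a shared indicator."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination"] in indicators:
                yield row

if __name__ == "__main__":
    hits = list(sweep_proxy_logs("proxy_log.csv", SHARED_INDICATORS))
    # Pivot: which internal hosts touched the indicator, and when? Those hosts
    # (and anything else they talked to around that time) become the next leads.
    for row in hits:
        print(row["timestamp"], row["source_host"], "->", row["destination"])
```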

Your Own Intelligence is the Best

In the end, to achieve greater detection capability organizations must invest in security analytics and reduce their reliance (and insistence) on indicators from external sources.  The best indicators in the world are those drawn from your organization's own telemetry – your own threat intelligence is the most relevant.  Otherwise, look suspiciously at indicators from others and instead ask to share analytics!

Note: "Security analytics" is a dirty term – overused and often misused.  To be clear, I define analytics in this post as indicator-independent behavioral detection derived from knowledge of bad stuff (i.e., threat intelligence).