Always A Bad Day For Adversaries


Indicators and Security Analytics: Their Place in Detection and Response

Indicators for research and response; analytics for detection

Indicators of Compromise (IOCs) are the lingua franca of threat intelligence. Almost every intel-sharing conversation begins and ends with indicators; commercial intelligence platforms revolve around them; most intelligence consumers end their interest there. Does a better way exist? Security analytics!

The Problem with Indicators in Detection

For all the focus given to indicators, we know that they have the shortest lifespan of all intelligence exhaust (see the Pyramid of Pain by David J. Bianco). In many cases we see single-use or victim-specific indicators, which makes sharing them useless. In general, adversaries tend toward shortening indicator lifespans – or removing indicators entirely; for instance, Locky recently transitioned to hardcoded RSA keys to remove the vulnerability of having to connect to a command and control (C2) server.

Broad-based indicator sharing is fraught with problems. First, it assumes that the same indicators will be leveraged against multiple victims. This is certainly the case for some threats – but not all. Second, quality will likely be a problem. For instance, DHS Automated Indicator Sharing (AIS) states:

Indicators are not validated by DHS as the emphasis is on velocity and volume: our partners tell us they will vet the indicators they receive through AIS, so the Department’s goal is to share as many indicators as possible as quickly as possible. However, when the government has useful information about an indicator, we will assign a reputation score. – DHS Automated Indicator Sharing

Further, AIS contributors can choose to remain anonymous. Think about the problems of blindly consuming thousands of non-validated, anonymously sourced indicators. How exactly do you effectively validate an anonymously contributed indicator? Previously, I wrote on the cost of poor intelligence. Just one anonymously contributed instance of 8.8.8.8 – Google’s public DNS resolver, which countless legitimate hosts query constantly – could cause massive issues.
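As a hedged sketch of the minimum sanity-checking such a feed demands – not anything AIS actually provides – the snippet below rejects indicators that match well-known benign infrastructure or non-routable space before ingestion. The benign list and the worth_ingesting helper are illustrative inventions.

```python
# Hypothetical pre-ingestion filter for shared indicators: drop entries
# that match well-known benign infrastructure or non-routable space.
import ipaddress

# Illustrative benign-infrastructure list; a real one would be far larger.
BENIGN_INFRASTRUCTURE = {
    ipaddress.ip_address("8.8.8.8"),  # Google public DNS
    ipaddress.ip_address("1.1.1.1"),  # Cloudflare public DNS
}

def worth_ingesting(raw_indicator: str) -> bool:
    """Return False for indicators that would obviously cause false positives."""
    try:
        addr = ipaddress.ip_address(raw_indicator)
    except ValueError:
        return True  # not an IP address; other validation would apply
    # Reject benign infrastructure and private/reserved space outright.
    return addr not in BENIGN_INFRASTRUCTURE and addr.is_global

print(worth_ingesting("8.8.8.8"))  # False: an anonymous report of this IP is noise
```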

Indicators of Compromise are only threat exhaust – the necessary by-product of malicious activity. Short-lived and increasingly single-use, indicators make a poor basis for detection – and it’s getting worse. I’m not advocating throwing indicators out entirely – they serve their purpose – but they should not form the entire basis of threat-intelligence detection.

Analytics For Detection

As the Pyramid of Pain suggests, we must move toward behavior-based detection that covers whole classes of threats. I’d much rather rely on an analytic detecting the Windows registry modification behind a “sticky keys” attack than hope someone shares the IP address of a random hop point previously used to reach a host over Remote Desktop Protocol (RDP). With the analytic I catch every adversary using sticky keys; with the indicator I catch only one adversary – and only if they reuse the same infrastructure.
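As a concrete (and hedged) illustration of such an analytic, the sketch below polls the Image File Execution Options (IFEO) registry keys for Debugger values attached to Windows accessibility binaries – the registry modification commonly used to plant a sticky-keys backdoor. It is a stand-alone checker for illustration only; a production analytic would consume registry-modification telemetry from endpoints rather than poll.

```python
# Hypothetical point-in-time checker for the "sticky keys" technique:
# flag any Debugger value set under Image File Execution Options (IFEO)
# for Windows accessibility binaries. Windows-only.
import winreg

ACCESSIBILITY_BINARIES = ["sethc.exe", "utilman.exe", "osk.exe"]
IFEO = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"

def find_ifeo_debuggers():
    """Yield (binary, debugger) pairs where an IFEO Debugger value is set."""
    for binary in ACCESSIBILITY_BINARIES:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IFEO + "\\" + binary) as key:
                debugger, _ = winreg.QueryValueEx(key, "Debugger")
        except FileNotFoundError:
            continue  # no IFEO entry or no Debugger value: nothing suspicious here
        yield binary, debugger  # any debugger on an accessibility binary warrants an alert

for binary, debugger in find_ifeo_debuggers():
    print(f"ALERT: {binary} launches {debugger!r} instead of the real binary")
```

Note that nothing in this analytic depends on any particular adversary’s infrastructure – it fires on the behavior itself, whoever performs it.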

Where do you find analytics?

  • The best place is your red team – ask them to describe their techniques and procedures.  Read their reports!  (I know – a stretch for some)
  • Read threat intelligence reports on adversary behaviors.
  • Ask your threat intelligence provider!  (Who you already abuse with information requests anyways – right?)
  • Check out MITRE’s Cyber Analytics Repository.

The Place for Indicators – Research and Response

Indicator sharing works within a small group of organizations that share a “victim space” (the Diamond Model’s term for victims with shared threats). This greatly increases the value of shared indicators because the likelihood of attackers reusing indicators increases. However, sharing indicators outside that shared victim space reduces their value and increases their cost. Research and response receive the greatest value from shared indicators because they provide a way to communicate observables discovered in attacks, letting analysts pivot deeper into malicious activity seen by others.

Your Own Intelligence is the Best

In the end, to achieve greater detection capability, organizations must invest in security analytics and reduce their reliance (and insistence) on indicators from external sources. The best indicators in the world are those from your organization’s own telemetry – your own threat intelligence is the most relevant. Otherwise, look suspiciously at indicators from others and instead ask to share analytics!

Note: “Security analytics” is a dirty phrase – overused and often misused. To be clear, in this post I define analytics as indicator-independent behavioral detection derived from knowledge of malicious activity (i.e., threat intelligence).

Security Must Not Forget the User

[Image: Hotel Internet network intrusion detection system at work – the warning message I received]

I received this message from my hotel Internet provider, which limited my access to 56 Kbps for 10 minutes due to some unknown intrusion-detection signature or heuristic.

I was both impressed that a hotel would have such a device in-line to protect the general Internet from aggressive and potentially damaging users, and angered by the punitive action taken against my innocuous activity.

Like a well-trained security professional, I immediately took action to mitigate any damage to my system from unwanted malware. I ran my security tools (anti-virus, software updates, spyware/adware removal, etc.). After those did not find anything, I assumed a false positive, sucked up the slow Internet, went and read a book, and then returned to my ‘blazing fast’ 2 Mbps Internet access.

Not an hour later, I again received the message and the punitive action. This time I ran Wireshark, sniffed all the traffic to and from my laptop for the next hour, and analyzed the output. I found nothing of interest. Having applied my years of security knowledge and forensic ability and found nothing suspicious on my laptop, I was now convinced this was a false positive. (I won’t assume there was truly nothing – I can only investigate as far as my tools allow.)
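For the curious, the gist of that hour of triage looks something like the sketch below – here scripted with scapy instead of Wireshark’s GUI, counting which remote endpoints my laptop talked to. Capture requires elevated privileges, and the one-hour timeout mirrors the story.

```python
# A rough approximation of that triage, using scapy: capture for an hour
# and summarize the remote IPs my laptop talked to, looking for anything
# unexpected. Requires `pip install scapy` and capture privileges.
from collections import Counter
from scapy.all import IP, sniff

talkers = Counter()

def tally(pkt):
    if IP in pkt:
        talkers[pkt[IP].dst] += 1  # count packets per destination address

# Capture everything for one hour, then print the top 20 destinations.
sniff(prn=tally, store=False, timeout=3600)
for dst, count in talkers.most_common(20):
    print(f"{dst:15s} {count}")
```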

Now I was just upset. We in security like to think of ourselves as more knowledgeable than the average user about threats and mitigations. We can find threats they cannot, and we can mitigate those threats for them without their knowledge, protecting them on the front lines (e.g., at the gateway or ISP).

However, we must also remember that computing systems exist for users – their entire purpose is to provide a benefit to human users. Therefore, security must always take the user into account and include them whenever possible.

Security must begin and end with the user. First, this means that when security is envisioned, it must account for the purpose of the system and the needs of its users: if security makes a system unusable, there is no purpose in the system existing at all – and our own existence as security professionals comes into question. Second, it means that users must be included in the security cycle whenever possible.

We must help users help themselves! Messages such as the one above (e.g., “There is a problem with your system”) do no good. They do not help solve any problem. They actually make the problem worse, because now the user must spend time trying to fix a problem that may or may not exist. (For more on writing effective error messages: http://www.frontend.com/design/effective-error-messages.html)

Second, such a message does not inform, increase the knowledge of, or educate the user in any way. This message did not identify the specific detection (e.g., the signature or heuristic that fired), suggest an effective mitigation, or indicate the severity of the threat.

We in the security community need to better incorporate human-factors and user-interface knowledge into security, and integrate the user from the beginning to the end of our security engineering.


Why Malware Numbers Don’t Matter and What it Means for Security Accounting

McAfee recently reported over 75 million new malware samples detected in 2011. This number, while shocking, no longer matters as an absolute value. It also highlights a glaring flaw in network defense philosophy.

First, this number is calculated solely from detected hashes, and any change in a binary results in a new, unique hash. This means only a small change by the adversary is necessary to create a “new” piece of malware. A simple thought experiment: if there were 75 million malware samples, each differing from the next by only one byte, this method would still count 75 million “unique” pieces of malware.
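A few lines of Python make the point concrete: flip one byte and the hash – the thing being counted – changes completely.

```python
# Demonstration of why hash counting inflates malware statistics: a
# one-byte change yields an entirely different hash, so a hash-based
# census counts near-identical binaries as distinct "unique" samples.
import hashlib

sample = bytearray(b"MZ\x90\x00 pretend this is a malware binary")
variant = bytearray(sample)
variant[10] ^= 0x01  # a single-byte tweak, e.g. from trivial repacking

print(hashlib.sha256(bytes(sample)).hexdigest())
print(hashlib.sha256(bytes(variant)).hexdigest())  # shares nothing with the above
```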

Second, the number alone says nothing about the threat environment. It does not illustrate the attack vectors, vulnerabilities, or exposures used by the malware; nor does it describe the danger or effectiveness of the various samples. Maybe there is only one piece of malware and its 75 million varieties are all harmless. 75 million is now a very large number signifying nothing.

However, the number does matter as a relative value showing the count of unique samples over time. For example, in 2007 unique malware samples rose 565% from the previous year [from A Brief History of Malware]. The velocity of unique malware samples detected in the wild (the slope of the line, if you prefer) is clearly increasing.

Why? Because malware authors and operators are exploiting the primary network-defense practice: default allow all – the blacklist. Defenders are still stuck in the “allow all” mindset, trusting everything except code that fails certain tests or exhibits certain behaviors. To exploit this mindset, an adversary only has to change their malware enough to bypass the filters (e.g., anti-virus). As defenders update their blacklists, anti-virus signatures, and firewalls, the malware authors make a small change or re-pack and re-deploy the malware, bypassing the new rules and filters.

For an adversary, changing a capability slightly and re-deploying it is a relatively inexpensive operation – particularly with pervasive exploit kits such as BlackHole. The cost for the defender to find the new malware, develop a signature, and deploy that signature is comparatively high, leaving the security accounting on the side of the adversary.

To win this battle, the defender must switch to a known-good model – “deny all with exceptions,” also known as the whitelist. However, as we have seen, this simply adds a new target for the adversary: the whitelist itself.
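For illustration, a minimal sketch of the known-good model – everything not on the list is denied. The list contents here are hypothetical placeholders.

```python
# Minimal sketch of the "deny all with exceptions" model: execution is
# permitted only for binaries whose hash appears on a known-good list.
import hashlib
from pathlib import Path

KNOWN_GOOD_SHA256 = {
    # sha256 hex digest -> description (hypothetical entries)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "empty file",
}

def allowed_to_run(path: Path) -> bool:
    """Default deny: anything not on the whitelist is blocked."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_GOOD_SHA256
```

Note that the integrity and maintenance of the known-good list itself now becomes the crown jewel – exactly the new adversary target described above.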
